Wang, Yan; Rodríguez de Gil, Patricia; Chen, Yi-Hsin; Kromrey, Jeffrey D.; Kim, Eun Sook; Pham, Thanh; Nguyen, Diep; Romano, Jeanine L.
2017-01-01
Various tests to check the homogeneity of variance assumption have been proposed in the literature, yet there is no consensus as to their robustness when the assumption of normality does not hold. This simulation study evaluated the performance of 14 tests for the homogeneity of variance assumption in one-way ANOVA models in terms of Type I error…
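As a concrete illustration of this kind of robustness simulation (our own minimal sketch, not the authors' design: the test, group sizes, skewed distribution and replication count are all assumed for the example), the empirical Type I error of a variance-homogeneity test under non-normal data can be estimated as follows:

```python
# Minimal sketch: empirical Type I error of Levene's test (median-centered,
# i.e. Brown-Forsythe) when group variances are equal but data are skewed.
# Group sizes, distribution, and replication count are illustrative choices.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n_reps, alpha = 2000, 0.05
group_sizes = (10, 20, 30)          # unbalanced one-way ANOVA design
rejections = 0

for _ in range(n_reps):
    # Equal variances by construction, but heavily skewed (chi-square, df=2).
    groups = [rng.chisquare(df=2, size=n) for n in group_sizes]
    _, p = stats.levene(*groups, center="median")
    rejections += (p < alpha)

print(f"Empirical Type I error: {rejections / n_reps:.3f} (nominal {alpha})")
```

An estimate far from the nominal 0.05 would indicate the kind of non-robustness the study quantifies across its 14 tests.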
Catalyst Deactivation: Control Relevance of Model Assumptions
Directory of Open Access Journals (Sweden)
Bernt Lie
2000-10-01
Two principles for describing catalyst deactivation are discussed, one based on the deactivation mechanism, the other based on the activity and catalyst age distribution. When the model is based upon activity decay, it is common to use a mean activity developed from the steady-state residence time distribution. We compare control-relevant properties of such an approach with those of a model based upon the deactivation mechanism. Using a continuous stirred tank reactor as an example, we show that the mechanistic approach and the population balance approach lead to identical models. However, common additional assumptions used for activity-based models lead to model properties that may deviate considerably from the correct one.
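To make the activity-based averaging concrete, here is a minimal sketch under assumed illustrative choices (first-order deactivation a(t) = exp(-kd*t) and an ideal CSTR age distribution); it only checks the textbook closed form for the steady-state mean activity and is not the control-relevant comparison carried out in the paper:

```python
# Minimal sketch (assumed first-order activity decay and an ideal CSTR catalyst
# age distribution E(t) = exp(-t/tau)/tau): the steady-state mean activity from
# the age/RTD average has the closed form 1/(1 + kd*tau).
# Values of kd and tau are illustrative.
import numpy as np

kd, tau = 0.5, 2.0                      # deactivation rate constant, mean residence time
t = np.linspace(0.0, 50.0 * tau, 200001)
activity = np.exp(-kd * t)              # activity of a catalyst particle of age t
age_dist = np.exp(-t / tau) / tau       # steady-state age distribution in the CSTR

mean_activity_numeric = np.trapz(activity * age_dist, t)
mean_activity_closed = 1.0 / (1.0 + kd * tau)
print(mean_activity_numeric, mean_activity_closed)   # both ~0.5
```

Using such a steady-state mean activity inside a dynamic model is exactly the additional assumption whose control relevance the paper questions.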
McCaig, Chris; Begon, Mike; Norman, Rachel; Shankland, Carron
2011-03-01
Changing scale, for example, the ability to move seamlessly from an individual-based model to a population-based model, is an important problem in many fields. In this paper, we introduce process algebra as a novel solution to this problem in the context of models of infectious disease spread. Process algebra, a technique from computer science, allows us to describe a system in terms of the stochastic behaviour of individuals. We review the use of process algebra in biological systems, and the variety of quantitative and qualitative analysis techniques available. The analysis illustrated here solves the changing-scale problem: from the individual behaviour we can rigorously derive equations to describe the mean behaviour of the system at the level of the population. The biological problem investigated is the transmission of infection, and how this relates to individual interactions.
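As a toy illustration of the individual-to-population scale change (our own sketch using a plain stochastic simulation rather than the paper's process-algebra derivation; all rates and the population size are assumed):

```python
# Minimal sketch: discrete-time, individual-level stochastic SIR versus the
# population-level mean-field equations.  Rates and population size are
# illustrative; the paper derives such mean equations via process algebra.
import numpy as np

rng = np.random.default_rng(1)
N, beta, gamma, dt, steps = 1000, 0.3, 0.1, 0.1, 1000
S, I = N - 10, 10

s, i = float(S), float(I)               # mean-field state (continuous)
for _ in range(steps):
    # Individual level: each susceptible/infected flips a biased coin.
    p_inf = 1.0 - np.exp(-beta * I / N * dt)
    p_rec = 1.0 - np.exp(-gamma * dt)
    new_inf = rng.binomial(S, p_inf)
    new_rec = rng.binomial(I, p_rec)
    S, I = S - new_inf, I + new_inf - new_rec

    # Population level: deterministic mean behaviour.
    ds = -beta * s * i / N * dt
    di = (beta * s * i / N - gamma * i) * dt
    s, i = s + ds, i + di

print("stochastic (S, I):", S, I, " mean-field (S, I):", round(s), round(i))
```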
Limiting assumptions in molecular modeling: electrostatics.
Marshall, Garland R
2013-02-01
Molecular mechanics attempts to represent intermolecular interactions in terms of classical physics. Initial efforts assumed a point charge located at the atom center and coulombic interactions. It has been recognized over multiple decades that simply representing electrostatics with a charge on each atom failed to reproduce the electrostatic potential surrounding a molecule as estimated by quantum mechanics. Molecular orbitals are not spherically symmetrical, an implicit assumption of monopole electrostatics. This perspective reviews recent evidence that requires use of multipole electrostatics and polarizability in molecular modeling.
Lightweight Graphical Models for Selectivity Estimation Without Independence Assumptions
DEFF Research Database (Denmark)
Tzoumas, Kostas; Deshpande, Amol; Jensen, Christian S.
2011-01-01
Selectivity estimation errors in today's optimizers are frequently caused by missed correlations between attributes. We present a selectivity estimation approach that does not make the independence assumptions. By carefully using concepts from the field of graphical models, we are able to factor the joint probability distribution of all...
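A tiny numerical illustration of why the independence assumption misestimates selectivities for correlated attributes (entirely made-up data, not from the paper):

```python
# Minimal sketch: selectivity of a conjunctive predicate estimated under the
# attribute-value-independence assumption versus the true joint frequency.
# The toy relation has strongly correlated columns (make/model of cars).
import numpy as np

rng = np.random.default_rng(2)
n = 100_000
make = rng.choice(["Honda", "Toyota"], size=n)
# 'model' depends on 'make', so the attributes are correlated.
model = np.where(make == "Honda",
                 rng.choice(["Civic", "Accord"], size=n),
                 rng.choice(["Corolla", "Camry"], size=n))

pred = (make == "Honda") & (model == "Civic")
true_sel = pred.mean()
indep_sel = (make == "Honda").mean() * (model == "Civic").mean()
print(f"true selectivity {true_sel:.3f} vs independence estimate {indep_sel:.3f}")
```

Multiplying per-attribute selectivities halves the estimate here; capturing the joint distribution, as the graphical-model approach does, removes that error.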
Simplified subsurface modelling: data assimilation and violated model assumptions
Erdal, Daniel; Lange, Natascha; Neuweiler, Insa
2017-04-01
Integrated models are gaining more and more attention in hydrological modelling as they can better represent the interaction between different compartments. Naturally, these models come along with larger numbers of unknowns and requirements on computational resources compared to stand-alone models. If large model domains are to be represented, e.g. on catchment scale, the resolution of the numerical grid needs to be reduced or the model itself needs to be simplified. Both approaches lead to a reduced ability to reproduce the present processes. This lack of model accuracy may be compensated by using data assimilation methods. In these methods observations are used to update the model states, and optionally model parameters as well, in order to reduce the model error induced by the imposed simplifications. What is unclear is whether these methods combined with strongly simplified models result in completely data-driven models or if they can even be used to make adequate predictions of the model state for times when no observations are available. In the current work we consider the combined groundwater and unsaturated zone, which can be modelled in a physically consistent way using 3D-models solving the Richards equation. For use in simple predictions, however, simpler approaches may be considered. The question investigated here is whether a simpler model, in which the groundwater is modelled as a horizontal 2D-model and the unsaturated zones as a few sparse 1D-columns, can be used within an Ensemble Kalman filter to give predictions of groundwater levels and unsaturated fluxes. This is tested under conditions where the feedback between the two model-compartments is large (e.g. shallow groundwater table) and the simplification assumptions are clearly violated. Such a case may be a steep hill-slope or pumping wells, creating lateral fluxes in the unsaturated zone, or strong heterogeneous structures creating unaccounted flows in both the saturated and unsaturated
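For readers unfamiliar with the assimilation step referred to above, a generic stochastic (perturbed-observation) ensemble Kalman filter update can be sketched as follows; the state vector, observation operator and error levels are illustrative placeholders, not the groundwater model of the study:

```python
# Minimal sketch of a stochastic ensemble Kalman filter analysis step.
# x_ens: model states (n_state x n_ens), y: observations, H: observation operator.
# All dimensions and error levels are illustrative.
import numpy as np

def enkf_update(x_ens, y, H, obs_std, rng):
    n_ens = x_ens.shape[1]
    y_pert = y[:, None] + rng.normal(0.0, obs_std, size=(len(y), n_ens))
    X = x_ens - x_ens.mean(axis=1, keepdims=True)      # state anomalies
    Y = H @ X                                           # predicted-observation anomalies
    P_yy = Y @ Y.T / (n_ens - 1) + obs_std**2 * np.eye(len(y))
    P_xy = X @ Y.T / (n_ens - 1)
    K = P_xy @ np.linalg.inv(P_yy)                      # Kalman gain
    return x_ens + K @ (y_pert - H @ x_ens)

rng = np.random.default_rng(3)
x_ens = rng.normal(10.0, 1.0, size=(5, 50))   # e.g. 5 groundwater heads, 50 members
H = np.eye(2, 5)                              # observe the first two heads
y = np.array([9.2, 10.5])
x_ens = enkf_update(x_ens, y, H, obs_std=0.1, rng=rng)
print(x_ens.mean(axis=1))
```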
The sufficiency assumption of the reasoned approach to action
Directory of Open Access Journals (Sweden)
David Trafimow
2015-12-01
The reasoned action approach to understanding and predicting behavior includes the sufficiency assumption. Although variables not included in the theory may influence behavior, these variables work through the variables in the theory. Once the reasoned action variables are included in an analysis, the inclusion of other variables will not increase the variance accounted for in behavioral intentions or behavior. Reasoned action researchers are very concerned with testing whether new variables account for variance (or how much variance traditional variables account for), to see whether they are important, in general or with respect to specific behaviors under investigation. But this approach tacitly assumes that accounting for variance is highly relevant to understanding the production of variance, which is what is really at issue. Based on the variance law, I question this assumption.
Evaluating risk factor assumptions: a simulation-based approach
Directory of Open Access Journals (Sweden)
Miglioretti, Diana L.
2011-09-01
Background Microsimulation models are an important tool for estimating the comparative effectiveness of interventions through prediction of individual-level disease outcomes for a hypothetical population. To estimate the effectiveness of interventions targeted toward high risk groups, the mechanism by which risk factors influence the natural history of disease must be specified. We propose a method for evaluating these risk factor assumptions as part of model-building. Methods We used simulation studies to examine the impact of risk factor assumptions on the relative rate (RR) of colorectal cancer (CRC) incidence and mortality for a cohort with a risk factor compared to a cohort without the risk factor, using an extension of the CRC-SPIN model for colorectal cancer. We also compared the impact of changing age at initiation of screening colonoscopy for different risk mechanisms. Results Across CRC-specific risk factor mechanisms, the RR of CRC incidence and mortality decreased (towards one) with increasing age. The rate of change in RRs across age groups depended on both the risk factor mechanism and the strength of the risk factor effect. Increased non-CRC mortality attenuated the effect of CRC-specific risk factors on the RR of CRC when both were present. For each risk factor mechanism, earlier initiation of screening resulted in more life years gained, though the magnitude of life years gained varied across risk mechanisms. Conclusions Simulation studies can provide insight into both the effect of risk factor assumptions on model predictions and the type of data needed to calibrate risk factor models.
Contemporary assumptions on human nature and work and approach to human potential managing
Directory of Open Access Journals (Sweden)
Vujić Dobrila
2006-01-01
A general problem of this research is to identify whether there is a relationship between assumptions on human nature and work (McGregor, Argyris, Schein, Steers and Porter) and the preference for a general organizational model, as well as for mechanisms of human resource management. This research was carried out in 2005/2006. The sample consisted of 317 subjects (197 managers, 105 highly educated subordinates and 15 entrepreneurs) in 7 big enterprises and a group of small business enterprises differing in ownership structure and type of activity. A general hypothesis that assumptions on human nature and work are significantly related to the preferred approach (model) to work motivation and commitment has been confirmed. Specific hypotheses have also been confirmed: · The assumptions on a human as a rational economic being are significantly correlated with only two mechanisms of traditional models, the mechanism of work-method control and the working-discipline mechanism. · The assumptions on a human as a social being are significantly correlated with all mechanisms of engaging employees belonging to the human relations model, except the mechanism of introducing an adequate type of reward for all employees independently of working results. · The assumptions on a human as a creative being are significantly and positively correlated with the preference for two mechanisms belonging to the human resource model: investing in education and training, and creating conditions for the application of knowledge and skills. Young subjects with assumptions on a human as a creative being prefer a much broader repertoire of mechanisms belonging to the human resources model than the remaining categories of subjects in the sample. The connection between assumptions on human nature and preferred models of engagement appears especially in the sub-sample of managers, in the category of young subjects
Models for waste life cycle assessment: Review of technical assumptions
DEFF Research Database (Denmark)
Gentil, Emmanuel; Damgaard, Anders; Hauschild, Michael Zwicky
2010-01-01
A number of waste life cycle assessment (LCA) models have been gradually developed since the early 1990s, in a number of countries, usually independently from each other. Large discrepancies in results have been observed among different waste LCA models, although it has also been shown that results......, such as the functional unit, system boundaries, waste composition and energy modelling. The modelling assumptions of waste management processes, ranging from collection, transportation, intermediate facilities, recycling, thermal treatment, biological treatment, and landfilling, are obviously critical when comparing...... waste LCA models. This review infers that some of the differences in waste LCA models are inherent to the time they were developed. It is expected that models developed later benefit from past modelling assumptions, knowledge and issues. Models developed in different countries furthermore rely...
The Impact of Modeling Assumptions in Galactic Chemical Evolution Models
Côté, Benoit; O'Shea, Brian W.; Ritter, Christian; Herwig, Falk; Venn, Kim A.
2017-02-01
We use the OMEGA galactic chemical evolution code to investigate how the assumptions used for the treatment of galactic inflows and outflows impact numerical predictions. The goal is to determine how our capacity to reproduce the chemical evolution trends of a galaxy is affected by the choice of implementation used to include those physical processes. In pursuit of this goal, we experiment with three different prescriptions for galactic inflows and outflows and use OMEGA within a Markov Chain Monte Carlo code to recover the set of input parameters that best reproduces the chemical evolution of nine elements in the dwarf spheroidal galaxy Sculptor. This provides a consistent framework for comparing the best-fit solutions generated by our different models. Despite their different degrees of intended physical realism, we found that all three prescriptions can reproduce in an almost identical way the stellar abundance trends observed in Sculptor. This result supports the similar conclusions originally claimed by Romano & Starkenburg for Sculptor. While the three models have the same capacity to fit the data, the best values recovered for the parameters controlling the number of SNe Ia and the strength of galactic outflows are substantially different and in fact mutually exclusive from one model to another. For the purpose of understanding how a galaxy evolves, we conclude that only reproducing the evolution of a limited number of elements is insufficient and can lead to misleading conclusions. More elements or additional constraints such as the Galaxy’s star-formation efficiency and the gas fraction are needed in order to break the degeneracy between the different modeling assumptions. Our results show that the successes and failures of chemical evolution models are predominantly driven by the input stellar yields, rather than by the complexity of the Galaxy model itself. Simple models such as OMEGA are therefore sufficient to test and validate stellar yields.
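The parameter-recovery machinery described above can be illustrated with a generic Metropolis sampler on a toy model (our own sketch; the two parameters merely stand in for quantities such as the SN Ia rate and outflow strength, and nothing here reproduces OMEGA or the Sculptor data):

```python
# Minimal sketch: Metropolis MCMC recovering parameters of a toy abundance
# model.  The 'model', its two parameters, the noise level and the proposal
# scale are all illustrative placeholders.
import numpy as np

rng = np.random.default_rng(4)

def toy_model(theta, x):
    a, b = theta
    return a * x + b * np.sqrt(x)       # placeholder for a chemical-evolution trend

x_obs = np.linspace(0.1, 1.0, 20)
theta_true = np.array([1.5, -0.5])
y_obs = toy_model(theta_true, x_obs) + rng.normal(0, 0.05, size=x_obs.size)

def log_like(theta):
    resid = y_obs - toy_model(theta, x_obs)
    return -0.5 * np.sum((resid / 0.05) ** 2)

theta, chain = np.array([0.0, 0.0]), []
for _ in range(20000):
    prop = theta + rng.normal(0, 0.05, size=2)
    if np.log(rng.random()) < log_like(prop) - log_like(theta):
        theta = prop
    chain.append(theta)
print("posterior mean:", np.mean(chain[5000:], axis=0), "true:", theta_true)
```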
Assumptions behind size-based ecosystem models are realistic
DEFF Research Database (Denmark)
Andersen, Ken Haste; Blanchard, Julia L.; Fulton, Elizabeth A.;
2016-01-01
A recent publication about balanced harvesting (Froese et al., ICES Journal of Marine Science; doi:10.1093/icesjms/fsv122) contains several erroneous statements about size-spectrum models. We refute the statements by showing that the assumptions pertaining to size-spectrum models discussed...... by Froese et al. are realistic and consistent. We further show that the assumption about density-dependence being described by a stock recruitment relationship is responsible for determining whether a peak in the cohort biomass of a population occurs late or early in life. Finally, we argue...
Unpacking Assumptions in Research Synthesis: A Critical Construct Synthesis Approach
Wolgemuth, Jennifer R.; Hicks, Tyler; Agosto, Vonzell
2017-01-01
Research syntheses in education, particularly meta-analyses and best-evidence syntheses, identify evidence-based practices by combining findings across studies whose constructs are similar enough to warrant comparison. Yet constructs come preloaded with social, historical, political, and cultural assumptions that anticipate how research problems…
Testing Modeling Assumptions in the West Africa Ebola Outbreak
Burghardt, Keith; Verzijl, Christopher; Huang, Junming; Ingram, Matthew; Song, Binyang; Hasne, Marie-Pierre
2016-01-01
The Ebola virus in West Africa has infected almost 30,000 and killed over 11,000 people. Recent models of Ebola Virus Disease (EVD) have often made assumptions about how the disease spreads, such as uniform transmissibility and homogeneous mixing within a population. In this paper, we test whether these assumptions are necessarily correct, and offer simple solutions that may improve disease model accuracy. First, we use data and models of West African migration to show that EVD does not homogeneously mix, but spreads in a predictable manner. Next, we estimate the initial growth rate of EVD within country administrative divisions and find that it significantly decreases with population density. Finally, we test whether EVD strains have uniform transmissibility through a novel statistical test, and find that certain strains appear more often than expected by chance. PMID:27721505
Cost and Performance Assumptions for Modeling Electricity Generation Technologies
Energy Technology Data Exchange (ETDEWEB)
Tidball, Rick [ICF International, Fairfax, VA (United States); Bluestein, Joel [ICF International, Fairfax, VA (United States); Rodriguez, Nick [ICF International, Fairfax, VA (United States); Knoke, Stu [ICF International, Fairfax, VA (United States)
2010-11-01
The goal of this project was to compare and contrast utility scale power plant characteristics used in data sets that support energy market models. Characteristics include both technology cost and technology performance projections to the year 2050. Cost parameters include installed capital costs and operation and maintenance (O&M) costs. Performance parameters include plant size, heat rate, capacity factor or availability factor, and plant lifetime. Conventional, renewable, and emerging electricity generating technologies were considered. Six data sets, each associated with a different model, were selected. Two of the data sets represent modeled results, not direct model inputs. These two data sets include cost and performance improvements that result from increased deployment as well as resulting capacity factors estimated from particular model runs; other data sets represent model input data. For the technologies contained in each data set, the levelized cost of energy (LCOE) was also evaluated, according to published cost, performance, and fuel assumptions.
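The levelized cost of energy mentioned above is conventionally computed from exactly these parameters; a hedged sketch of the standard annualized formula follows (capital recovery factor approach; all numerical inputs are placeholders, not values from the report):

```python
# Minimal sketch: levelized cost of energy (LCOE) from capital cost, fixed and
# variable O&M, heat rate, fuel price, capacity factor and lifetime.  All
# numerical inputs are illustrative placeholders, not data from the report.
def lcoe(cap_cost, fixed_om, var_om, heat_rate, fuel_price,
         capacity_factor, lifetime_yr, discount_rate):
    """Return LCOE in $/MWh.

    cap_cost, fixed_om in $/kW and $/kW-yr; var_om in $/MWh;
    heat_rate in MMBtu/MWh; fuel_price in $/MMBtu.
    """
    crf = discount_rate * (1 + discount_rate) ** lifetime_yr / \
          ((1 + discount_rate) ** lifetime_yr - 1)      # capital recovery factor
    mwh_per_kw_yr = 8760 * capacity_factor / 1000       # annual MWh per kW installed
    return ((cap_cost * crf + fixed_om) / mwh_per_kw_yr
            + var_om + heat_rate * fuel_price)

# Example: a hypothetical gas combined-cycle plant (made-up inputs).
print(round(lcoe(cap_cost=1000, fixed_om=15, var_om=3.5, heat_rate=7.0,
                 fuel_price=4.5, capacity_factor=0.6, lifetime_yr=30,
                 discount_rate=0.07), 1), "$/MWh")
```

Annualizing capital with a capital recovery factor is one common convention; published LCOE figures can differ simply because studies choose different discounting or financing treatments.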
PKreport: report generation for checking population pharmacokinetic model assumptions
Directory of Open Access Journals (Sweden)
Li Jun
2011-05-01
Background Graphics play an important and unique role in population pharmacokinetic (PopPK) model building by exploring hidden structure among data before modeling, evaluating model fit, and validating results after modeling. Results The work described in this paper is about a new R package called PKreport, which is able to generate a collection of plots and statistics for testing model assumptions, visualizing data and diagnosing models. The metric system is utilized as the currency for communicating between data sets and the package to generate special-purpose plots. It provides ways to match output from diverse software such as NONMEM, Monolix, the R nlme package, etc. The package is implemented with an S4 class hierarchy, and offers an efficient way to access the output from NONMEM 7. The final reports take advantage of the web browser as user interface to manage and visualize plots. Conclusions PKreport provides (1) a flexible and efficient R class to store and retrieve NONMEM 7 output, (2) automated plots for users to visualize data and models, (3) automatically generated R scripts that are used to create the plots, (4) an archive-oriented management tool for users to store, retrieve and modify figures, and (5) high-quality graphs based on the R packages lattice and ggplot2. The general architecture, running environment and statistical methods can be readily extended with the R class hierarchy. PKreport is free to download at http://cran.r-project.org/web/packages/PKreport/index.html.
Validating modelling assumptions of alpha particles in electrostatic turbulence
Wilkie, George; Highcock, Edmund; Dorland, William
2014-01-01
To rigorously model fast ions in fusion plasmas, a non-Maxwellian equilibrium distribution must be used. In this work, the response of high-energy alpha particles to electrostatic turbulence has been analyzed for several different tokamak parameters. Our results are consistent with known scalings and experimental evidence that alpha particles are generally well confined, on the order of several seconds. It is also confirmed that the effect of alphas on the turbulence is negligible at realistically low concentrations, consistent with linear theory. It is demonstrated that the usual practice of using a high-temperature Maxwellian gives incorrect estimates for the radial alpha particle flux, and a method of correcting it is provided. Furthermore, we see that the timescales associated with collisions and transport compete at moderate energies, calling into question the assumption that alpha particles remain confined to a flux surface that is used in the derivation of the slowing-down distribution.
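For reference, the slowing-down distribution mentioned at the end of the abstract is usually quoted in the standard isotropic textbook form below (not reproduced from the paper itself):

```latex
% Isotropic alpha-particle slowing-down distribution (standard textbook form):
% S = fast-ion source rate, \tau_s = Spitzer slowing-down time,
% v_c = critical velocity, v_\alpha = birth velocity.
f_s(v) \;=\; \frac{S\,\tau_s}{4\pi\left(v^{3} + v_c^{3}\right)}, \qquad v \le v_\alpha .
```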
On the relevance of assumptions associated with classical factor analytic approaches.
Kasper, Daniel; Unlü, Ali
2013-01-01
A personal trait, for example a person's cognitive ability, represents a theoretical concept postulated to explain behavior. Interesting constructs are latent, that is, they cannot be observed. Latent variable modeling constitutes a methodology to deal with hypothetical constructs. Constructs are modeled as random variables and become components of a statistical model. As random variables, they possess a probability distribution in the population of reference. In applications, this distribution is typically assumed to be the normal distribution. The normality assumption may be reasonable in many cases, but there are situations where it cannot be justified. For example, this is true for criterion-referenced tests or for background characteristics of students in large-scale assessment studies. Nevertheless, the normal procedures in combination with the classical factor analytic methods are frequently pursued, even though the effects of violating this "implicit" assumption are not clear in general. In a simulation study, we investigate whether classical factor analytic approaches can be instrumental in estimating the factorial structure and properties of the population distribution of a latent personal trait from educational test data, when violations of classical assumptions such as those mentioned above are present. The results indicate that a latent non-normal distribution clearly affects the estimation of the distribution of the factor scores and properties thereof. Thus, when the population distribution of a personal trait is assumed to be non-symmetric, we recommend avoiding those factor analytic approaches for estimation of a person's factor score, even though the number of extracted factors and the estimated loading matrix may not be strongly affected. An application to the Progress in International Reading Literacy Study (PIRLS) is given. Comments on possible implications for the Programme for International Student Assessment (PISA) complete the presentation.
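In the spirit of that simulation study, a minimal toy version (our own assumptions: item count, loadings and a gamma-distributed latent trait) generates a skewed trait, fits a one-factor model and inspects the estimated factor scores:

```python
# Minimal sketch: a skewed (non-normal) latent trait drives 6 observed items;
# a one-factor model is fit and the estimated factor scores are compared with
# the true trait.  Loadings, item count and skewness are illustrative.
import numpy as np
from scipy import stats
from sklearn.decomposition import FactorAnalysis

rng = np.random.default_rng(5)
n = 5000
trait = rng.gamma(shape=2.0, scale=1.0, size=n)          # skewed latent trait
trait = (trait - trait.mean()) / trait.std()
loadings = np.array([0.8, 0.7, 0.6, 0.7, 0.5, 0.6])
items = trait[:, None] * loadings + rng.normal(0, 0.5, size=(n, 6))

fa = FactorAnalysis(n_components=1).fit(items)
scores = fa.transform(items).ravel()

print("skewness of true trait:   ", round(stats.skew(trait), 2))
print("skewness of factor scores:", round(stats.skew(scores), 2))
print("correlation with true trait:", round(np.corrcoef(scores, trait)[0, 1], 2))
```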
Assumption- versus data-based approaches to summarizing species' ranges.
Peterson, A Townsend; Navarro-Sigüenza, Adolfo G; Gordillo, Alejandro
2016-08-04
For conservation decision making, species' geographic distributions are mapped using various approaches. Some such efforts have downscaled versions of coarse-resolution extent-of-occurrence maps to fine resolutions for conservation planning. We examined the quality of the extent-of-occurrence maps as range summaries and the utility of refining those maps into fine-resolution distributional hypotheses. Extent-of-occurrence maps tend to be overly simple, omit many known and well-documented populations, and likely frequently include many areas not holding populations. Refinement steps involve typological assumptions about habitat preferences and elevational ranges of species, which can introduce substantial error in estimates of species' true areas of distribution. However, no model-evaluation steps are taken to assess the predictive ability of these models, so model inaccuracies are not noticed. Whereas range summaries derived by these methods may be useful in coarse-grained, global-extent studies, their continued use in on-the-ground conservation applications at fine spatial resolutions is not advisable in light of reliance on assumptions, lack of real spatial resolution, and lack of testing. In contrast, data-driven techniques that integrate primary data on biodiversity occurrence with remotely sensed data that summarize environmental dimensions (i.e., ecological niche modeling or species distribution modeling) offer data-driven solutions based on a minimum of assumptions that can be evaluated and validated quantitatively to offer a well-founded, widely accepted method for summarizing species' distributional patterns for conservation applications. © 2016 Society for Conservation Biology.
Testing the habituation assumption underlying models of parasitoid foraging behavior
Abram, Paul K.; Cusumano, Antonino; Abram, Katrina; Colazza, Stefano; Peri, Ezio
2017-01-01
Background Habituation, a form of non-associative learning, has several well-defined characteristics that apply to a wide range of physiological and behavioral responses in many organisms. In classic patch time allocation models, habituation is considered to be a major mechanistic component of parasitoid behavioral strategies. However, parasitoid behavioral responses to host cues have not previously been tested for the known, specific characteristics of habituation. Methods In the laboratory, we tested whether the foraging behavior of the egg parasitoid Trissolcus basalis shows specific characteristics of habituation in response to consecutive encounters with patches of host (Nezara viridula) chemical contact cues (footprints), in particular: (i) a training interval-dependent decline in response intensity, and (ii) a training interval-dependent recovery of the response. Results As would be expected of a habituated response, wasps trained at higher frequencies decreased their behavioral response to host footprints more quickly and to a greater degree than those trained at low frequencies, and subsequently showed a more rapid, although partial, recovery of their behavioral response to host footprints. This putative habituation learning could not be blocked by cold anesthesia, ingestion of an ATPase inhibitor, or ingestion of a protein synthesis inhibitor. Discussion Our study provides support for the assumption that diminishing responses of parasitoids to chemical indicators of host presence constitutes habituation as opposed to sensory fatigue, and provides a preliminary basis for exploring the underlying mechanisms. PMID:28321365
On the Markovian assumption in the excursion set approach: The approximation of Markov Velocities
Musso, Marcello
2014-01-01
The excursion set approach uses the statistics of the density field, smoothed on a wide range of scales, to gain insight into a number of interesting processes in nonlinear structure formation, such as cluster assembly, merging and clustering. The approach treats the curve defined by the overdensity fluctuation field when changing the smoothing scale as a random walk. Most implementations of the approach then assume that, at least to a first approximation, the walks have uncorrelated steps, so that the walk heights are a Markov process. This assumption is known to be inaccurate: smoothing filters that are most easily related to the physics of structure formation generically yield walks whose steps are correlated with one another. We develop models in which it is the steps, rather than the walk heights, that are a Markov process. In such models, which we call Markov Velocity processes, each step correlates only with the previous one. We show that TopHat smoothing of a power law power spectrum with index n = -2...
Camera traps and mark-resight models: The value of ancillary data for evaluating assumptions
Parsons, Arielle W.; Simons, Theodore R.; Pollock, Kenneth H.; Stoskopf, Michael K.; Stocking, Jessica J.; O'Connell, Allan F.
2015-01-01
Unbiased estimators of abundance and density are fundamental to the study of animal ecology and critical for making sound management decisions. Capture–recapture models are generally considered the most robust approach for estimating these parameters but rely on a number of assumptions that are often violated but rarely validated. Mark-resight models, a form of capture–recapture, are well suited for use with noninvasive sampling methods and allow for a number of assumptions to be relaxed. We used ancillary data from continuous video and radio telemetry to evaluate the assumptions of mark-resight models for abundance estimation on a barrier island raccoon (Procyon lotor) population using camera traps. Our island study site was geographically closed, allowing us to estimate real survival and in situ recruitment in addition to population size. We found several sources of bias due to heterogeneity of capture probabilities in our study, including camera placement, animal movement, island physiography, and animal behavior. Almost all sources of heterogeneity could be accounted for using the sophisticated mark-resight models developed by McClintock et al. (2009b) and this model generated estimates similar to a spatially explicit mark-resight model previously developed for this population during our study. Spatially explicit capture–recapture models have become an important tool in ecology and confer a number of advantages; however, non-spatial models that account for inherent individual heterogeneity may perform nearly as well, especially where immigration and emigration are limited. Non-spatial models are computationally less demanding, do not make implicit assumptions related to the isotropy of home ranges, and can provide insights with respect to the biological traits of the local population.
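For orientation, the simplest ratio form of a mark-resight abundance estimator, together with a toy simulation of the heterogeneity bias discussed above, can be sketched as follows (this is not the McClintock et al. model used in the study; all numbers are illustrative):

```python
# Minimal sketch: the simplest ratio form of a mark-resight abundance
# estimator, plus a toy simulation showing how heterogeneous sighting
# probabilities bias it when sightable animals are also the ones marked.
import numpy as np

def ratio_estimator(n_marked, marked_sightings, unmarked_sightings):
    # N_hat = M * (m + u) / m  -- assumes equal sightability of all animals.
    return n_marked * (marked_sightings + unmarked_sightings) / marked_sightings

rng = np.random.default_rng(6)
N, M, surveys = 200, 40, 10                  # true abundance, marked animals, occasions
p = rng.beta(2, 5, size=N)                   # individual sighting probabilities (heterogeneous)
# Easily detected animals are also more likely to be captured and marked,
# violating the equal-sightability assumption.
marked_idx = rng.choice(N, size=M, replace=False, p=p / p.sum())

m = u = 0
for _ in range(surveys):
    seen = rng.random(N) < p
    m += seen[marked_idx].sum()
    u += seen.sum() - seen[marked_idx].sum()

print("true N =", N, " estimate =", round(ratio_estimator(M, m, u), 1))
```

The estimate typically falls well below the true abundance here, illustrating why models that absorb individual heterogeneity are needed.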
Gessler, Michael; Moreno Herrera, Lázaro
2015-01-01
The design of vocational didactics has to meet special requirements. Six core assumptions are identified: outcome orientation, cultural-historical embedding, horizontal structure, vertical structure, temporal structure, and the changing nature of work. Different approaches and discussions from school-based systems (Spain and Sweden) and dual…
Flores-Alsina, Xavier; Gernaey, Krist V; Jeppsson, Ulf
2012-01-01
This paper examines the effect of different model assumptions when describing biological nutrient removal (BNR) by the activated sludge models (ASM) 1, 2d & 3. The performance of a nitrogen removal (WWTP1) and a combined nitrogen and phosphorus removal (WWTP2) benchmark wastewater treatment plant was compared for a series of model assumptions. Three different model approaches describing BNR are considered. In the reference case, the original model implementations are used to simulate WWTP1 (ASM1 & 3) and WWTP2 (ASM2d). The second set of models includes a reactive settler, which extends the description of the non-reactive TSS sedimentation and transport in the reference case with the full set of ASM processes. Finally, the third set of models is based on including electron acceptor dependency of biomass decay rates for ASM1 (WWTP1) and ASM2d (WWTP2). The results show that incorporation of a reactive settler: (1) increases the hydrolysis of particulates; (2) increases the overall plant's denitrification efficiency by reducing the S(NOx) concentration at the bottom of the clarifier; (3) increases the oxidation of COD compounds; (4) increases X(OHO) and X(ANO) decay; and, finally, (5) increases the growth of X(PAO) and formation of X(PHA,Stor) for ASM2d, which has a major impact on the whole P removal system. Introduction of electron acceptor dependent decay leads to a substantial increase of the concentration of X(ANO), X(OHO) and X(PAO) in the bottom of the clarifier. The paper ends with a critical discussion of the influence of the different model assumptions, and emphasizes the need for a model user to understand the significant differences in simulation results that are obtained when applying different combinations of 'standard' models.
Rajabzadeh Oghaz, Hamidreza; Damiano, Robert; Meng, Hui
2015-11-01
Intracranial aneurysms (IAs) are pathological outpouchings of cerebral vessels, the progression of which is mediated by complex interactions between the blood flow and vasculature. Image-based computational fluid dynamics (CFD) has been used for decades to investigate IA hemodynamics. However, the commonly adopted simplifying assumptions in CFD (e.g. rigid wall) compromise the simulation accuracy and mask the complex physics involved in IA progression and eventual rupture. Several groups have considered the wall compliance by using fluid-structure interaction (FSI) modeling. However, FSI simulation is highly sensitive to numerical assumptions (e.g. linear-elastic wall material, Newtonian fluid, initial vessel configuration, and constant pressure outlet), the effects of which are poorly understood. In this study, the sensitivity of FSI simulations in patient-specific IAs is investigated using a multi-stage approach with a varying level of complexity. We start with simulations incorporating several common simplifications: rigid wall, Newtonian fluid, and constant pressure at the outlets, and then we stepwise remove these simplifications until the most comprehensive FSI simulations. Hemodynamic parameters such as wall shear stress and oscillatory shear index are assessed and compared at each stage to better understand the sensitivity of FSI simulations of IAs to model assumptions. Supported by the National Institutes of Health (1R01 NS 091075-01).
Modelling the dynamics of reasoning processes: reasoning by assumption
Jonker, C.M.; Treur, J.
2008-01-01
To model the dynamics of cognitive processes, often the Dynamical Systems Theory (DST) is advocated. However, for higher cognitive processes such as reasoning and certain forms of natural language processing the techniques adopted within DST are not very adequate. This paper shows how an analysis of
Gaussian versus top-hat profile assumptions in integral plume models
Davidson, G. A.
Numerous integral models describing the behaviour of buoyant plumes released into stratified crossflows have been presented in the literature. One of the differences between these models is the form assumed for the self-similar profile: some models assume a top-hat form while others assume a Gaussian. The differences between these two approaches are evaluated by (a) comparing the governing equations on which Gaussian and top-hat models are based; (b) comparing some typical plume predictions generated by each type of model over a range of model parameters. It is shown that, while the profile assumption does lead to differences in the equations which govern plume variables, the effects of these differences on actual plume predictions are small over the range of parameters of practical interest. Since the predictions of Gaussian and top-hat models are essentially equivalent, it can thus be concluded that the additional physical information incorporated into a Gaussian formulation plays only a minor role in mean plume behaviour, and that the top-hat approach, which requires the numerical solution of a simpler set of equations, is adequate for most situations where an integral approach would be used.
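The flux-matching correspondence between the two profile forms can be checked numerically; the sketch below uses the generic textbook equivalence (top-hat velocity u_c/2 and radius sqrt(2)*b for a Gaussian of centreline value u_c and width b) and is not code from the paper:

```python
# Minimal sketch: a Gaussian plume profile u(r) = u_c * exp(-r^2 / b^2) carries
# the same volume and momentum fluxes as a top-hat profile with velocity u_c/2
# and radius sqrt(2)*b.  Purely illustrative check of the profile equivalence.
import numpy as np

u_c, b = 3.0, 10.0
r = np.linspace(0.0, 20.0 * b, 200001)
gauss = u_c * np.exp(-(r / b) ** 2)

vol_gauss = np.trapz(gauss * 2.0 * np.pi * r, r)          # pi * u_c * b^2
mom_gauss = np.trapz(gauss**2 * 2.0 * np.pi * r, r)       # pi * u_c^2 * b^2 / 2

u_top, r_top = u_c / 2.0, np.sqrt(2.0) * b
vol_top = u_top * np.pi * r_top**2
mom_top = u_top**2 * np.pi * r_top**2

print(vol_gauss, vol_top)    # equal volume fluxes
print(mom_gauss, mom_top)    # equal momentum fluxes
```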
Pesut, Barbara; Sawatzky, Rick
2006-06-01
Increasing attention is being paid to spirituality in nursing practice. Much of the literature on spiritual care uses the nursing process to describe this aspect of care. However, the use of the nursing process in the area of spirituality may be problematic, depending upon the understandings of the nature and intent of this process. Is it primarily a descriptive process meant to make visible the nursing actions to provide spiritual support, or is it a prescriptive process meant to guide nursing actions for intervening in the spirituality of patients? A prescriptive nursing process approach implies influencing, and in some cases reframing, the spirituality of patients and thereby extends beyond general notions of spiritual support. In this paper we discuss four problematic assumptions that form the basis for a prescriptive approach to spiritual care. We conclude that this approach extends the nursing role beyond appropriate professional boundaries, making it ethically problematic.
Recursive Subspace Identification of AUV Dynamic Model under General Noise Assumption
Directory of Open Access Journals (Sweden)
Zheping Yan
2014-01-01
A recursive subspace identification algorithm for autonomous underwater vehicles (AUVs) is proposed in this paper. Due to its advantages in handling nonlinearities and couplings, the AUV model investigated here is for the first time constructed as a Hammerstein model with nonlinear feedback in the linear part. To better take the environment and sensor noises into consideration, the identification problem is treated as an errors-in-variables (EIV) one, which means that the identification procedure is carried out under a general noise assumption. In order to make the algorithm recursive, a propagator method (PM) based subspace approach is extended into the EIV framework to form the recursive identification method called the PM-EIV algorithm. With several identification experiments carried out on the AUV simulation platform, the proposed algorithm demonstrates its effectiveness and feasibility.
Modelling sexual transmission of HIV: testing the assumptions, validating the predictions
Baggaley, Rebecca F.; Fraser, Christophe
2010-01-01
Purpose of review To discuss the role of mathematical models of sexual transmission of HIV: the methods used and their impact. Recent findings We use mathematical modelling of “universal test and treat” as a case study to illustrate wider issues relevant to all modelling of sexual HIV transmission. Summary Mathematical models are used extensively in HIV epidemiology to deduce the logical conclusions arising from one or more sets of assumptions. Simple models lead to broad qualitative understanding, while complex models can encode more realistic assumptions and thus be used for predictive or operational purposes. An overreliance on model analysis where assumptions are untested and input parameters cannot be estimated should be avoided. Simple models providing bold assertions have provided compelling arguments in recent public health policy, but may not adequately reflect the uncertainty inherent in the analysis. PMID:20543600
Analysis of Modeling Assumptions used in Production Cost Models for Renewable Integration Studies
Energy Technology Data Exchange (ETDEWEB)
Stoll, Brady [National Renewable Energy Lab. (NREL), Golden, CO (United States); Brinkman, Gregory [National Renewable Energy Lab. (NREL), Golden, CO (United States); Townsend, Aaron [National Renewable Energy Lab. (NREL), Golden, CO (United States); Bloom, Aaron [National Renewable Energy Lab. (NREL), Golden, CO (United States)
2016-01-01
Renewable energy integration studies have been published for many different regions exploring the question of how higher penetration of renewable energy will impact the electric grid. These studies each make assumptions about the systems they are analyzing; however, the effect of many of these assumptions has not yet been examined and published. In this paper we analyze the impact of modeling assumptions in renewable integration studies, including the optimization method used (linear or mixed-integer programming) and the temporal resolution of the dispatch stage (hourly or sub-hourly). We analyze each of these assumptions on a large and a small system and determine the impact of each assumption on key metrics including the total production cost, curtailment of renewables, CO2 emissions, and generator starts and ramps. Additionally, we identified the impact on these metrics if a four-hour-ahead commitment step is included before the dispatch step and the impact of retiring generators to reduce the degree to which the system is overbuilt. We find that the largest effect of these assumptions is at the unit level on starts and ramps, particularly for the temporal resolution, and saw a smaller impact at the aggregate level on system costs and emissions. For each fossil fuel generator type we measured the average capacity started, average run-time per start, and average number of ramps. Linear programming results saw up to a 20% difference in number of starts and average run time of traditional generators, and up to a 4% difference in the number of ramps, when compared to mixed-integer programming. Utilizing hourly dispatch instead of sub-hourly dispatch saw no difference in coal or gas CC units for either start metric, while gas CT units had a 5% increase in the number of starts and 2% increase in the average on-time per start. The number of ramps decreased up to 44%. The smallest effect seen was on the CO2 emissions and total production cost, with a 0.8% and 0
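The LP-versus-MIP distinction can be made concrete with a deliberately tiny commitment problem (entirely made-up numbers, not the systems analyzed in the study): with no-load costs and minimum stable generation, the LP relaxation commits "fractions" of a unit and understates cost:

```python
# Minimal sketch: one-period unit commitment for two generators, solved as a
# mixed-integer problem (by enumerating on/off states) and as its LP relaxation.
# All costs and limits are made-up illustrative numbers, not data from the study.
import numpy as np
from scipy.optimize import linprog

demand = 60.0
pmin = np.array([50.0, 20.0])        # minimum stable generation (MW)
pmax = np.array([100.0, 50.0])       # capacity (MW)
c_var = np.array([20.0, 40.0])       # variable cost ($/MWh)
c_noload = np.array([500.0, 100.0])  # no-load cost when committed ($/h)

# Mixed-integer solution: enumerate the 4 commitment states, dispatch greedily.
best_cost = np.inf
for u in [(0, 0), (0, 1), (1, 0), (1, 1)]:
    u = np.array(u, dtype=float)
    lo, hi = pmin * u, pmax * u
    if not (lo.sum() <= demand <= hi.sum()):
        continue                                  # infeasible commitment
    g, residual = lo.copy(), demand - lo.sum()
    for i in np.argsort(c_var):                   # fill cheapest headroom first
        add = min(hi[i] - g[i], residual)
        g[i] += add
        residual -= add
    best_cost = min(best_cost, c_noload @ u + c_var @ g)

# LP relaxation: commitment variables u1, u2 relaxed to [0, 1].
# Variable order: [u1, u2, g1, g2].
c = np.concatenate([c_noload, c_var])
A_ub = [[-pmax[0], 0, 1, 0],   # g1 <= pmax1 * u1
        [0, -pmax[1], 0, 1],   # g2 <= pmax2 * u2
        [pmin[0], 0, -1, 0],   # g1 >= pmin1 * u1
        [0, pmin[1], 0, -1]]   # g2 >= pmin2 * u2
res = linprog(c, A_ub=A_ub, b_ub=[0, 0, 0, 0],
              A_eq=[[0, 0, 1, 1]], b_eq=[demand],
              bounds=[(0, 1), (0, 1), (0, None), (0, None)])
print(f"MIP cost: {best_cost:.0f} $/h   LP-relaxation cost: {res.fun:.0f} $/h")
```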
Fang, L.; Sun, X. Y.; Liu, Y. W.
2016-12-01
In order to shed light on the subgrid-scale (SGS) modelling methodology, we analyze and define the concepts of assumption and restriction in the modelling procedure, and then show by a generalized derivation that if there are multiple stationary restrictions in a model, the corresponding assumption function must satisfy a criterion of orthogonality. Numerical tests using the one-dimensional nonlinear advection equation are performed to validate this criterion. This study is expected to inspire future research on generally guiding the SGS modelling methodology.
Jang, Hyesuk
2014-01-01
This study aims to evaluate a multidimensional latent trait model to determine how well the model works in various empirical contexts. Contrary to the assumption of these latent trait models that the traits are normally distributed, situations in which the latent trait does not follow a normal distribution may occur (Sass et al, 2008; Woods…
The biosphere at Laxemar. Data, assumptions and models used in the SR-Can assessment
Energy Technology Data Exchange (ETDEWEB)
Karlsson, Sara; Kautsky, Ulrik; Loefgren, Anders; Soederbaeck, Bjoern (eds.)
2006-10-15
This is essentially a compilation of a variety of reports concerning the site investigations, the research activities and information derived from other sources important for the safety assessment. The main objective is to present prerequisites, methods and data used in the biosphere modelling for the safety assessment SR-Can at the Laxemar site. A major part of the report focuses on how site-specific data are used, recalculated or modified in order to be applicable in the safety assessment context, and on the methods and sub-models that are the basis for the biosphere modelling. Furthermore, the assumptions made as to the future states of surface ecosystems are mainly presented in this report. A similar report is provided for the Forsmark area. This report summarises the method adopted for safety assessment following a radionuclide release into the biosphere. The approach utilises the information about the site as far as possible and presents a way of calculating risk to humans. A central tool in the work is the description of the topography, where there is good understanding of the present conditions and the development over time is fairly predictable. The topography affects surface hydrology, sedimentation, size of drainage areas and the characteristics of ecosystems. Other parameters are human nutritional intake, which is assumed to be constant over time, and primary production (photosynthesis), which also is a fairly constant parameter over time. The Landscape Dose Factor approach (LDF) gives an integrated measure for the site and also resolves the issues relating to the size of the group with highest exposure. If this approach is to be widely accepted as a method, some improvements and refinements are still necessary: collecting missing site data, reanalysing site data, reviewing radionuclide-specific data, reformulating ecosystem models and evaluating the results with further sensitivity analysis.
The biosphere at Forsmark. Data, assumptions and models used in the SR-Can assessment
Energy Technology Data Exchange (ETDEWEB)
Karlsson, Sara; Kautsky, Ulrik; Loefgren, Anders; Soederbaeck, Bjoern (eds.)
2006-10-15
This report summarises the method adopted for safety assessment following a radionuclide release into the biosphere. The approach utilises the information about the site as far as possible and presents a way of calculating risk to humans. A central parameter is the topography, for which there is a good understanding of the present conditions and whose development over time is fairly predictable. The topography affects surface hydrology, sedimentation, size of drainage areas and the characteristics of ecosystems. Other parameters are human nutritional intake, which is assumed to be constant over time, and primary production (photosynthesis), which also is a fairly constant parameter over time. The Landscape Dose Factor approach (LDF) gives an integrated measure for the site and also resolves the issues relating to the size of the group with highest exposure. If this approach is to be widely accepted as a method, some improvements and refinements are still necessary, e.g. collecting missing site data, reanalysing site data, reviewing radionuclide-specific data, reformulating ecosystem models and evaluating the results with further sensitivity analysis. The report presents descriptions and estimates not presented elsewhere, as well as summaries of important steps in the biosphere modelling that are presented in more detail in separate reports. The intention is to give the reader a coherent description of the steps taken to calculate doses to biota and humans, including a description of the data used, the rationale for a number of assumptions made during parameterisation, and of how the landscape context is applied in the modelling, and also to present the models used and the results obtained.
Taliotis, Constantinos; Taibi, Emanuele; Howells, Mark; Rogner, Holger; Bazilian, Morgan; Welsch, Manuel
2017-10-01
The generation mix of Cyprus has been dominated by oil products for decades. In order to conform with European Union and international legislation, a transformation of the supply system is called for. Energy system models can facilitate energy planning into the future, but a large volume of data is required to populate such models. The present data article provides information on key modelling assumptions and input data adopted with the aim of representing the electricity supply system of Cyprus in a separate research article. Data regarding renewable energy technoeconomic characteristics and investment cost projections, fossil fuel price projections, storage technology characteristics and system operation assumptions are described in this article.
Investigating assumptions of crown archetypes for modelling LiDAR returns
Calders, K.; Lewis, P.; Disney, M.; Verbesselt, J.; Herold, M.
2013-01-01
LiDAR has the potential to derive canopy structural information such as tree height and leaf area index (LAI), via models of the LiDAR signal. Such models often make assumptions regarding crown shape to simplify parameter retrieval and crown archetypes are typically assumed to contain a turbid
A Taxonomy of Latent Structure Assumptions for Probability Matrix Decomposition Models.
Meulders, Michel; De Boeck, Paul; Van Mechelen, Iven
2003-01-01
Proposed a taxonomy of latent structure assumptions for probability matrix decomposition (PMD) that includes the original PMD model and a three-way extension of the multiple classification latent class model. Simulation study results show the usefulness of the taxonomy. (SLD)
IRT models with relaxed assumptions in eRm: A manual-like instruction
Directory of Open Access Journals (Sweden)
Hatzinger, Reinhold
2009-03-01
Linear logistic models with relaxed assumptions (LLRA) as introduced by Fischer (1974) are a flexible tool for the measurement of change for dichotomous or polytomous responses. As opposed to the Rasch model, assumptions on dimensionality of items, their mutual dependencies and the distribution of the latent trait in the population of subjects are relaxed. Conditional maximum likelihood estimation allows for inference about treatment, covariate or trend effect parameters without taking the subjects' latent trait values into account. In this paper we will show how LLRAs based on the LLTM, LRSM and LPCM can be used to answer various questions about the measurement of change and how they can be fitted in R using the eRm package. A number of small didactic examples are provided that can easily be used as templates for real data sets. All datafiles used in this paper are available from http://eRm.R-Forge.R-project.org/
Tran, Van; McCall, Matthew N.; McMurray, Helene R.; Almudevar, Anthony
2013-01-01
Boolean networks (BoN) are relatively simple and interpretable models of gene regulatory networks. Specifying these models with fewer parameters while retaining their ability to describe complex regulatory relationships is an ongoing methodological challenge. Additionally, extending these models to incorporate variable gene decay rates, asynchronous gene response, and synergistic regulation while maintaining their Markovian nature increases the applicability of these models to genetic regulatory networks (GRN). We explore a previously-proposed class of BoNs characterized by linear threshold functions, which we refer to as threshold Boolean networks (TBN). Compared to traditional BoNs with unconstrained transition functions, these models require far fewer parameters and offer a more direct interpretation. However, the functional form of a TBN does result in a reduction in the regulatory relationships which can be modeled. We show that TBNs can be readily extended to permit self-degradation, with explicitly modeled degradation rates. We note that the introduction of variable degradation compromises the Markovian property fundamental to BoN models but show that a simple state augmentation procedure restores their Markovian nature. Next, we study the effect of assumptions regarding self-degradation on the set of possible steady states. Our findings are captured in two theorems relating self-degradation and regulatory feedback to the steady state behavior of a TBN. Finally, we explore assumptions of synchronous gene response and asynergistic regulation and show that TBNs can be easily extended to relax these assumptions. Applying our methods to the budding yeast cell-cycle network revealed that although the network is complex, its steady state is simplified by the presence of self-degradation and lack of purely positive regulatory cycles. PMID:24376454
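A minimal sketch of a synchronous threshold Boolean network update with self-degradation follows (a three-gene toy network of our own, not the yeast cell-cycle network analyzed in the paper):

```python
# Minimal sketch: synchronous update of a threshold Boolean network (TBN).
# Each gene switches on iff its weighted regulatory input exceeds a threshold;
# a negative self-weight implements self-degradation.  The 3-gene network and
# its weights are toy values, not the yeast cell-cycle network in the paper.
import numpy as np

W = np.array([[-1.0,  0.0,  1.0],    # gene 0: self-degradation, activated by gene 2
              [ 1.0, -1.0,  0.0],    # gene 1: activated by gene 0, self-degradation
              [ 0.0,  1.0,  0.0]])   # gene 2: activated by gene 1
theta = np.zeros(3)                  # activation thresholds

def step(state):
    drive = W @ state
    nxt = state.copy()
    nxt[drive > theta] = 1           # net activating input: switch/stay on
    nxt[drive < theta] = 0           # net repression (incl. self-degradation): off
    return nxt                       # drive == theta leaves the gene unchanged

state = np.array([1, 0, 0])
for t in range(8):
    print(t, state)
    state = step(state)
```

The tie rule (leave the gene unchanged when the drive exactly equals the threshold) is one common convention; replacing it with forced decay is a simple way to model the self-degradation assumption studied in the paper.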
Condition for Energy Efficient Watermarking with Random Vector Model without WSS Assumption
Yan, Bin; Guo, Yinjing
2009-01-01
Energy efficient watermarking preserves the watermark energy after a linear attack as much as possible. We consider in this letter non-stationary signal models and derive conditions for energy efficient watermarking under a random vector model without the WSS assumption. We find that the covariance matrix of the energy efficient watermark should be proportional to the host covariance matrix to best resist the optimal linear removal attacks. For WSS processes our result reduces to the well-known power spectrum condition. An intuitive geometric interpretation of the results is also discussed, which in turn provides a simpler proof of the main results.
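In symbols (our notation, not necessarily the authors'), the stated condition can be summarized as follows, with C_x and C_w the host and watermark covariance matrices:

```latex
% Energy-efficient watermark condition under the random vector model (sketch)
\[
  C_w = \kappa\, C_x, \qquad \kappa > 0,
\]
% i.e., the watermark covariance is proportional to the host covariance.
% For a WSS process this reduces to the familiar power spectrum condition
\[
  S_w(\omega) \propto S_x(\omega).
\]
```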
Differentiating Different Modeling Assumptions in Simulations of MagLIF loads on the Z Generator
Jennings, C. A.; Gomez, M. R.; Harding, E. C.; Knapp, P. F.; Ampleford, D. J.; Hansen, S. B.; Weis, M. R.; Glinsky, M. E.; Peterson, K.; Chittenden, J. P.
2016-10-01
Metal liners imploded by a fast-rising current in Magnetized Liner Inertial Fusion (MagLIF) experiments have had some success. While experiments are increasingly well diagnosed, many of the measurements (particularly during stagnation) are time integrated, limited in spatial resolution or require additional assumptions to interpret in the context of a structured, rapidly evolving system. As such, in validating MHD calculations, there is the potential for the same observables in the experimental data to be reproduced under different modeling assumptions. Using synthetic diagnostics of the results of different pre-heat, implosion and stagnation simulations run with the Gorgon MHD code, we discuss how the interpretation of typical Z diagnostics relates to more fundamental simulation parameters. We then explore the extent to which different assumptions on instability development, current delivery, high-Z mix into the fuel and initial laser deposition can be differentiated in our existing measurements. Sandia National Laboratories is a multi-program laboratory managed and operated by Sandia Corporation, a wholly owned subsidiary of Lockheed Martin Corporation, for the U.S. DoE's NNSA under contract DE-AC04-94AL85000.
Meso-scale modeling: beyond local equilibrium assumption for multiphase flow
Wang, Wei
2015-01-01
This is a summary of the article with the same title, accepted for publication in Advances in Chemical Engineering, 47: 193-277 (2015). Gas-solid fluidization is a typical nonlinear nonequilibrium system with multiscale structure. In particular, the mesoscale structure in terms of bubbles or clusters, which can be characterized by nonequilibrium features such as a bimodal velocity distribution, energy non-equipartition, and correlated density fluctuations, is the critical factor. The traditional two-fluid model (TFM) and the relevant closures depend on local equilibrium and homogeneous distribution assumptions, and fail to predict the dynamic, nonequilibrium phenomena in circulating fluidized beds even with fine-grid resolution. In contrast, mesoscale modeling, as exemplified by the energy-minimization multiscale (EMMS) model, is consistent with the nonequilibrium features of multiphase flows. Thus, the structure-dependent multi-fluid model conservation equations with the EMMS-based mesoscale modeling greatly i...
A computational model to investigate assumptions in the headturn preference procedure
Directory of Open Access Journals (Sweden)
Christina Bergmann
2013-10-01
Full Text Available In this paper we use a computational model to investigate four assumptions that are tacitly present in interpreting the results of studies on infants' speech processing abilities using the Headturn Preference Procedure (HPP): (1) behavioural differences originate in different processing; (2) processing involves some form of recognition; (3) words are segmented from connected speech; and (4) differences between infants should not affect overall results. In addition, we investigate the impact of two potentially important aspects in the design and execution of the experiments: (a) the specific voices used in the two parts of HPP experiments (familiarisation and test) and (b) the experimenter's criterion for what is a sufficient headturn angle. The model is designed to maximise cognitive plausibility. It takes real speech as input, and it contains a module that converts the output of internal speech processing and recognition into headturns that can yield real-time listening preference measurements. Internal processing is based on distributed episodic representations in combination with a matching procedure based on the assumption that complex episodes can be decomposed as positive weighted sums of simpler constituents. Model simulations show that the first two assumptions hold under two different definitions of recognition. However, explicit segmentation is not necessary to simulate the behaviours observed in infant studies. Differences in attention span between infants can affect the outcomes of an experiment. The same holds for the experimenter's decision criterion. The speakers used in experiments affect outcomes in complex ways that require further investigation. The paper ends with recommendations for future studies using the HPP.
Ignasiak, Dominika; Ferguson, Stephen J; Arjmand, Navid
2016-09-06
A number of musculoskeletal models of the human spine have been used for predictions of lumbar and muscle forces. However, the predictive power of these models might be limited by a commonly made assumption: the thoracic region is represented as a single lumped rigid body. This study hence aims to investigate the impact of such an assumption on the predictions of spinal and muscle forces. A validated thoracolumbar spine model was used with a flexible thorax (T1-T12), a completely rigid one, or a rigid thorax with the thoracic posture updated at each analysis step. Simulations of isometric forward flexion up to 80°, with and without a 20 kg hand load, were performed, based on previously measured kinematics. Depending on the simulated task, the rigid model predicted slightly or moderately lower compressive loading than the flexible one. The differences were relatively greater at the upper lumbar levels (average underestimation of 14% at T12-L1 for flexion tasks and of 18% for flexion tasks with hand load) as compared to the lower levels (3% and 8% at L5-S1 for unloaded and loaded tasks, respectively). The rigid model with updated thoracic posture predicted compressive forces similar to those of the rigid model. Predicted muscle forces were, however, very different between the three models. This study indicates that lumbar spine models with a rigid thorax definition can be used for loading investigations at the lowermost spinal levels. For predictions of upper lumbar spine loading, using models with an articulated thorax is advised. Copyright © 2016 Elsevier Ltd. All rights reserved.
Directory of Open Access Journals (Sweden)
Damian GĄSKA
2011-01-01
Full Text Available This work investigates the FEM simulation of the stress and strain state of the selected trolley's load-carrying structure with 450 tonnes hoisting capacity [1]. Computational loads were adopted as in standard PN-EN 13001-2. The model of the trolley was built from several parts cooperating with each other (in contact). The influence of model assumptions (simplifications) in selected construction nodes on the value of maximum stress and strain and its area of occurrence was analyzed. The aim of this study was to determine whether the simplifications, which reduce the time required to prepare the model and perform calculations (e.g., rigid connections instead of contact), substantially change the characteristics of the model.
Institute of Scientific and Technical Information of China (English)
LIANG Jianfeng; LI Yanming
2012-01-01
Lightning electromagnetics can affect the reliability of power or communication systems. Therefore, evaluation of the electromagnetic fields generated by a lightning return stroke is indispensable. Arnold Sommerfeld proposed a model to calculate the electromagnetic field, but it involves the time-consuming Sommerfeld integral. The perfectly conducting ground assumption, in contrast, allows fast calculation. This paper therefore reviews the perfect-ground equations for evaluating lightning electromagnetic fields, presents three engineering lightning return stroke models, and calculates the horizontal electric field produced by the three models. According to the results, the amplitude of the lightning return stroke has a strong impact on the horizontal electric field, and the steepness of the lightning return stroke also influences the horizontal electric field. Moreover, the perfect-ground method is faster than the Sommerfeld integral method.
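For readers unfamiliar with engineering return-stroke models, the sketch below shows one common construction, assuming a Heidler channel-base current propagated up the channel with the transmission-line (TL) model; the waveform parameters and return-stroke speed are illustrative values, not those used in the paper.

```python
import numpy as np

def heidler(t, I0=30e3, tau1=1.8e-6, tau2=95e-6, n=2):
    """Heidler channel-base current, a common engineering waveform."""
    eta = np.exp(-(tau1 / tau2) * (n * tau2 / tau1) ** (1.0 / n))
    x = (t / tau1) ** n
    return (I0 / eta) * x / (1.0 + x) * np.exp(-t / tau2)

def tl_current(z, t, v=1.5e8):
    """Transmission-line (TL) return-stroke model: the channel-base current
    travels up the channel undistorted at speed v."""
    tau = t - z / v
    return np.where(tau > 0, heidler(np.maximum(tau, 0.0)), 0.0)

t = np.linspace(0, 50e-6, 500)
print(tl_current(500.0, t).max())   # peak current at 500 m channel height
```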
Continuous-discrete model of parasite-host system dynamics: Trigger regime at simplest assumptions
Directory of Open Access Journals (Sweden)
L. V. Nedorezov
2014-09-01
Full Text Available In this paper a continuous-discrete model of parasite-host system dynamics is analyzed. Within the framework of the model it is assumed that the appearance of individuals of new generations of both populations occurs at fixed time moments tk=hk, t0=0, k=1,2,..., h=const>0; this means that several processes are compressed together: the production of eggs by hosts, the attack of eggs by parasites (with the respective transformation of hosts' eggs into parasites' eggs), the stay of hosts and parasites in the "egg" phase, and the appearance of new individuals. It is also assumed that the death process of individuals has a continuous nature, but the development of both populations proceeds independently between the fixed time moments. Dynamic regimes of the model are analyzed. In particular, it is shown that under the simplest assumptions about the birth process in the host population and the number of attacked hosts, a regime with two non-trivial stable attractors in the phase space of the system can be realized.
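A rough numerical sketch of such a continuous-discrete (pulsed) scheme is given below; the exponential mortality between pulses and the Nicholson-Bailey style attack term are illustrative choices of ours, not the functional forms analyzed in the paper.

```python
import numpy as np

def simulate(x0, y0, h=1.0, steps=200,
             mx=0.3, my=0.5, r=3.0, a=0.8):
    """Continuous-discrete host (x) - parasite (y) dynamics.

    Between pulses both populations decay continuously (rates mx, my);
    at t_k = h*k hosts reproduce and a fraction of their eggs is attacked,
    producing the next parasite generation.  The attack term exp(-a*y)
    is only an illustrative (Nicholson-Bailey style) choice.
    """
    x, y = x0, y0
    traj = [(x, y)]
    for _ in range(steps):
        # continuous mortality over one inter-pulse interval
        x *= np.exp(-mx * h)
        y *= np.exp(-my * h)
        # discrete reproduction / parasitism at the pulse moment
        escaped = np.exp(-a * y)
        x, y = r * x * escaped, r * x * (1.0 - escaped)
        traj.append((x, y))
    return np.array(traj)

print(simulate(1.0, 0.5)[-5:])   # tail of the trajectory
```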
Karakaya, Jale; Karabulut, Erdem; Yucel, Recai M
Modern statistical methods using incomplete data have been increasingly applied in a wide variety of substantive problems. Similarly, receiver operating characteristic (ROC) analysis, a method used in evaluating diagnostic tests or biomarkers in medical research, has become increasingly popular in both its development and application. While missing-data methods have been applied in ROC analysis, the impact of model mis-specification and/or assumptions (e.g. missing at random) underlying the missing data has not been thoroughly studied. In this work, we study the performance of multiple imputation (MI) inference in ROC analysis. In particular, we investigate parametric and non-parametric techniques for MI inference under common missingness mechanisms. Depending on the coherency of the imputation model with the underlying data generation mechanism, our results show that MI generally leads to well-calibrated inferences under ignorable missingness mechanisms.
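A minimal sketch of the general workflow, not the authors' implementation, is shown below: impute the partially missing biomarker several times, compute the ROC AUC in each completed data set, and pool the estimates. The simulated data and the use of scikit-learn's IterativeImputer are assumptions made purely for illustration.

```python
import numpy as np
from sklearn.experimental import enable_iterative_imputer  # noqa: F401
from sklearn.impute import IterativeImputer
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)

# Simulated biomarker with values missing at random (illustrative data).
n = 500
y = rng.integers(0, 2, n)                    # disease status
x = y + rng.normal(0, 1, n)                  # biomarker
z = x + rng.normal(0, 1, n)                  # auxiliary covariate
x_obs = x.copy()
x_obs[rng.random(n) < 0.3] = np.nan          # 30% of biomarker values missing

M = 20                                       # number of imputations
aucs = []
for m in range(M):
    imp = IterativeImputer(sample_posterior=True, random_state=m)
    completed = imp.fit_transform(np.column_stack([x_obs, z, y]))
    aucs.append(roc_auc_score(y, completed[:, 0]))

# Rubin-style pooling of the point estimates (variance pooling omitted here).
print("pooled AUC estimate:", np.mean(aucs))
```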
Loeys, Tom; Talloen, Wouter; Goubert, Liesbet; Moerkerke, Beatrijs; Vansteelandt, Stijn
2016-11-01
It is well known from the mediation analysis literature that the identification of direct and indirect effects relies on strong assumptions of no unmeasured confounding. Even in randomized studies the mediator may still be correlated with unobserved prognostic variables that affect the outcome, in which case the mediator's role in the causal process may not be inferred without bias. In the behavioural and social science literature very little attention has been given so far to the causal assumptions required for moderated mediation analysis. In this paper we focus on the index of moderated mediation, which measures by how much the mediated effect is larger or smaller for varying levels of the moderator. We show that in linear models this index can be estimated without bias in the presence of unmeasured common causes of the moderator, mediator and outcome under certain conditions. Importantly, one can thus use the test for moderated mediation to support evidence for mediation under less stringent confounding conditions. We illustrate our findings with data from a randomized experiment assessing the impact of being primed with social deception upon observer responses to others' pain, and from an observational study of individuals who ended a romantic relationship assessing the effect of attachment anxiety during the relationship on mental distress 2 years after the break-up.
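For reference, one common linear formulation of the index of moderated mediation is sketched below; the notation is ours and the specific model equations are an assumption, not necessarily those used by the authors.

```latex
% One common linear formulation (notation ours):
% mediator model:  M = a_0 + a_1 X + a_2 W + a_3 XW + \varepsilon_M
% outcome model:   Y = b_0 + c' X + b M + \varepsilon_Y
% indirect effect at moderator level W:  (a_1 + a_3 W)\, b
\[
  \text{index of moderated mediation} \;=\; a_3\, b ,
\]
% the amount by which the mediated effect changes per unit increase in W.
```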
NONLINEAR MODELS FOR DESCRIPTION OF CACAO FRUIT GROWTH WITH ASSUMPTION VIOLATIONS
Directory of Open Access Journals (Sweden)
JOEL AUGUSTO MUNIZ
2017-01-01
Full Text Available Cacao (Theobroma cacao L.) is an important fruit in the Brazilian economy, mainly cultivated in the southern part of the State of Bahia. The optimal stage for harvesting is a major factor for fruit quality, and knowledge of its growth curves can help, especially in identifying the ideal maturation stage for harvesting. Nonlinear regression models have been widely used for the description of growth curves. However, several studies on this subject do not consider residual analysis, the existence of a possible dependence between longitudinal observations, or heterogeneity of the sample variance, compromising the quality of the modeling. The objective of this work was to compare the fit of nonlinear regression models, considering residual analysis and assumption violations, in the description of cacao (clone Sial-105) fruit growth. The data evaluated were extracted from Brito and Silva (1983), who conducted the experiment at the Cacao Research Center, Ilheus, State of Bahia. The variables fruit length, diameter and volume as a function of fruit age were studied. The use of weighting and the incorporation of residual dependence were efficient, since the modeling became more consistent, improving the model fit. Considering a first-order autoregressive structure, when needed, leads to a significant reduction in the residual standard deviation, making the estimates more reliable. The Logistic model was the most efficient for the description of cacao fruit growth.
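A minimal sketch of fitting a logistic growth curve with nonlinear least squares is shown below; the data are simulated and the three-parameter form is an illustrative choice, not the exact parameterization or the cacao measurements used in the study.

```python
import numpy as np
from scipy.optimize import curve_fit

def logistic(t, A, B, k):
    """Three-parameter logistic growth curve: asymptote A, shape B, rate k."""
    return A / (1.0 + B * np.exp(-k * t))

# Illustrative data (simulated, not the measurements of Brito & Silva, 1983).
t = np.linspace(0, 180, 25)                     # fruit age, days
rng = np.random.default_rng(1)
obs = logistic(t, 20.0, 30.0, 0.06) + rng.normal(0, 0.4, t.size)

# Heteroscedastic variance could be handled through curve_fit's `sigma`
# argument; an AR(1) residual structure would require a generalised
# least-squares fit instead of this ordinary least-squares sketch.
params, cov = curve_fit(logistic, t, obs, p0=[15.0, 10.0, 0.05])
print("A, B, k =", params)
```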
Kendall, William L.; Hines, James E.; Nichols, James D.; Grant, Evan H. Campbell
2013-01-01
Occupancy statistical models that account for imperfect detection have proved very useful in several areas of ecology, including species distribution and spatial dynamics, disease ecology, and ecological responses to climate change. These models are based on the collection of multiple samples at each of a number of sites within a given season, during which it is assumed the species is either absent or present and available for detection while each sample is taken. However, for some species, individuals are only present or available for detection seasonally. We present a statistical model that relaxes the closure assumption within a season by permitting staggered entry and exit times for the species of interest at each site. Based on simulation, our open model eliminates bias in occupancy estimators and in some cases increases precision. The power to detect the violation of closure is high if detection probability is reasonably high. In addition to providing more robust estimation of occupancy, this model permits comparison of phenology across sites, species, or years, by modeling variation in arrival or departure probabilities. In a comparison of four species of amphibians in Maryland we found that two toad species arrived at breeding sites later in the season than a salamander and frog species, and departed from sites earlier.
Pipień, M.
2008-09-01
We present the results of an application of Bayesian inference in testing the relation between risk and return on financial instruments. On the basis of the Intertemporal Capital Asset Pricing Model proposed by Merton, we built a general sampling distribution suitable for analysing this relationship. The most important feature of our assumptions is that the skewness of the conditional distribution of returns is used as an alternative source of the relation between risk and return. This general specification relates to the Skewed Generalized Autoregressive Conditionally Heteroscedastic-in-Mean model. In order to make the conditional distribution of financial returns skewed, we considered a unified approach based on the inverse probability integral transformation. In particular, we applied the hidden truncation mechanism, inverse scale factors, the order statistics concept, Beta and Bernstein distribution transformations, and also a constructive method. Based on the daily excess returns of the Warsaw Stock Exchange Index, we checked the empirical importance of the conditional skewness assumption for the relation between risk and return on the Warsaw Stock Market. We present posterior probabilities of all competing specifications as well as the posterior analysis of the positive sign of the tested relationship.
Ashby, Nathaniel J S; Jekel, Marc; Dickert, Stephan; Glöckner, Andreas
2016-12-01
Recent research makes increasing use of eye-tracking methodologies to generate and test process models. Overall, such research suggests that attention, generally indexed by fixations (gaze duration), plays a critical role in the construction of preference, although the methods used to support this supposition differ substantially. In two studies we empirically test prototypical versions of prominent processing assumptions against one another and against several base models. We find that general evidence accumulation processes provide a good fit to the data. An accumulation process that assumes leakage and temporal variability in evidence weighting (i.e., a primacy effect) fits the aggregate data well, both in terms of choices and decision times, and does so across varying types of choices (e.g., charitable giving and hedonic consumption) and numbers of options. However, when comparing models at the level of the individual, for a majority of participants simpler models capture choice data better. The theoretical and practical implications of these findings are discussed.
Pappas, C.; Fatichi, S.; Leuzinger, S.; Burlando, P.
2012-04-01
Dynamic vegetation models have been widely used for analyzing ecosystem dynamics and climate feedbacks. Their performance has been tested extensively against observations and in model intercomparison studies. In the present study, the LPJ-GUESS state-of-the-art ecosystem model was evaluated with respect to its structure, hypotheses, and parameterization by performing a global sensitivity analysis (GSA). The study aims at examining potential model limitations, particularly with regard to regional and watershed scale applications. A detailed GSA based on variance decomposition is presented to investigate the structural assumptions of the model and to highlight processes and parameters that cause the highest variability in the outputs. First-order and total sensitivity indices were calculated for each of the parameters using Sobol's methodology. In order to elucidate the role of climate in model sensitivity, synthetic climate scenarios were generated based on climatic data from Switzerland. The results clearly indicate a very high sensitivity of LPJ-GUESS to photosynthetic parameters. Intrinsic quantum efficiency alone is able to explain about 60% of the variability in vegetation carbon fluxes and pools for most of the investigated climate conditions. Processes related to light were also found to be important, together with parameters affecting plant structure (growth, establishment and mortality). The model shows minor sensitivity to hydrological and soil texture parameters, questioning its skill in representing spatial vegetation heterogeneity at regional or watershed scales. We conclude that the structure of LPJ-GUESS, and possibly that of other structurally similar dynamic vegetation models, may need to be reconsidered. Specifically, the oversensitivity of the photosynthetic component deserves particular attention, as it seems to contradict an increasing number of observations suggesting that photosynthesis may be a consequence rather than the driver of plant growth.
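The variance-based GSA step can be sketched as follows, assuming the SALib package and a toy stand-in function in place of LPJ-GUESS; the parameter names and bounds are illustrative, not the model's actual parameters.

```python
import numpy as np
from SALib.sample import saltelli
from SALib.analyze import sobol

# Toy stand-in for the vegetation model: any function mapping parameters
# to a scalar output (a contrived nonlinear response, not LPJ-GUESS).
def toy_model(x):
    q_eff, light, mort = x
    return q_eff ** 2 * (1.0 + 0.5 * light) - 0.1 * mort

problem = {
    "num_vars": 3,
    "names": ["quantum_efficiency", "light_extinction", "mortality"],
    "bounds": [[0.02, 0.12], [0.3, 0.7], [0.0, 0.1]],
}

X = saltelli.sample(problem, 1024)           # Saltelli sampling scheme
Y = np.apply_along_axis(toy_model, 1, X)     # run the "model" for each sample
Si = sobol.analyze(problem, Y)               # variance decomposition

print("first-order indices:", Si["S1"])
print("total indices:      ", Si["ST"])
```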
Menking, K. M.; Anderson, R. Y.; Syed, K. H.; Shafike, N. G.
2002-12-01
The climatic conditions leading to highstands of "pluvial" Lake Estancia in central New Mexico have been a matter of considerable debate, resulting in a wide range of estimates for Pleistocene precipitation and temperature in the southwestern United States. Using a simple hydrologic balance approach, Leopold (1951) calculated that precipitation was 50% greater than modern, based on the assumption that summer temperatures were 9°C colder while winter temperatures were unchanged. In contrast, Galloway (1970) called on temperature decreases of 10-11°C throughout the year and a reduction in mean annual precipitation of 14% to raise Lake Estancia to its highstand. In still another study, Brakenridge suggested that highstands could be achieved through no change in precipitation if monthly temperatures were reduced by 7-8°C. Experiments with three physically based, continuous-time models to simulate surface runoff (USDA Soil and Water Assessment Tool), groundwater flow (MODFLOW with the LAK2 package), and lake evaporation (the lake energy balance model of Hostetler and Bartlein, 1990) indicate that none of these proposed full-glacial climate scenarios could have produced a highstand lake. In particular, previous workers appear to have overestimated the reduction in evaporation rates associated with their proposed temperature changes, suggesting that using empirical relationships between modern air temperature and evaporation to predict late Pleistocene evaporation is problematic. Furthermore, model-determined reductions in lake evaporation are insufficient to allow for lake expansion as suggested by Galloway and Brakenridge. Even under Leopold's assumption that precipitation increased by 50%, modeled runoff appears to be insufficient to raise Lake Estancia more than a few meters above the lake floor.
Walker, A. P.; De Kauwe, M. G.; Medlyn, B. E.; Zaehle, S.; Luus, K. A.; Ryan, E.; Xia, J.; Norby, R. J.
2015-12-01
Plant photosynthetic rates increase and stomatal apertures decrease in response to elevated atmospheric CO2 (eCO2), increasing both plant carbon (C) availability and water use efficiency. These physiological responses to eCO2 are well characterised and understood; however, the ecological effects of these responses as they cascade through a suite of plant and ecosystem processes are complex and subject to multiple interactions and feedbacks. Therefore the response of the terrestrial carbon sink to increasing atmospheric CO2 remains the largest uncertainty in global C cycle modelling to date, and is a huge contributor to uncertainty in climate change projections. Phase 2 of the FACE Model-Data Synthesis (FACE-MDS) project synthesises ecosystem observations from five long-term Free-Air CO2 Enrichment (FACE) experiments and one open top chamber (OTC) experiment to evaluate the assumptions of a suite of terrestrial ecosystem models. The experiments are: the evergreen needleleaf Duke Forest FACE (NC), the deciduous broadleaf Oak Ridge FACE (TN), the prairie heating and FACE (WY), the Nevada desert FACE, and the evergreen scrub oak OTC (FL). An assumption-centered approach is being used to analyse: the interaction between eCO2 and water limitation on plant productivity; the interaction between eCO2 and temperature on plant productivity; whether increased rates of soil decomposition observed in many eCO2 experiments can account for model deficiencies in N uptake shown during Phase 1 of the FACE-MDS; and tracing carbon through the ecosystem to identify the exact cause of changes in ecosystem C storage.
Elshorbagy, Amin; Alam, Shahabul
2015-04-01
(AMPs) is presented. For constructing IDF-curves, only AMPs of different durations are needed. Strong correlation between the AMPs at the coarse-grid scale as output from GCMs and AMPs at the local finer scale is observed in many locations worldwide even though such a correlation may not exist between the corresponding time series of continuous precipitation records. The use of the GP technique, in particular its genetic symbolic regression variant, for downscaling the annual maximum precipitation is further expanded in two ways. First, the exploration and feature extraction capabilities of GP are utilized to develop both GCM-variant and GCM-invariant downscaling models/mathematical expressions. Second, the developed models as well as clustering methods and statistical tests are used to investigate a fundamental assumption of all statistical downscaling methods; that is the validity of the downscaling relationship developed based on a historical time period (e.g., 1960-1990) for the same task during future periods (e.g., up to year 2100). The proposed approach is applied to the case of constructing IDF curves for the City of Saskatoon, Canada. This study reveals that developing a downscaling relationship that is generic and GCM-invariant might lead to more reliable downscaling of future projections, even though the higher reliability comes at the cost of accuracy.
Estimating ETAS: the effects of truncation, missing data, and model assumptions
Seif, Stefanie; Mignan, Arnaud; Zechar, Jeremy; Werner, Maximilian; Wiemer, Stefan
2016-04-01
The Epidemic-Type Aftershock Sequence (ETAS) model is widely used to describe the occurrence of earthquakes in space and time, but there has been little discussion of the limits of, and influences on, its estimation. What has been established is that ETAS parameter estimates are influenced by missing data (e.g., earthquakes are not reliably detected during lively aftershock sequences) and by simplifying assumptions (e.g., that aftershocks are isotropically distributed). In this article, we investigate the effect of truncation: how do parameter estimates depend on the cut-off magnitude, Mcut, above which parameters are estimated? We analyze catalogs from southern California and Italy and find that parameter variations as a function of Mcut are caused by (i) changing sample size (which affects, e.g., Omori's c constant) or (ii) an intrinsic dependence on Mcut (as Mcut increases, absolute productivity and background rate decrease). We also explore the influence of another form of truncation - the finite catalog length - that can bias estimators of the branching ratio. Being also a function of Omori's p-value, the true branching ratio is underestimated by 45% to 5% for 1.05 < p < 1.2. The ETAS productivity parameters (α and K0) and Omori's c-value are significantly changed only for low Mcut = 2.5. We further find that conventional estimation errors for these parameters, inferred from simulations that do not account for aftershock incompleteness, are underestimated by, on average, a factor of six.
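For reference, the temporal ETAS conditional intensity in one common parameterization (our notation; the spatial kernel is omitted) involves exactly the parameters whose estimates the abstract discusses:

```latex
% Temporal ETAS conditional intensity (spatial kernel omitted; \mu is the
% background rate):
\[
  \lambda(t \mid \mathcal{H}_t)
  \;=\; \mu \;+\; \sum_{i:\, t_i < t}
        K_0\, e^{\alpha (M_i - M_{\mathrm{cut}})}\,
        \bigl(t - t_i + c\bigr)^{-p},
\]
% with productivity parameters K_0 and \alpha and Omori parameters c and p,
% the quantities whose dependence on M_cut is reported above.
```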
TENVERGERT, E; GILLESPIE, M; KINGMA, J
1993-01-01
This paper shows how to use the log-linear subroutine of SPSS to fit the Rasch model. It also shows how to fit less restrictive models obtained by relaxing specific assumptions of the Rasch model. Conditional maximum likelihood estimation was achieved by including dummy variables for the total score
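For reference, the standard Rasch model that the log-linear formulation reproduces is:

```latex
% Rasch model in its standard form (person ability \theta_v, item difficulty b_i):
\[
  P(X_{vi} = 1 \mid \theta_v, b_i)
  \;=\; \frac{\exp(\theta_v - b_i)}{1 + \exp(\theta_v - b_i)} .
\]
% Conditioning on the total score removes \theta_v from the likelihood, which
% is what the dummy variables for the total score achieve in the log-linear fit.
```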
Directory of Open Access Journals (Sweden)
L. Meng
2012-07-01
Full Text Available Methane emissions from natural wetlands and rice paddies constitute a large proportion of atmospheric methane, but the magnitude and year-to-year variation of these methane sources are still unpredictable. Here we describe and evaluate the integration of a methane biogeochemical model (CLM4Me; Riley et al., 2011) into the Community Land Model 4.0 (CLM4CN) in order to better explain spatial and temporal variations in methane emissions. We test new functions for soil pH and redox potential that impact microbial methane production in soils. We also constrain aerenchyma in plants in always-inundated areas in order to better represent wetland vegetation. Satellite inundated fraction is explicitly prescribed in the model, because there are large differences between simulated fractional inundation and satellite observations, and thus we do not use CLM4-simulated hydrology to predict inundated areas. A rice paddy module is also incorporated into the model, where the fraction of land used for rice production is explicitly prescribed. The model is evaluated at the site level with vegetation cover and water table prescribed from measurements. Explicit site-level evaluations of simulated methane emissions are quite different than evaluating the grid-cell averaged emissions against available measurements. Using a baseline set of parameter values, our model-estimated average global wetland emissions for the period 1993–2004 were 256 Tg CH4 yr−1 (including the soil sink), and rice paddy emissions in the year 2000 were 42 Tg CH4 yr−1. Tropical wetlands contributed 201 Tg CH4 yr−1, or 78% of the global wetland flux. Northern latitude (>50° N) systems contributed 12 Tg CH4 yr−1. However, sensitivity studies show a large range (150–346 Tg CH4 yr−1) in predicted global methane emissions (excluding emissions from rice paddies). The large range is
Directory of Open Access Journals (Sweden)
L. Meng
2011-06-01
Full Text Available Methane emissions from natural wetlands and rice paddies constitute a large proportion of atmospheric methane, but the magnitude and year-to-year variation of these methane sources is still unpredictable. Here we describe and evaluate the integration of a methane biogeochemical model (CLM4Me; Riley et al., 2011) into the Community Land Model 4.0 (CLM4CN) in order to better explain spatial and temporal variations in methane emissions. We test new functions for soil pH and redox potential that impact microbial methane production in soils. We also constrain aerenchyma in plants in always-inundated areas in order to better represent wetland vegetation. Satellite inundated fraction is explicitly prescribed in the model because there are large differences between simulated fractional inundation and satellite observations. A rice paddy module is also incorporated into the model, where the fraction of land used for rice production is explicitly prescribed. The model is evaluated at the site level with vegetation cover and water table prescribed from measurements. Explicit site-level evaluations of simulated methane emissions are quite different than evaluating the grid-cell averaged emissions against available measurements. Using a baseline set of parameter values, our model-estimated average global wetland emissions for the period 1993–2004 were 256 Tg CH4 yr−1, and rice paddy emissions in the year 2000 were 42 Tg CH4 yr−1. Tropical wetlands contributed 201 Tg CH4 yr−1, or 78 % of the global wetland flux. Northern latitude (>50° N) systems contributed 12 Tg CH4 yr−1. We expect this latter number may be an underestimate due to the low high-latitude inundated area captured by satellites and unrealistically low high-latitude productivity and soil carbon predicted by CLM4. Sensitivity analysis showed a large range (150–346 Tg CH4 yr−1) in
Fundamental Physics and Model Assumptions in Turbulent Combustion Models for Aerospace Propulsion
2014-06-01
Henry, Kevin; Wood, Nathan J.; Frazier, Tim G.
2017-01-01
Tsunami evacuation planning in coastal communities is typically focused on local events where at-risk individuals must move on foot in a matter of minutes to safety. Less attention has been placed on distant tsunamis, where evacuations unfold over several hours, are often dominated by vehicle use and are managed by public safety officials. Traditional traffic simulation models focus on estimating clearance times but often overlook the influence of varying population demand, alternative modes, background traffic, shadow evacuation, and traffic management alternatives. These factors are especially important for island communities with limited egress options to safety. We use the coastal community of Balboa Island, California (USA), as a case study to explore the range of potential clearance times prior to wave arrival for a distant tsunami scenario. We use a first-in–first-out queuing simulation environment to estimate variations in clearance times, given varying assumptions of the evacuating population (demand) and the road network over which they evacuate (supply). Results suggest clearance times are less than wave arrival times for a distant tsunami, except when we assume maximum vehicle usage for residents, employees, and tourists for a weekend scenario. A two-lane bridge to the mainland was the primary traffic bottleneck, thereby minimizing the effect of departure times, shadow evacuations, background traffic, boat-based evacuations, and traffic light timing on overall community clearance time. Reducing vehicular demand generally reduced clearance time, whereas improvements to road capacity had mixed results. Finally, failure to recognize non-residential employee and tourist populations in the vehicle demand substantially underestimated clearance time.
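The queuing idea can be illustrated with a toy first-in, first-out bottleneck simulation, given below as a sketch; the single-server abstraction of the bridge, the demand figures, and the service rates are illustrative assumptions, not the Balboa Island values.

```python
import numpy as np

def clearance_time(n_vehicles, service_rate_per_min, window_min, seed=0):
    """Toy FIFO queue: vehicles join over a departure window and pass a single
    bottleneck (e.g. a two-lane bridge) served at a fixed rate.  Returns the
    time in minutes at which the last vehicle clears the bottleneck."""
    rng = np.random.default_rng(seed)
    arrivals = np.sort(rng.uniform(0.0, window_min, n_vehicles))
    service = 1.0 / service_rate_per_min       # minutes per vehicle
    t = 0.0
    for a in arrivals:
        t = max(t, a) + service                # wait for arrival, then serve
    return t

# Illustrative comparison: reducing demand vs. increasing bottleneck capacity.
print(clearance_time(4000, 30, 120))   # base case
print(clearance_time(2000, 30, 120))   # reduced vehicle demand
print(clearance_time(4000, 60, 120))   # increased road capacity
```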
Energy Technology Data Exchange (ETDEWEB)
Jakob, A
2004-07-01
In this report a comprehensive overview on the matrix diffusion of solutes in fractured crystalline rocks is presented. Some examples from observations in crystalline bedrock are used to illustrate that matrix diffusion indeed acts on various length scales. Fickian diffusion is discussed in detail followed by some considerations on rock porosity. Due to the fact that the dual-porosity medium model is a very common and versatile method for describing solute transport in fractured porous media, the transport equations and the fundamental assumptions, approximations and simplifications are discussed in detail. There is a variety of geometrical aspects, processes and events which could influence matrix diffusion. The most important of these, such as, e.g., the effect of the flow-wetted fracture surface, channelling and the limited extent of the porous rock for matrix diffusion etc., are addressed. In a further section open issues and unresolved problems related to matrix diffusion are mentioned. Since matrix diffusion is one of the key retarding processes in geosphere transport of dissolved radionuclide species, matrix diffusion was consequently taken into account in past performance assessments of radioactive waste repositories in crystalline host rocks. Some issues regarding matrix diffusion are site-specific while others are independent of the specific situation of a planned repository for radioactive wastes. Eight different performance assessments from Finland, Sweden and Switzerland were considered with the aim of finding out how matrix diffusion was addressed, and whether a consistent picture emerges regarding the varying methodology of the different radioactive waste organisations. In the final section of the report some conclusions are drawn and an outlook is given. An extensive bibliography provides the reader with the key papers and reports related to matrix diffusion. (author)
Approaching models of nursing from a postmodernist perspective.
Lister, P
1991-02-01
This paper explores some questions about the use of models of nursing. These questions make various assumptions about the nature of models of nursing, in general and in particular. Underlying these assumptions are various philosophical positions which are explored through an introduction to postmodernist approaches in philosophical criticism. To illustrate these approaches, a critique of the Roper et al. model is developed, and more general attitudes towards models of nursing are examined. It is suggested that postmodernism offers a challenge to many of the assumptions implicit in models of nursing, and that a greater awareness of these assumptions should lead to nursing care being better informed where such models are in use.
Nogawa, Tomoaki
2012-10-18
We examine the effectiveness of assuming an equal probability for states far from equilibrium. For this aim, we propose a method to construct a master equation for extensive variables describing nonstationary nonequilibrium dynamics. The key point of the method is the assumption that transient states are equivalent to the equilibrium state that has the same extensive variables, i.e., an equal probability holds for microscopic states in nonequilibrium. We demonstrate an application of this method to the critical relaxation of the two-dimensional Potts model by Monte Carlo simulations. While the one-variable description, which is adequate for equilibrium, yields relaxation dynamics that are very fast, the redundant two-variable description well reproduces the true dynamics quantitatively. These results suggest that some class of the nonequilibrium state can be described with a small extension of degrees of freedom, which may lead to an alternative way to understand nonequilibrium phenomena. © 2012 American Physical Society.
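For orientation, a compact Metropolis sketch of the two-dimensional q-state Potts model, of the kind such relaxation studies start from, is given below; the lattice size, temperature, and sweep count are illustrative, and this is not the authors' simulation code.

```python
import numpy as np

def metropolis_potts(L=32, q=3, beta=1.0, sweeps=200, seed=0):
    """Metropolis dynamics for the 2D q-state Potts model on an L x L
    periodic lattice.  The energy counts unequal nearest-neighbour pairs."""
    rng = np.random.default_rng(seed)
    s = rng.integers(0, q, size=(L, L))
    for _ in range(sweeps * L * L):
        i, j = rng.integers(0, L, size=2)
        new = rng.integers(0, q)
        nbrs = [s[(i + 1) % L, j], s[(i - 1) % L, j],
                s[i, (j + 1) % L], s[i, (j - 1) % L]]
        dE = sum(n != new for n in nbrs) - sum(n != s[i, j] for n in nbrs)
        if dE <= 0 or rng.random() < np.exp(-beta * dE):
            s[i, j] = new
    return s

lattice = metropolis_potts()
# Fraction of sites in the majority state, a crude order parameter.
print(np.bincount(lattice.ravel()).max() / lattice.size)
```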
Whaley, Arthur L.; Noel, La Tonya
2013-01-01
The present study tested the model minority and inferior minority assumptions by examining the relationship between academic performance and measures of behavioral health in a subsample of 3,008 (22%) participants in a nationally representative, multicultural sample of 13,601 students in the 2001 Youth Risk Behavioral Survey, comparing Asian…
West, Stephen G.; Ryu, Ehri
2007-01-01
Nomothetic and idiographic approaches to research have long been in tension. John Nesselroade et al. have been at the forefront of a constructive rapprochement between these traditions. Heretofore their efforts have assumed a common measurement structure across persons. They have primarily focused on modeling relationships within persons, which…
Laínez, José M; Orcun, Seza; Pekny, Joseph F; Reklaitis, Gintaras V; Suvannasankha, Attaya; Fausel, Christopher; Anaissie, Elias J; Blau, Gary E
2014-01-01
Variable metabolism, dose-dependent efficacy, and a narrow therapeutic target of cyclophosphamide (CY) suggest that dosing based on individual pharmacokinetics (PK) will improve efficacy and minimize toxicity. Real-time individualized CY dose adjustment was previously explored using a maximum a posteriori (MAP) approach based on a five serum-PK sampling in patients with hematologic malignancy undergoing stem cell transplantation. The MAP approach resulted in an improved toxicity profile without sacrificing efficacy. However, extensive PK sampling is costly and not generally applicable in the clinic. We hypothesize that the assumption-free Bayesian approach (AFBA) can reduce sampling requirements, while improving the accuracy of results. Retrospective analysis of previously published CY PK data from 20 patients undergoing stem cell transplantation. In that study, Bayesian estimation based on the MAP approach of individual PK parameters was accomplished to predict individualized day-2 doses of CY. Based on these data, we used the AFBA to select the optimal sampling schedule and compare the projected probability of achieving the therapeutic end points. By optimizing the sampling schedule with the AFBA, an effective individualized PK characterization can be obtained with only two blood draws at 4 and 16 hours after administration on day 1. The second-day doses selected with the AFBA were significantly different than the MAP approach and averaged 37% higher probability of attaining the therapeutic targets. The AFBA, based on cutting-edge statistical and mathematical tools, allows an accurate individualized dosing of CY, with simplified PK sampling. This highly accessible approach holds great promise for improving efficacy, reducing toxicities, and lowering treatment costs. © 2013 Pharmacotherapy Publications, Inc.
Johnson, Benjamin T.; Petty, Grant W.; Skofronick-Jackson, Gail
2012-01-01
A simplified framework is presented for assessing the qualitative sensitivities of computed microwave properties, satellite brightness temperatures, and radar reflectivities to assumptions concerning the physical properties of ice-phase hydrometeors. Properties considered included the shape parameter of a gamma size distribution and the melted-equivalent mass median diameter D0, the particle density, the dielectric mixing formula, and the choice of complex index of refraction for ice. We examine these properties at selected radiometer frequencies of 18.7, 36.5, 89.0, and 150.0 GHz, and radar frequencies of 2.8, 13.4, 35.6, and 94.0 GHz, consistent with existing and planned remote sensing instruments. Passive and active microwave observables of ice particles are found to be extremely sensitive to the melted-equivalent mass median diameter D0 of the size distribution. Similarly large sensitivities are found for variations in the ice volume fraction whenever the geometric mass median diameter exceeds approximately 1/8th of the wavelength. At 94 GHz the two-way path-integrated attenuation is potentially large for dense compact particles. The distribution parameter μ has a relatively weak effect on any observable: less than 1-2 K in brightness temperature and up to 2.7 dB difference in the effective radar reflectivity. Reversal of the roles of ice and air in the Maxwell-Garnett dielectric mixing formula leads to a significant change in both microwave brightness temperature (10 K) and radar reflectivity (2 dB). The choice of Warren (1984) or Warren and Brandt (2008) for the complex index of refraction of ice can produce a 3%-4% change in the brightness temperature depression.
Rubin, David C.; Berntsen, Dorthe; Bohni, Malene Klindt
2008-01-01
In the mnemonic model of posttraumatic stress disorder (PTSD), the current memory of a negative event, not the event itself, determines symptoms. The model is an alternative to the current event-based etiology of PTSD represented in the "Diagnostic and Statistical Manual of Mental Disorders" (4th ed., text rev.; American Psychiatric Association,…
CSIR Research Space (South Africa)
Casini, G
2012-10-01
Full Text Available possibilities for conceptual data modeling. It also raises the question of how existing conceptual models using ER, UML or ORM could be translated into Description Logics (DLs), a family of logics that have proved to be particularly appropriate for formalizing...
L. V. Nedorezov
2014-01-01
Stochastic model of migrations of individuals within the limits of a finite domain on a plane is considered. It is assumed that the population size scale is homogeneous, and that there does not exist an interval of optimal values of population size (the Allee effect is not realized for the population). For every fixed value of population size the number of interactions between individuals is calculated (as an average in space and time). Correspondence between several classic models and numbers of interactions between i...
Directory of Open Access Journals (Sweden)
L.V. Nedorezov
2014-09-01
Full Text Available Stochastic model of migrations of individuals within the limits of a finite domain on a plane is considered. It is assumed that the population size scale is homogeneous, and that there does not exist an interval of optimal values of population size (the Allee effect is not realized for the population). For every fixed value of population size the number of interactions between individuals is calculated (as an average in space and time). Correspondence between several classic models and numbers of interactions between individuals is analyzed.
Bazrgari, Babak; Nussbaum, Maury A; Madigan, Michael L
2012-01-01
The use of system identification to quantify trunk mechanical properties is growing in biomechanics research. The effects of several experimental and modelling factors involved in the system identification of trunk mechanical properties were investigated. Trunk kinematics and kinetics were measured in six individuals when exposed to sudden trunk perturbations. Effects of motion sensor positioning and properties of elements between the perturbing device and the trunk were investigated by adopting different models for system identification. Results showed that by measuring trunk kinematics at a location other than the trunk surface, the deformation of soft tissues is erroneously included into trunk kinematics and results in the trunk being predicted as a more damped structure. Results also showed that including elements between the trunk and the perturbing device in the system identification model did not substantially alter model predictions. Other important parameters that were found to substantially affect predictions were the cut-off frequency used when low-pass filtering raw data and the data window length used to estimate trunk properties.
Empirical Tests of the Assumptions Underlying Models for Foreign Exchange Rates.
1984-03-01
Martinengo (1980) extends a model by Dornbusch (1976) in which market equilibrium is formalized in terms of interest rates, level of prices, public...
Arons, Alexander M M; Krabbe, Paul F M
2013-01-01
Interest is rising in measuring subjective health outcomes, such as treatment outcomes that are not directly quantifiable (functional disability, symptoms, complaints, side effects and health-related quality of life). Health economists in particular have applied probabilistic choice models in the ar
Oceanographic and behavioural assumptions in models of the fate of coral and coral reef fish larvae.
Wolanski, Eric; Kingsford, Michael J
2014-09-06
A predictive model of the fate of coral reef fish larvae in a reef system is proposed that combines the oceanographic processes of advection and turbulent diffusion with the biological process of horizontal swimming controlled by olfactory and auditory cues within the timescales of larval development. In the model, auditory cues resulted in swimming towards the reefs when within hearing distance of the reef, whereas olfactory cues resulted in the larvae swimming towards the natal reef in open waters by swimming against the concentration gradients in the smell plume emanating from the natal reef. The model suggested that the self-seeding rate may be quite large, at least 20% for the larvae of rapidly developing reef fish species, which contrasted with a self-seeding rate less than 2% for non-swimming coral larvae. The predicted self-recruitment rate of reefs was sensitive to a number of parameters, such as the time at which the fish larvae reach post-flexion, the pelagic larval duration of the larvae, the horizontal turbulent diffusion coefficient in reefal waters and the horizontal swimming behaviour of the fish larvae in response to auditory and olfactory cues, for which better field data are needed. Thus, the model suggested that high self-seeding rates for reef fish are possible, even in areas where the 'sticky water' effect is minimal and in the absence of long-term trapping in oceanic fronts and/or large-scale oceanic eddies or filaments that are often argued to facilitate the return of the larvae after long periods of drifting at sea. © 2014 The Author(s) Published by the Royal Society. All rights reserved.
Assessing Model Assumptions for Turbulent Premixed Combustion at High Karlovitz Number
2015-09-03
Flames in the high-Karlovitz regime are characterized and modeled using Direct Numerical Simulations (DNS) with detailed chemistry. To enable the present...
Directory of Open Access Journals (Sweden)
Zulfiqar Ali Soomro
2014-12-01
Full Text Available Adhesion level control is necessary to avoid slippage between the rail wheelset and the track and to prevent derailment, ensuring smooth running of the rail vehicle. In this paper the dynamics of the wheelset, with velocities acting in three dimensions of the wheelset and rail track, are discussed, and the creep forces on each wheel in the longitudinal, lateral and spin directions are enumerated and computed for suitable modeling. The results have been simulated with Matlab code to observe the correlation between creepage and creep forces for detecting the adhesion level. Adhesion is identified by applying Coulomb's law for sliding friction, comparing the tangential and normal forces through the coefficient of friction.
The stochastic quasi-steady-state assumption: Reducing the model but not the noise
Srivastava, Rishi; Haseltine, Eric L.; Mastny, Ethan; Rawlings, James B.
2011-04-01
Highly reactive species at small copy numbers play an important role in many biological reaction networks. We have described previously how these species can be removed from reaction networks using stochastic quasi-steady-state singular perturbation analysis (sQSPA). In this paper we apply sQSPA to three published biological models: the pap operon regulation, a biochemical oscillator, and an intracellular viral infection. These examples demonstrate three different potential benefits of sQSPA. First, rare state probabilities can be accurately estimated from simulation. Second, the method typically results in fewer and better scaled parameters that can be more readily estimated from experiments. Finally, the simulation time can be significantly reduced without sacrificing the accuracy of the solution.
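For context, the sketch below shows a minimal direct-method (Gillespie) stochastic simulation of a toy network with a fast intermediate, the kind of full simulation that quasi-steady-state reductions such as sQSPA aim to simplify; the reactions and rate constants are illustrative, not one of the three published models.

```python
import numpy as np

def gillespie(x0, stoich, rates, propensities, t_end, seed=0):
    """Direct-method stochastic simulation of a reaction network."""
    rng = np.random.default_rng(seed)
    t, x = 0.0, np.array(x0, dtype=float)
    while t < t_end:
        a = propensities(x, rates)
        a0 = a.sum()
        if a0 == 0:
            break
        t += rng.exponential(1.0 / a0)          # time to the next reaction
        r = rng.choice(len(a), p=a / a0)        # which reaction fires
        x += stoich[r]
    return x

# Toy network with a fast intermediate I:  S -> I (fast), I -> S (fast), I -> P
stoich = np.array([[-1, +1, 0],    # S -> I
                   [+1, -1, 0],    # I -> S
                   [0, -1, +1]])   # I -> P
rates = np.array([50.0, 45.0, 1.0])

def props(x, k):
    S, I, P = x
    return np.array([k[0] * S, k[1] * I, k[2] * I])

print(gillespie([100, 0, 0], stoich, rates, props, t_end=2.0))
```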
Syed, Moin; Juang, Linda P
2014-04-01
The purpose of the present study was to test three fundamental theoretical propositions from Phinney's (1990) developmental model about the relations among ethnic identity, identity coherence, and psychological functioning: (a) ethnic identity is more strongly related to identity coherence for ethnic minorities than for Whites; (b) ethnic identity is more strongly related to psychological functioning for ethnic minorities than for Whites; and (c) identity coherence mediates the association between ethnic identity and psychological functioning for ethnic minorities, but not for Whites. These hypotheses were tested in three independent samples of ethnically diverse youth. In general, we found weak to moderate support for these three hypotheses, suggesting that the theoretically proposed differences in ethnic identity between ethnic minorities and Whites may not be supported by data. Implications for theory and measurement of ethnic identity are discussed.
Estimating ETAS: The effects of truncation, missing data, and model assumptions
Seif, Stefanie; Mignan, Arnaud; Zechar, Jeremy Douglas; Werner, Maximilian Jonas; Wiemer, Stefan
2017-01-01
The Epidemic-Type Aftershock Sequence (ETAS) model is widely used to describe the occurrence of earthquakes in space and time, but there has been little discussion dedicated to the limits of, and influences on, its estimation. Among the possible influences we emphasize in this article the effect of the cutoff magnitude, Mcut, above which parameters are estimated; the finite length of earthquake catalogs; and missing data (e.g., during lively aftershock sequences). We analyze catalogs from Southern California and Italy and find that some parameters vary as a function of Mcut due to changing sample size (which affects, e.g., Omori's c constant) or an intrinsic dependence on Mcut (as Mcut increases, absolute productivity and background rate decrease). We also explore the influence of another form of truncation—the finite catalog length—that can bias estimators of the branching ratio. Being also a function of Omori's p value, the true branching ratio is underestimated by 45% to 5% for 1.05 < p < 1.2. Finite sample size affects the variation of the branching ratio estimates. Moreover, we investigate the effect of missing aftershocks and find that the ETAS productivity parameters (α and K0) and the Omori's c and p values are significantly changed for Mcut < 3.5. We further find that conventional estimation errors for these parameters, inferred from simulations that do not account for aftershock incompleteness, are underestimated by, on average, a factor of 8.
Energy Technology Data Exchange (ETDEWEB)
FORSYTHE,JAMES C.; WENNER,CAREN A.
1999-11-03
The history of high consequence accidents is rich with events wherein the actions, or inaction, of humans were critical to the sequence of events preceding the accident. Moreover, it has been reported that human error may contribute to 80% of accidents, if not more (Dougherty and Fragola, 1988). Within the safety community, this reality is widely recognized and there is a substantially greater awareness of the human contribution to system safety today than has ever existed in the past. Despite these facts, and some measurable reduction in accident rates, when accidents do occur, there is a common lament: no matter how hard we try, we continue to have accidents. Accompanying this lament, there is often bewilderment expressed in statements such as, "There's no explanation for why he/she did what they did." It is believed that these statements are a symptom of inadequacies in how we think about humans and their role within technological systems. In particular, while there has never been a greater awareness of human factors, conceptual models of human involvement in engineered systems are often incomplete and in some cases inaccurate.
Institute of Scientific and Technical Information of China (English)
云天铨; 雷光龙
2003-01-01
Two kinds of mathematical expressions for the stock price can be shown to be completely the same under a certain equivalence relation of coefficients: one, based on a certain (deterministic) description, is the solution of the simplest differential equation (S.D.E.) obtained by a method similar to that used in solid mechanics; the other, based on an uncertain (statistical) description, is the assumption of the Black-Scholes model (A.B-S.M.), in which the density function of the stock price obeys a logarithmic normal distribution. The range of the solution of the S.D.E. has been shown to be suited only for normal cases (no profit, or lost-profit news, etc.) of the stock market, so the same range is suited for the A.B-S.M. as well.
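In standard notation (ours, not taken from the paper), the two descriptions referred to above are:

```latex
% Deterministic ("certain") growth:   dS/dt = \mu S  \;\Rightarrow\;  S(t) = S_0 e^{\mu t}
% Black-Scholes ("uncertain") model:  dS = \mu S\,dt + \sigma S\,dW_t , which implies
\[
  \ln S(t) \sim \mathcal{N}\!\Bigl(\ln S_0 + \bigl(\mu - \tfrac{1}{2}\sigma^2\bigr)t,\; \sigma^2 t\Bigr),
\]
% i.e., the stock price is lognormally distributed, which is the assumption the
% abstract relates to the solution of the simple differential equation.
```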
Energy Technology Data Exchange (ETDEWEB)
Menz, J. [Technische Univ. Freiburg (Germany). Inst. fuer Markscheidewesen und Geodaesie; Bian Shaofeng [Technical Univ. of Surveying and Mapping, Wuhan (China)
1998-10-01
The contribution shows that Hardy's multiquadric method leads to results that are similar in their structure to the predictions by collocation. On the basis of geostatistical model assumptions, equations for calculating the prediction error are presented, and the multiquadric method is compared with the collocation method on this basis. Equivalences between collocation and kriging are discussed, and information is presented on how predictions can be improved in the Bayesian sense. [German] In the following work it is first shown that the multiquadric method of HARDY leads to predictions that correspond in their structure to predictions by collocation. Under geostatistical model assumptions, formulas for calculating the prediction errors are given according to the law of error propagation. On the basis of these formulas, the multiquadric method is compared with collocation. Equivalences between collocation and kriging are pointed out, and it is indicated how the predictions can be improved in the Bayesian sense. (orig./MSK)
Directory of Open Access Journals (Sweden)
Veronika Brandstetter
2015-10-01
Full Text Available In automation plants, technical processes must be conducted in a way that products, substances, or services are produced reliably, with sufficient quality and with minimal strain on resources. A key driver in conducting these processes is the automation plant's control software, which controls the technical plant components and thereby affects the physical, chemical, and mechanical processes that take place in automation plants. To this end, the control software of an automation plant must adhere to strict process requirements arising from the technical processes and from the physical plant design. Currently, the validation of the control software often starts late in the engineering process – once the automation plant is almost completely constructed. However, as widely acknowledged, the later the control software of the automation plant is validated, the higher the effort for correcting revealed defects, which can lead to serious budget overruns and project delays. In this article we propose an approach that allows the early validation of automation control software against the technical plant processes and assumptions about the physical plant design by means of simulation. We demonstrate the application of our approach on the example of an actual plant project from the automation industry and present its technical implementation.
van der Sluijs, Jeroen P.; Arjan Wardekker, J.
2015-04-01
In order to enable anticipation and proactive adaptation, local decision makers increasingly seek detailed foresight about regional and local impacts of climate change. To this end, the Netherlands Models and Data-Centre implemented a pilot chain of sequentially linked models to project local climate impacts on hydrology, agriculture and nature under different national climate scenarios for a small region in the east of the Netherlands named Baakse Beek. The chain of models sequentially linked in that pilot includes a (future) weather generator and models of, respectively, subsurface hydrogeology, ground water stocks and flows, soil chemistry, vegetation development, crop yield and nature quality. These models typically have mismatching time step sizes and grid cell sizes. The linking of these models unavoidably involves making model assumptions that can hardly be validated, such as those needed to bridge the mismatches in spatial and temporal scales. Here we present and apply a method for the systematic critical appraisal of model assumptions that seeks to identify and characterize the weakest assumptions in a model chain. The critical appraisal of assumptions presented in this paper has been carried out ex post. For the case of the climate impact model chain for Baakse Beek, the three most problematic assumptions were found to be: land use and land management kept constant over time; model linking of (daily) ground water model output to the (yearly) vegetation model around the root zone; and aggregation of daily output of the soil hydrology model into yearly input of a so-called 'mineralization reduction factor' (calculated from annual average soil pH and daily soil hydrology) in the soil chemistry model. Overall, the method for critical appraisal of model assumptions presented and tested in this paper yields a rich qualitative insight into model uncertainty and model quality. It promotes reflectivity and learning in the modelling community, and leads to
Approaches to Modeling of Recrystallization
Directory of Open Access Journals (Sweden)
Håkan Hallberg
2011-10-01
Full Text Available Control of the material microstructure in terms of the grain size is a key component in tailoring material properties of metals and alloys and in creating functionally graded materials. To exert this control, reliable and efficient modeling and simulation of the recrystallization process whereby the grain size evolves is vital. The present contribution is a review paper, summarizing the current status of various approaches to modeling grain refinement due to recrystallization. The underlying mechanisms of recrystallization are briefly recollected and different simulation methods are discussed. Analytical and empirical models, continuum mechanical models and discrete methods as well as phase field, vertex and level set models of recrystallization will be considered. Such numerical methods have been reviewed previously, but with the present focus on recrystallization modeling and with a rapidly increasing amount of related publications, an updated review is called for. Advantages and disadvantages of the different methods are discussed in terms of applicability, underlying assumptions, physical relevance, implementation issues and computational efficiency.
Cheng, S. J.; Thomas, R. Q.; Wilkening, J. V.; Curtis, P.; Sharkey, T. D.; Nadelhoffer, K. J.
2015-12-01
Estimates of global land CO2 uptake vary widely across Earth system models. This uncertainty around model estimates of land-atmosphere CO2 fluxes may result from differences in how models parameterize and scale photosynthesis from the leaf to the global level. To test model assumptions about photosynthesis, we derive rates of maximum carboxylation (Vc,max), electron transport (J), and triose phosphate utilization (TPU) from in situ leaf measurements from a forest representative of the Great Lakes region. Leaf-level gas exchange measurements were collected across a temperature range from sun and shade leaves of canopy-dominant tree species typically grouped into the same plant functional type. We evaluate the influence of short-term increases in leaf temperature, nitrogen per leaf area (Narea), species, and leaf light environment on Vc,max, J, and TPU by testing contrasting model equations that isolate the influence of these factors on these rate-limiting steps in leaf photosynthesis. Results indicate that patterns in Vc,max are best explained by a model that includes temperature and Narea. However, J varied with species and leaf light environment in addition to temperature. TPU also varied with leaf light environment and possibly with temperature. These variations in J and TPU with species or between sun and shade leaves suggest that plant traits outside of Narea are needed to explain patterns in J and TPU. This study provides in situ evidence on how Vc,max, J, and TPU vary within a forest canopy and highlights how leaf responses to changes in climate, forest species composition, and canopy structure may alter forest CO2 uptake.
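As an illustration of the kind of contrasting equations tested, one common parameterization (our notation, not necessarily the exact forms used in the study) combines a linear dependence of the 25 °C rate on leaf nitrogen with an Arrhenius-type temperature response:

    V_{c,\max,25} = a + b\, N_{\mathrm{area}},
    \qquad
    V_{c,\max}(T_K) = V_{c,\max,25}\, \exp\!\left[ \frac{H_a\, (T_K - 298)}{298\, R\, T_K} \right]

where T_K is leaf temperature in kelvin, H_a an activation energy, and R the gas constant; analogous forms can be written for J and TPU.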
DEFF Research Database (Denmark)
Wiesen, S.; Fundamenski, W.; Wischmeier, M.
2011-01-01
A revised formulation of the perpendicular diffusive transport model in 2D multi-fluid edge codes is proposed. Based on theoretical predictions and experimental observations a dependence on collisionality is introduced into the transport model of EDGE2D–EIRENE. The impact on time-dependent JET ga...
Kouloumpi, Eleni; Vandenabeele, Peter; Lawson, Graham; Pavlidis, Vassilios; Moens, Luc
2007-08-13
A multi-method approach has been developed for the characterisation of the proteinaceous binding media, drying oils and pigments present in samples from the panel paintings of the Church of the Assumption in Cephalonia (Ionian Islands, Greece). The analytical protocol involved the use of scanning electron microscopy/energy dispersive X-ray analysis (SEM/EDX), Raman spectroscopy and gas chromatography. The identification of the pigments was achieved by SEM/EDX and Raman spectroscopy. The latter technique was also used for the detection of the binding media, while their characterisation was achieved by gas chromatographic analysis of ethyl chloroformate derivatives. The aim of this multi-method protocol was to obtain as much information as possible from the panel paintings of the Church of the Assumption, through non-destructive methods, before proceeding to gas chromatography. Little scientific information is available for the understanding of the construction technique and the materials used by the post-Byzantine artists and whatever is available comes mainly from artists' manuals. One of the aims of this paper is to provide a scientific background to the technology of the Ionian post-Byzantine icons.
Disastrous assumptions about community disasters
Energy Technology Data Exchange (ETDEWEB)
Dynes, R.R. [Univ. of Delaware, Newark, DE (United States). Disaster Research Center
1995-12-31
Planning for local community disasters is compounded with erroneous assumptions. Six problematic models are identified: agent facts, big accident, end of the world, media, command and control, administrative. Problematic assumptions in each of them are identified. A more adequate model centered on problem solving is identified. That there is a discrepancy between disaster planning efforts and the actual response experience seems rather universal. That discrepancy is symbolized by the graffiti which predictably surfaces on many walls in post-disaster locations: "First the earthquake, then the disaster." That contradiction is seldom reduced as a result of post-disaster critiques, since the most usual conclusion is that the plan was adequate but the "people" did not follow it. Another explanation will be provided here. A more plausible explanation for failure is that most planning efforts adopt a number of erroneous assumptions which affect the outcome. Those assumptions are infrequently changed or modified by experience.
Fowler, Laura D.; Randall, David A.
1996-03-01
The inclusion of cloud microphysical processes in general circulation models makes it possible to study the multiple interactions among clouds, the hydrological cycle, and radiation. The gaps between the temporal and spatial scales at which such cloud microphysical processes work and those at which general circulation models presently function force climate modelers to crudely parameterize and simplify the various interactions among the different water species (namely, water vapor, cloud water, cloud ice, rain, and snow) and to use adjustable parameters to which large-scale models can be highly sensitive. Accordingly, the authors have investigated the sensitivity of the climate, simulated with the Colorado State University general circulation model, to various aspects of the parameterization of cloud microphysical processes and its interactions with the cumulus convection and radiative transfer parameterizations. The results of 120-day sensitivity experiments corresponding to perpetual January conditions have been compared with those of a control simulation in order to 1) determine the importance of advecting cloud water, cloud ice, rain, and snow at the temporal and spatial scale resolutions presently used in the model; 2) study the importance of the formation of extended stratiform anvils at the tops of cumulus towers; 3) analyze the role of mixed-phase clouds in determining the partitioning among cloud water, cloud ice, rain, and snow and, hence, their impacts on the simulated cloud optical properties; 4) evaluate the sensitivity of the atmospheric moisture budget and precipitation rates to a change in the fall velocities of rain and snow; 5) determine the model's sensitivity to the prescribed thresholds of autoconversion of cloud water to rain and cloud ice to snow; and 6) study the impact of the collection of supercooled cloud water by snow, as well as accounting for the cloud optical properties of snow. Results are presented in terms of 30-day mean differences
Environment Assumptions for Synthesis
Chatterjee, Krishnendu; Jobstmann, Barbara
2008-01-01
The synthesis problem asks to construct a reactive finite-state system from an ω-regular specification. Initial specifications are often unrealizable, which means that there is no system that implements the specification. A common reason for unrealizability is that assumptions on the environment of the system are incomplete. We study the problem of correcting an unrealizable specification φ by computing an environment assumption ψ such that the new specification ψ → φ is realizable. Our aim is to construct an assumption ψ that constrains only the environment and is as weak as possible. We present a two-step algorithm for computing assumptions. The algorithm operates on the game graph that is used to answer the realizability question. First, we compute a safety assumption that removes a minimal set of environment edges from the graph. Second, we compute a liveness assumption that puts fairness conditions on some of the remaining environment edges. We show that the problem of findi...
Camp, Richard J.; Pratt, Thane K.; Gorresen, P. Marcos; Woodworth, Bethany L.; Jeffrey, John J.
2014-01-01
Freed and Cann (2013) criticized our use of linear models to assess trends in the status of Hawaiian forest birds through time (Camp et al. 2009a, 2009b, 2010) by questioning our sampling scheme, whether we met model assumptions, and whether we ignored short-term changes in the population time series. In the present paper, we address these concerns and reiterate that our results do not support the position of Freed and Cann (2013) that the forest birds in the Hakalau Forest National Wildlife Refuge (NWR) are declining, or that the federally listed endangered birds are showing signs of imminent collapse. On the contrary, our data indicate that the 21-year long-term trends for native birds in Hakalau Forest NWR are stable to increasing, especially in areas that have received active management.
Linking assumptions in amblyopia
LEVI, DENNIS M.
2017-01-01
Over the last 35 years or so, there has been substantial progress in revealing and characterizing the many interesting and sometimes mysterious sensory abnormalities that accompany amblyopia. A goal of many of the studies has been to try to make the link between the sensory losses and the underlying neural losses, resulting in several hypotheses about the site, nature, and cause of amblyopia. This article reviews some of these hypotheses, and the assumptions that link the sensory losses to specific physiological alterations in the brain. Despite intensive study, it turns out to be quite difficult to make a simple linking hypothesis, at least at the level of single neurons, and the locus of the sensory loss remains elusive. It is now clear that the simplest notion—that reduced contrast sensitivity of neurons in cortical area V1 explains the reduction in contrast sensitivity—is too simplistic. Considerations of noise, noise correlations, pooling, and the weighting of information also play a critically important role in making perceptual decisions, and our current models of amblyopia do not adequately take these into account. Indeed, although the reduction of contrast sensitivity is generally considered to reflect “early” neural changes, it seems plausible that it reflects changes at many stages of visual processing. PMID:23879956
Test of Poisson Failure Assumption.
1982-09-01
TEST OF POISSON FAILURE ASSUMPTION. Chapter 1. INTRODUCTION. 1.1 Background. In stockage models... precipitates a regular failure pattern; it is also possible that the coding of scheduled vs. unscheduled does not reflect what we would expect. Data
Campos, Jose Alejandro Gonzalez; Moraga, Paulina Saavedra; Del Pozo, Manuel Freire
2013-01-01
This paper introduces the generalized beta (GB) model as a new modeling tool in the educational assessment area and evaluation analysis, specifically. Unlike the normal model, the GB model allows us to capture some real characteristics of the data, and it is an important tool for understanding the phenomenon of learning. This paper develops a contrast with the…
Greenland, Sander; Mansournia, Mohammad Ali
2015-10-01
We describe how ordinary interpretations of causal models and causal graphs fail to capture important distinctions among ignorable allocation mechanisms for subject selection or allocation. We illustrate these limitations in the case of random confounding and designs that prevent such confounding. In many experimental designs individual treatment allocations are dependent, and explicit population models are needed to show this dependency. In particular, certain designs impose unfaithful covariate-treatment distributions to prevent random confounding, yet ordinary causal graphs cannot discriminate between these unconfounded designs and confounded studies. Causal models for populations are better suited for displaying these phenomena than are individual-level models, because they allow representation of allocation dependencies as well as outcome dependencies across individuals. Nonetheless, even with this extension, ordinary graphical models still fail to capture distinctions between hypothetical superpopulations (sampling distributions) and observed populations (actual distributions), although potential-outcome models can be adapted to show these distinctions and their consequences.
Wrong assumptions in the financial crisis
Aalbers, M.B.
2009-01-01
Purpose - The purpose of this paper is to show how some of the assumptions about the current financial crisis are wrong because they misunderstand what takes place in the mortgage market. Design/methodology/approach - The paper discusses four wrong assumptions: one related to regulation, one to leve
Quantum Machine and SR Approach: a Unified Model
Garola, Claudio; Pykacz, Jaroslav; Sozzo, Sandro
2005-01-01
The Geneva-Brussels approach to quantum mechanics (QM) and the semantic realism (SR) nonstandard interpretation of QM exhibit some common features and some deep conceptual differences. We discuss in this paper two elementary models provided in the two approaches as intuitive supports to general reasonings and as a proof of consistency of general assumptions, and show that Aerts' quantum machine can be embodied into a macroscopic version of the microscopic SR model, overcoming the seeming incompatibility between the two models. This result provides some hints for the construction of a unified perspective in which the two approaches can be properly placed.
Modelling N2O dynamics in the engineered N cycle: Observations, assumptions, knowns, and unknowns
DEFF Research Database (Denmark)
Smets, Barth F.; Pellicer i Nàcher, Carles; Jensen, Marlene Mark;
of the main microbial processes responsible for its production and consumption. The conceptualization of these pathways in mathematical models has the potential to become a key tool to increase our understanding of the complex interrelationships within these ecosystems and develop strategies to minimize the carbon footprint of wastewater treatment plants. Unfortunately, existing model structures are limited to describing the emissions of individual microbial pathways in an attempt to decrease their complexity and facilitate their calibration. The present contribution summarizes the recent developments...
On testing the missing at random assumption
DEFF Research Database (Denmark)
Jaeger, Manfred
2006-01-01
Most approaches to learning from incomplete data are based on the assumption that unobserved values are missing at random (mar). While the mar assumption, as such, is not testable, it can become testable in the context of other distributional assumptions, e.g. the naive Bayes assumption. In this paper we investigate a method for testing the mar assumption in the presence of other distributional constraints. We present methods to (approximately) compute a test statistic consisting of the ratio of two profile likelihood functions. This requires the optimization of the likelihood under no assumptions on the missingness mechanism, for which we use our recently proposed AI & M algorithm. We present experimental results on synthetic data that show that our approximate test statistic is a good indicator for whether data is mar relative to the given distributional assumptions.
The shell model approach: Key to hadron structure
Energy Technology Data Exchange (ETDEWEB)
Lipkin, H.J. (Weizmann Inst. of Science, Rehovoth (Israel). Dept. of Nuclear Physics)
1989-08-14
A shell model approach leads to a simple constituent quark model for hadron structure in which mesons and baryons consist only of constituent quarks. Hadron masses are the sums of the constituent quark effective masses and a hyperfine interaction inversely proportional to the product of these same masses. Hadron masses and magnetic moments are related by the assumption that the same effective mass parameter appears in the additive mass term, the hyperfine interaction, and the quark magnetic moment, both in mesons and baryons. The analysis pinpoints the physical assumptions needed for each relation and gives two new mass relations. Application to weak decays and recent polarized EMC data confirms conclusions previously obtained that the current quark contribution to the spin structure of the proton vanishes, but without need for the questionable assumption of SU(3) symmetry relating hyperon decays and proton structure. SU(3) symmetry breaking is clarified. 24 refs.
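The mass and magnetic-moment prescription described can be written schematically as follows (a generic constituent-quark form in our notation, not the paper's exact expressions):

    M_{\mathrm{hadron}} \;=\; \sum_i m_i \;+\; A \sum_{i<j} \frac{\vec{\sigma}_i \cdot \vec{\sigma}_j}{m_i\, m_j},
    \qquad
    \mu_i \;=\; \frac{q_i}{2 m_i} \;\; (\text{natural units})

with the same effective masses m_i entering the additive term, the hyperfine term, and the quark magnetic moments, which is what links the mass and magnetic-moment relations.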
2016-01-01
Reports an error in "A violation of the conditional independence assumption in the two-high-threshold model of recognition memory" by Tina Chen, Jeffrey J. Starns and Caren M. Rotello (Journal of Experimental Psychology: Learning, Memory, and Cognition, 2015[Jul], Vol 41[4], 1215-1222). In the article, Chen et al. compared three models: a continuous signal detection model (SDT), a standard two-high-threshold discrete-state model in which detect states always led to correct responses (2HT), and a full-mapping version of the 2HT model in which detect states could lead to either correct or incorrect responses. After publication, Rani Moran (personal communication, April 21, 2015) identified two errors that impact the reported fit statistics for the Bayesian information criterion (BIC) metric of all models as well as the Akaike information criterion (AIC) results for the full-mapping model. The errors are described in the erratum. (The following abstract of the original article appeared in record 2014-56216-001.) The 2-high-threshold (2HT) model of recognition memory assumes that test items result in distinct internal states: they are either detected or not, and the probability of responding at a particular confidence level that an item is "old" or "new" depends on the state-response mapping parameters. The mapping parameters are independent of the probability that an item yields a particular state (e.g., both strong and weak items that are detected as old have the same probability of producing a highest-confidence "old" response). We tested this conditional independence assumption by presenting nouns 1, 2, or 4 times. To maximize the strength of some items, "superstrong" items were repeated 4 times and encoded in conjunction with pleasantness, imageability, anagram, and survival processing tasks. The 2HT model failed to simultaneously capture the response rate data for all item classes, demonstrating that the data violated the conditional independence assumption. In
Zigarmi, Drea; Roberts, Taylor Peyton
2017-01-01
Purpose: This study aims to test the following three assertions underlying the Situational Leadership® II (SLII) Model: all four leadership styles are received by followers; all four leadership styles are needed by followers; and if there is a fit between the leadership style a follower receives and needs, that follower will demonstrate favorable…
Scientific Theories, Models and the Semantic Approach
Directory of Open Access Journals (Sweden)
Décio Krause
2007-12-01
Full Text Available According to the semantic view, a theory is characterized by a class of models. In this paper, we examine critically some of the assumptions that underlie this approach. First, we recall that models are models of something. Thus we cannot leave completely aside the axiomatization of the theories under consideration, nor can we ignore the metamathematics used to elaborate these models, for changes in the metamathematics often impose restrictions on the resulting models. Second, based on a parallel between van Fraassen's modal interpretation of quantum mechanics and Skolem's relativism regarding set-theoretic concepts, we introduce a distinction between relative and absolute concepts in the context of the models of a scientific theory. And we discuss the significance of that distinction. Finally, by focusing on contemporary particle physics, we raise the question: since there is no generally accepted unification of the parts of the standard model (namely, QED and QCD), we have no theory, in the usual sense of the term. This poses a difficulty: if there is no theory, how can we speak of its models? What are the latter models of? We conclude by noting that it is unclear that the semantic view can be applied to contemporary physical theories.
Whitehead, Alfred J; Vesperini, Enrico; Zwart, Simon Portegies
2013-01-01
We perform a series of simulations of evolving star clusters using AMUSE (the Astrophysical Multipurpose Software Environment), a new community-based multi-physics simulation package, and compare our results to existing work. These simulations model a star cluster beginning with a King model distribution and a selection of power-law initial mass functions, and contain a tidal cut-off. They are evolved using collisional stellar dynamics and include mass loss due to stellar evolution. After determining that the differences between AMUSE results and prior publications are understood, we explored the variation in cluster lifetimes due to the random realization noise introduced by transforming a King model to specific initial conditions. This random realization noise can affect the lifetime of a simulated star cluster by up to 30%. Two modes of star cluster dissolution were identified: a mass evolution curve that contains a run-away cluster dissolution with a sudden loss of mass, and a dissolution mode that does n...
Kale, David; Stork, David G.
2009-02-01
The problems of estimating the position of an illuminant and the direction of illumination in realist paintings have been addressed using algorithms from computer vision. These algorithms fall into two general categories: In model-independent methods (cast-shadow analysis, occluding-contour analysis, ...), one does not need to know or assume the three-dimensional shapes of the objects in the scene. In model-dependent methods (shape-fromshading, full computer graphics synthesis, ...), one does need to know or assume the three-dimensional shapes. We explore the intermediate- or weak-model condition, where the three-dimensional object rendered is so simple one can very confidently assume its three-dimensional shape and, further, that this shape admits an analytic derivation of the appearance model. Specifically, we can assume that floors and walls are flat and that they are horizontal and vertical, respectively. We derived the maximum-likelihood estimator for the two-dimensional spatial location of a point source in an image as a function of the pattern of brightness (or grayscale value) over such a planar surface. We applied our methods to two paintings of the Baroque, paintings for which the question of the illuminant position is of interest to art historians: Georges de la Tour's Christ in the carpenter's studio (1645) and Caravaggio's The calling of St. Matthew (1599-1600). Our analyses show that a single point source (somewhat near to the depicted candle) is a slightly better explanation of the pattern of brightness on the floor in Christ than are two point sources, one in place of each of the figures. The luminance pattern on the rear wall in The calling implies the source is local, a few meters outside the picture frame-not the infinitely distant sun. Both results are consistent with previous rebuttals of the recent art historical claim that these paintings were executed by means of tracing optically projected images. Our method is the first application of such
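A minimal sketch of the weak-model idea, assuming a Lambertian, horizontal floor and an inverse-square point source; with independent Gaussian noise the least-squares fit coincides with the maximum-likelihood estimate. The appearance model, parameter names, and synthetic data below are our assumptions, not the authors' implementation.

    import numpy as np
    from scipy.optimize import least_squares

    def floor_brightness(params, xy):
        """Predicted brightness on a horizontal floor from a point source.

        params = (x0, y0, h, intensity): source position, height above the
        floor, and strength. A Lambertian floor with an inverse-square law
        gives brightness proportional to intensity * h / d^3.
        """
        x0, y0, h, intensity = params
        d = np.sqrt((xy[:, 0] - x0) ** 2 + (xy[:, 1] - y0) ** 2 + h ** 2)
        return intensity * h / d ** 3

    def fit_source(xy, observed, guess=(0.0, 0.0, 1.0, 1.0)):
        # Least-squares fit of source position and intensity to floor brightness;
        # with i.i.d. Gaussian noise this is also the maximum-likelihood estimate.
        res = least_squares(lambda p: floor_brightness(p, xy) - observed, guess)
        return res.x

    # Synthetic example: recover a source placed at (1.0, 0.5), 2 m above the floor
    rng = np.random.default_rng(0)
    xy = rng.uniform(-3, 3, size=(200, 2))
    obs = floor_brightness((1.0, 0.5, 2.0, 5.0), xy) + rng.normal(0, 1e-3, 200)
    print(fit_source(xy, obs))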
A Comparison of Closed World Assumptions
Institute of Scientific and Technical Information of China (English)
沈一栋
1992-01-01
In this paper, we introduce a notion of the family of closed world assumptions and compare several well-known closed world approaches in the family with respect to the extent to which an incomplete database is completed.
Kingdom, Frederick A A; Baldwin, Alex S; Schmidtmann, Gunnar
2015-01-01
Many studies have investigated how multiple stimuli combine to reach threshold. There are broadly speaking two ways this can occur: additive summation (AS) where inputs from the different stimuli add together in a single mechanism, or probability summation (PS) where different stimuli are detected independently by separate mechanisms. PS is traditionally modeled under high threshold theory (HTT); however, tests have shown that HTT is incorrect and that signal detection theory (SDT) is the better framework for modeling summation. Modeling the equivalent of PS under SDT is, however, relatively complicated, leading many investigators to use Monte Carlo simulations for the predictions. We derive formulas that employ numerical integration to predict the proportion correct for detecting multiple stimuli assuming PS under SDT, for the situations in which stimuli are either equal or unequal in strength. Both formulas are general purpose, calculating performance for forced-choice tasks with M alternatives, n stimuli, in Q monitored mechanisms, each subject to a non-linear transducer with exponent τ. We show how the probability (and additive) summation formulas can be used to simulate psychometric functions, which when fitted with Weibull functions make signature predictions for how thresholds and psychometric function slopes vary as a function of τ, n, and Q. We also show how one can fit the formulas directly to real psychometric functions using data from a binocular summation experiment, and show how one can obtain estimates of τ and test whether binocular summation conforms more to PS or AS. The methods described here can be readily applied using software functions newly added to the Palamedes toolbox.
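A minimal numerical-integration sketch of the kind of formula described, written by us for the case of n equal-strength stimuli falling in n of the Q mechanisms monitored per alternative in an M-alternative forced-choice task, under the MAX decision rule; the paper's own formulas, transducer handling, and unequal-strength case may differ in detail.

    import numpy as np
    from scipy.stats import norm
    from scipy.integrate import quad

    def ps_sdt_proportion_correct(d_prime, n, M, Q):
        """Proportion correct for probability summation under SDT (MAX rule).

        n equal-strength signals (internal strength d_prime, e.g. d' = (g*S)**tau
        after a transducer) fall in n of the Q mechanisms monitored per alternative;
        the remaining M*Q - n monitored mechanisms carry noise only. A response is
        correct when the largest sample lies in the target alternative.
        """
        total = M * Q
        # Case 1: the overall maximum is one of the n signal samples
        sig = quad(lambda t: n * norm.pdf(t - d_prime)
                   * norm.cdf(t - d_prime) ** (n - 1)
                   * norm.cdf(t) ** (total - n), -np.inf, np.inf)[0]
        # Case 2: the maximum is one of the (Q - n) noise samples in the target interval
        noi = quad(lambda t: (Q - n) * norm.pdf(t)
                   * norm.cdf(t - d_prime) ** n
                   * norm.cdf(t) ** (total - n - 1), -np.inf, np.inf)[0]
        return sig + noi

    # Example: two equal stimuli, 2AFC, two monitored mechanisms per interval
    print(ps_sdt_proportion_correct(d_prime=1.0, n=2, M=2, Q=2))

With d_prime = 0 the expression reduces to 1/M (chance), which is a quick consistency check on the derivation.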
Kemaneci, Efe; Graef, Wouter; van Dijk, Jan; Kroesen, Gerrit M W
2015-01-01
Collisional and radiative dynamics of a plasma is exposed by so-called Collisional Radiative Models [1] that simplify the chemical kinetics by quasi-steady state assignment on certain types of particles. The assignment is conventionally based on the classification of the plasma species by the ratio of the transport to the local destruction frequencies. We show that the classification is not exact due to the role of the time-dependent local production, and a measure is necessary to confirm the validity of the assignment. The main goal of this study is to evaluate a measure on the quasi-steady state assumptions of these models. Inspired by a chemical reduction technique called Intrinsic Low Dimensional Manifolds [2, 3], an estimate local source is provided at the transport time-scale. This source is a deviation from the quasi-steady state for the particle and its value is assigned as an error of the quasi-steady state assumption. The propagation of this error on the derived quantities is formulated in the Colli...
Özer, Ahmet Özkan
2016-04-01
An infinite-dimensional model for a three-layer active constrained layer (ACL) beam, consisting of a piezoelectric elastic layer at the top and an elastic host layer at the bottom constraining a viscoelastic layer in the middle, is obtained for clamped-free boundary conditions by using a thorough variational approach. The Rao-Nakra thin compliant layer approximation is adopted to model the sandwich structure, and the electrostatic approach (magnetic effects are ignored) is assumed for the piezoelectric layer. Instead of voltage actuation of the piezoelectric layer, the piezoelectric layer is proposed to be activated by a charge (or current) source. We show that the closed-loop system with all-mechanical feedback is uniformly exponentially stable. Our result is the outcome of a compact perturbation argument and a unique continuation result for the spectral problem which relies on the multipliers method. Finally, the modeling methodology of the paper is generalized to multilayer ACL beams, and the uniform exponential stabilizability result is established analogously.
Examining Computational Assumptions For Godiva IV
Energy Technology Data Exchange (ETDEWEB)
Kirkland, Alexander Matthew [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Jaegers, Peter James [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)
2016-08-11
Over the course of summer 2016, the effects of several computational modeling assumptions with respect to the Godiva IV reactor were examined. The majority of these assumptions pertained to modeling errors existing in the control rods and burst rod. The Monte Carlo neutron transport code, MCNP, was used to investigate these modeling changes, primarily by comparing them to that of the original input deck specifications.
DEFF Research Database (Denmark)
Löwe, Roland; Del Giudice, Dario; Mikkelsen, Peter Steen
to observations. After a brief discussion of the assumptions made for likelihood-based parameter inference, we illustrated the basic principles of both approaches on the example of sewer flow modelling with a conceptual rainfallrunoff model. The results from a real-world case study suggested that both approaches...
2007-11-02
results. Concepts like microbial growth in the form of a biofilm and spatially varying contaminant concentrations bring the validity of the CSTR assumption into question. These concepts are incorporated into the different modeling approaches to evaluate the CSTR assumption. Model simulations show that
Testing Our Fundamental Assumptions
Kohler, Susanna
2016-06-01
Science is all about testing the things we take for granted, including some of the most fundamental aspects of how we understand our universe. Is the speed of light in a vacuum the same for all photons regardless of their energy? Is the rest mass of a photon actually zero? A series of recent studies explore the possibility of using transient astrophysical sources for tests! Explaining different arrival times: [Artist's illustration of a gamma-ray burst, another extragalactic transient, in a star-forming region. NASA/Swift/Mary Pat Hrybyk-Keith and John Jones] Suppose you observe a distant transient astrophysical source, like a gamma-ray burst or a flare from an active nucleus, and two photons of different energies arrive at your telescope at different times. This difference in arrival times could be due to several different factors, depending on how deeply you want to question some of our fundamental assumptions about physics. Intrinsic delay: the photons may simply have been emitted at two different times by the astrophysical source. Delay due to Lorentz invariance violation: perhaps the assumption that all massless particles (even two photons with different energies) move at the exact same velocity in a vacuum is incorrect. Special-relativistic delay: maybe there is a universal speed for massless particles, but the assumption that photons have zero rest mass is wrong; this, too, would cause photon velocities to be energy-dependent. Delay due to gravitational potential: perhaps our understanding of the gravitational potential that the photons experience as they travel is incorrect, also causing different flight times for photons of different energies. This would mean that Einstein's equivalence principle, a fundamental tenet of general relativity (GR), is incorrect. If we now turn this problem around, then by measuring the arrival time delay between photons of different energies from various astrophysical sources (the further away, the better), we can provide constraints on these
Assumptions of Multiple Regression: Correcting Two Misconceptions
Directory of Open Access Journals (Sweden)
Matt N. Williams
2013-09-01
Full Text Available In 2002, an article entitled "Four assumptions of multiple regression that researchers should always test" by Osborne and Waters was published in PARE. This article has gone on to be viewed more than 275,000 times (as of August 2013), and it is one of the first results displayed in a Google search for "regression assumptions". While Osborne and Waters' efforts in raising awareness of the need to check assumptions when using regression are laudable, we note that the original article contained at least two fairly important misconceptions about the assumptions of multiple regression: firstly, that multiple regression requires the assumption of normally distributed variables; and secondly, that measurement errors necessarily cause underestimation of simple regression coefficients. In this article, we clarify that multiple regression models estimated using ordinary least squares require the assumption of normally distributed errors in order for trustworthy inferences, at least in small samples, but not the assumption of normally distributed response or predictor variables. Secondly, we point out that regression coefficients in simple regression models will be biased (toward zero) estimates of the relationships between variables of interest when measurement error is uncorrelated across those variables, but that when correlated measurement error is present, regression coefficients may be either upwardly or downwardly biased. We conclude with a brief corrected summary of the assumptions of multiple regression when using ordinary least squares.
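A small simulation, written by us for illustration, makes the corrected point concrete: OLS inference rests on (approximately) normal errors, not normal predictors, so the check belongs on the residuals. The simulated data and variable names are assumptions for this sketch only.

    import numpy as np
    import statsmodels.api as sm
    from scipy import stats

    rng = np.random.default_rng(1)
    n = 500
    # Heavily skewed (non-normal) predictors are fine for OLS inference ...
    x1, x2 = rng.exponential(1.0, n), rng.lognormal(0.0, 1.0, n)
    # ... provided the errors are (approximately) normal
    y = 1.0 + 2.0 * x1 - 0.5 * x2 + rng.normal(0.0, 1.0, n)

    fit = sm.OLS(y, sm.add_constant(np.column_stack([x1, x2]))).fit()
    # Test normality of the residuals (the relevant assumption), not of x1, x2, or y
    print(stats.shapiro(fit.resid))
    print(fit.params)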
A Bayesian Shrinkage Approach for AMMI Models.
da Silva, Carlos Pereira; de Oliveira, Luciano Antonio; Nuvunga, Joel Jorge; Pamplona, Andrezza Kéllen Alves; Balestre, Marcio
2015-01-01
Linear-bilinear models, especially the additive main effects and multiplicative interaction (AMMI) model, are widely applicable to genotype-by-environment interaction (GEI) studies in plant breeding programs. These models allow a parsimonious modeling of GE interactions, retaining a small number of principal components in the analysis. However, one aspect of the AMMI model that is still debated is the selection criteria for determining the number of multiplicative terms required to describe the GE interaction pattern. Shrinkage estimators have been proposed as selection criteria for the GE interaction components. In this study, a Bayesian approach was combined with the AMMI model with shrinkage estimators for the principal components. A total of 55 maize genotypes were evaluated in nine different environments using a complete blocks design with three replicates. The results show that the traditional Bayesian AMMI model produces low shrinkage of singular values but avoids the usual pitfalls in determining the credible intervals in the biplot. On the other hand, Bayesian shrinkage AMMI models have difficulty with the credible interval for model parameters, but produce stronger shrinkage of the principal components, converging to GE matrices that have more shrinkage than those obtained using mixed models. This characteristic allowed more parsimonious models to be chosen, with more of the GEI pattern retained on the first two components, and resulted in models being selected that were similar to those obtained by the Cornelius F-test (α = 0.05) in traditional AMMI models and by cross-validation based on leave-one-out. The model chosen by the posterior distribution of the singular values was also similar to those produced by the cross-validation approach in traditional AMMI models. Our method enables the estimation of credible intervals for the AMMI biplot plus the choice of AMMI model based on direct posterior
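For reference, the AMMI decomposition underlying both the traditional and the Bayesian shrinkage analyses (standard notation; in the Bayesian version the shrinkage priors act on the singular values lambda_k):

    y_{ij} = \mu + g_i + e_j + \sum_{k=1}^{K} \lambda_k\, \alpha_{ik}\, \gamma_{jk} + \varepsilon_{ij}

where g_i and e_j are the genotype and environment main effects, lambda_k the singular values of the GEI matrix, and alpha_ik, gamma_jk the corresponding genotype and environment scores.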
Directory of Open Access Journals (Sweden)
D. Simpson
2012-02-01
Full Text Available A new organic aerosol (OA module has been implemented into the EMEP chemical transport model. Four different volatility basis set (VBS schemes have been tested in long-term simulations for Europe, covering the six years 2002–2007. Different assumptions regarding partitioning of primary OA (POA and aging of POA and secondary OA (SOA, have been explored. Model results are compared to filter measurements, AMS-data and source-apportionment studies, as well as to other model studies. The present study indicates that many different sources contribute significantly to OA in Europe. Fossil POA and oxidised POA, biogenic and anthropogenic SOA (BSOA and ASOA, residential burning of biomass fuels and wildfire emissions may all contribute more than 10% each over substantial parts of Europe. Simple VBS based OA models can give reasonably good results for summer OA but more observational studies are needed to constrain the VBS parameterisations and to help improve emission inventories. The volatility distribution of primary emissions is an important issue for further work. This study shows smaller contributions from BSOA to OA in Europe than earlier work, but relatively greater ASOA. BVOC emissions are highly uncertain and need further validation. We can not reproduce winter levels of OA in Europe, and there are many indications that the present emission inventories substantially underestimate emissions from residential wood burning in large parts of Europe.
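For reference, the equilibrium absorptive-partitioning relation at the heart of volatility basis set (VBS) schemes (standard form; the EMEP-specific volatility bins and aging reactions are not shown):

    \xi_i = \left( 1 + \frac{C_i^{*}}{C_{\mathrm{OA}}} \right)^{-1},
    \qquad
    C_{\mathrm{OA}} = \sum_i \xi_i\, C_i

where xi_i is the particle-phase fraction of material in volatility bin i, C_i^* its saturation concentration, C_i its total (gas plus particle) concentration, and C_OA the organic-aerosol mass concentration; the coupled equations are solved iteratively.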
Multiple Model Approaches to Modelling and Control,
DEFF Research Database (Denmark)
Why Multiple Models? This book presents a variety of approaches which produce complex models or controllers by piecing together a number of simpler subsystems. This divide-and-conquer strategy is a long-standing and general way of coping with complexity in engineering systems, nature and human probl...
Datta, S.; Jones, W. L.; Ebrahimi, H.; Chen, R.; Payne, V.; Kroodsma, R.
2014-12-01
The first step in radiometric inter-calibration is to ascertain the self-consistency and reasonableness of the observed brightness temperature (Tb) for each individual sensor involved. One of the widely used approaches is to compare the observed Tb with a simulated Tb using a forward radiative transfer model (RTM) and input geophysical parameters at the geographic location and time of the observation. In this study we intend to test the sensitivity of the RTM to uncertainties in the input geophysical parameters as well as to the underlying physical assumptions of gaseous absorption and surface emission in the RTM. SAPHIR, a cross track scanner onboard Indo-French Megha-Tropique Satellite, gives us a unique opportunity of studying 6 dual band 183 GHz channels at an inclined orbit over the Tropics for the first time. We will also perform the same sensitivity analysis using the Advance Technology Microwave Sounder (ATMS) 23 GHz and five 183 GHz channels. Preliminary analysis comparing GDAS and an independent retrieved profile show some sensitivity of the RTM to the input data. An extended analysis of this work using different input geophysical parameters will be presented. Two different absorption models, the Rosenkranz and the MonoRTM will be tested to analyze the sensitivity of the RTM to spectroscopic assumptions in each model. Also for the 23.8 GHz channel, the sensitivity of the RTM to the surface emissivity model will be checked. Finally the impact of these sensitivities on radiometric inter-calibration of radiometers at sounding frequencies will be assessed.
Ramírez, M. A.; Schouwenaars, R.; Eyckens, P.; Gawad, J.; Kestens, L.; Van Bael, A.; Van Houtte, P.
2017-01-01
An essential step in the improvement of design strategies for a wide range of industrial deep drawing applications is the development of methods which allow for the precise prediction of shape and processing parameters. Earlier work has demonstrated, in a clear but qualitative manner, the capabilities of the hierarchical multiscale (HMS) model, which predicts the anisotropic plastic properties of metallic materials based on a statistical analysis of microstructure-based anisotropy and a continuous description of the yield locus. The method is implemented into the ABAQUS finite-element software but, until recently, little attention had been paid to other factors which determine the accuracy of a finite element prediction in general, such as mesh size, friction coefficient and rigid/elastic modelling of the tools. Through the analysis of cup drawing, which is a well-established laboratory-scale test relevant to industrial applications, a quantitative comparison is provided between measured cup geometry and punch force and modelling results for commercial AA6016T4 aluminium sheets. The relatively weak earing behaviour of these materials serves to emphasise the small differences still found between model and experiment, which may be addressed by future refinement of the micromechanical component of the HMS. Average cup height and punch force, which is an important process parameter omitted in earlier studies, depend primarily on the friction coefficient and assumptions in the modelling of the tools. Considering the balance between accuracy and precision, it is concluded that the proposed methodology has matured sufficiently to be used as a design tool at industrial level.
New Cryptosystem Using Multiple Cryptographic Assumptions
Directory of Open Access Journals (Sweden)
E. S. Ismail
2011-01-01
Full Text Available Problem statement: A cryptosystem is a way for a sender and a receiver to communicate digitally, by which the sender can send the receiver any confidential or private message by first encrypting it using the receiver's public key. Upon receiving the encrypted message, the receiver can confirm the originality of the message's contents using his own secret key. Up to now, most of the existing cryptosystems were developed based on a single cryptographic assumption like factoring, discrete logarithms, quadratic residue or elliptic curve discrete logarithm. Although these schemes remain secure today, one day in the near future they may be broken if one finds a polynomial algorithm that can efficiently solve the underlying cryptographic assumption. Approach: By this motivation, we designed a new cryptosystem based on two cryptographic assumptions: quadratic residue and discrete logarithms. We integrated these two assumptions in our encrypting and decrypting equations so that the former depends on one public key whereas the latter depends on one corresponding secret key and two secret numbers. Each of the public and secret keys in our scheme determines the assumptions we use. Results: The newly developed cryptosystem is shown to be secure against the three commonly considered algebraic attacks using a heuristic security technique. The efficiency performance of our scheme requires 2Texp+2Tmul+Thash time complexity for encryption and Texp+2Tmul+Tsrt time complexity for decryption, and this magnitude of complexity is considered minimal for cryptosystems based on multiple cryptographic assumptions. Conclusion: The new cryptosystem based on multiple cryptographic assumptions offers a greater security level than schemes based on a single cryptographic assumption. The adversary has to solve the two assumptions simultaneously to recover the original message from the received corresponding encrypted message, but this is very unlikely to happen.
Multiple Model Approaches to Modelling and Control,
DEFF Research Database (Denmark)
on the ease with which prior knowledge can be incorporated. It is interesting to note that researchers in Control Theory, Neural Networks, Statistics, Artificial Intelligence and Fuzzy Logic have more or less independently developed very similar modelling methods, calling them Local Model Networks, Operating...... of introduction of existing knowledge, as well as the ease of model interpretation. This book attempts to outline much of the common ground between the various approaches, encouraging the transfer of ideas. Recent progress in algorithms and analysis is presented, with constructive algorithms for automated model...
Model Construct Based Enterprise Model Architecture and Its Modeling Approach
Institute of Scientific and Technical Information of China (English)
无
2002-01-01
In order to support enterprise integration, a model-construct-based enterprise model architecture and its modeling approach are studied in this paper. First, the structural makeup and internal relationships of the enterprise model architecture are discussed. Then, the concept of the reusable model construct (MC), which belongs to the control view and can help to derive other views, is proposed. The modeling approach based on model constructs consists of three steps: reference model architecture synthesis, enterprise model customization, and system design and implementation. According to the MC-based modeling approach, a case study with the background of one-kind-product machinery manufacturing enterprises is illustrated. It is shown that the proposed model-construct-based enterprise model architecture and modeling approach are practical and efficient.
Directory of Open Access Journals (Sweden)
P. M. Shkapov
2015-01-01
Full Text Available The paper provides a mathematical model of thermo-gravity convection in a large-volume vertical cylinder. Heat is removed from the product via the cooling jacket at the top of the cylinder. We suppose that a laminar fluid motion takes place. The model is based on the Navier-Stokes equations, the equation of heat transfer through the wall, and the heat transfer equation. A peculiarity of the process in large-volume tanks is the spatial distribution of the physical parameters, which was taken into account when constructing the model. The model corresponds to the process of beer wort fermentation in cylindrical-conical tanks (CCT). The CCT volume is divided into three zones, and model equations were obtained for each zone. The first zone has an annular cross-section and is limited in height by the cooling jacket. In this zone the heat flow from the cooling jacket to the product is uppermost. The model equation of the first zone describes the process of heat transfer through the wall and is presented by a linear inhomogeneous partial differential equation that is solved analytically. For the description of the second and third zones, a number of engineering assumptions were made. The fluid is considered Newtonian, viscous and incompressible. Convective motion is considered in the Boussinesq approximation. The effect of viscous dissipation is not considered. The topology of the fluid motion is similar to cylindrical Poiseuille flow. The second-zone model consists of the Navier-Stokes equations in cylindrical coordinates, with simplifying assumptions, and the heat equation in the liquid layer. The volume occupied by the upward convective flow pertains to the third zone. The convective flows do not mix and do not exchange heat. At the start of the process the medium has the same temperature and zero velocity throughout the volume, which allows us to specify the initial conditions for the process. The paper shows the
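For orientation, the laminar Boussinesq equations that such a zonal model builds on can be sketched as follows (generic form in our notation; the paper's zone-specific simplifications and wall heat-transfer equation are not reproduced):

    \nabla \cdot \mathbf{u} = 0,
    \qquad
    \frac{\partial \mathbf{u}}{\partial t} + (\mathbf{u} \cdot \nabla)\mathbf{u}
      = -\frac{1}{\rho_0} \nabla p + \nu \nabla^2 \mathbf{u} - \beta\, (T - T_0)\, \mathbf{g},
    \qquad
    \frac{\partial T}{\partial t} + \mathbf{u} \cdot \nabla T = a \nabla^2 T

where u is the velocity, g the gravitational acceleration vector, beta the thermal expansion coefficient, nu the kinematic viscosity, and a the thermal diffusivity.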
Energy Technology Data Exchange (ETDEWEB)
Painter, Scott L. [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States). Environmental Sciences Division
2016-06-28
The Department of Energy’s Office of Environmental Management recently revised a Remedial Investigation/ Feasibility Study (RI/FS) that included an analysis of subsurface radionuclide transport at a potential new Environmental Management Disposal Facility (EMDF) in East Bear Creek Valley near Oak Ridge, Tennessee. The effect of three simplifying assumptions used in the RI/FS analyses are investigated using the same subsurface pathway conceptualization but with more flexible modeling tools. Neglect of vadose zone dispersion was found to be conservative or non-conservative, depending on the retarded travel time and the half-life. For a given equilibrium distribution coefficient, a relatively narrow range of half-life was identified for which neglect of vadose zone transport is non-conservative and radionuclide discharge into surface water is non-negligible. However, there are two additional conservative simplifications in the reference case that compensate for the non-conservative effect of neglecting vadose zone dispersion: the use of a steady infiltration rate and vadose zone velocity, and the way equilibrium sorption is used to represent transport in the fractured material of the saturated aquifer. With more realistic representations of all three processes, the RI/FS reference case was found to either provide a reasonably good approximation to the peak concentration or was significantly conservative (pessimistic) for all parameter combinations considered.
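The simplifications examined (neglected dispersion, equilibrium sorption, radioactive decay) refer to the standard one-dimensional advection-dispersion equation, shown here in generic form for orientation (not the report's exact formulation):

    R\, \frac{\partial C}{\partial t}
      = D\, \frac{\partial^2 C}{\partial x^2} - v\, \frac{\partial C}{\partial x} - \lambda R C,
    \qquad
    R = 1 + \frac{\rho_b K_d}{\theta},
    \qquad
    \lambda = \frac{\ln 2}{t_{1/2}}

Neglecting vadose zone dispersion corresponds to setting D = 0 on that leg of the pathway; K_d is the equilibrium distribution coefficient, rho_b the bulk density, theta the water content, and t_{1/2} the half-life.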
Multicomponent Equilibrium Models for Testing Geothermometry Approaches
Energy Technology Data Exchange (ETDEWEB)
Carl D. Palmer; Robert W. Smith; Travis L. McLing
2013-02-01
Geothermometry is an important tool for estimating deep reservoir temperature from the geochemical composition of shallower and cooler waters. The underlying assumption of geothermometry is that the waters collected from shallow wells and seeps maintain a chemical signature that reflects equilibrium in the deeper reservoir. Many of the geothermometers used in practice are based on correlations between water temperature and composition, or on thermodynamic calculations based on a subset (typically silica, cations or cation ratios) of the dissolved constituents. An alternative approach is to use complete water compositions and equilibrium geochemical modeling to calculate the degree of disequilibrium (saturation index) for a large number of potential reservoir minerals as a function of temperature. We have constructed several "forward" geochemical models using The Geochemist's Workbench to simulate the change in chemical composition of reservoir fluids as they migrate toward the surface. These models explicitly account for the formation (mass and composition) of a steam phase and equilibrium partitioning of volatile components (e.g., CO2, H2S, and H2) into the steam as a result of pressure decreases associated with upward fluid migration from depth. We use the synthetic data generated from these simulations to determine the advantages and limitations of various geothermometry and optimization approaches for estimating the likely conditions (e.g., temperature, pCO2) to which the water was exposed in the deep subsurface. We demonstrate the magnitude of errors that can result from boiling, loss of volatiles, and analytical error from sampling and instrumental analysis. The estimated reservoir temperatures for these scenarios are also compared to conventional geothermometers. These results can help improve estimation of geothermal resource temperature during exploration and early development.
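A minimal sketch of the multicomponent idea: compute saturation indices for a set of reservoir minerals over a temperature grid and take the reservoir-temperature estimate where they jointly approach zero. The two minerals, their log K(T) expressions, and the ion-activity products below are hypothetical placeholders, not real thermodynamic data; an actual application would use full speciation from a code such as The Geochemist's Workbench.

    import numpy as np

    # Hypothetical placeholder log K(T) functions and ion-activity products (IAP)
    # for two reservoir minerals; values are for illustration only.
    def log_k(mineral, temp_c):
        coeffs = {"mineral_A": (-3.2, 0.012), "mineral_B": (-1.1, 0.004)}
        a, b = coeffs[mineral]
        return a + b * temp_c

    log_iap = {"mineral_A": -2.0, "mineral_B": -0.7}  # from the analyzed water (hypothetical)

    def estimate_reservoir_temperature(minerals, temps=np.linspace(25, 300, 551)):
        """Pick the temperature where the mineral saturation indices cluster at zero."""
        # Saturation index SI = log(IAP) - log(K(T)) for each mineral and temperature
        si = np.array([[log_iap[m] - log_k(m, t) for t in temps] for m in minerals])
        # Reservoir temperature estimate: minimal spread of SI values around zero
        spread = np.sum(si ** 2, axis=0)
        return temps[np.argmin(spread)]

    print(estimate_reservoir_temperature(["mineral_A", "mineral_B"]))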
A semiparametric approach to physiological flow models.
Verotta, D; Sheiner, L B; Ebling, W F; Stanski, D R
1989-08-01
By regarding sampled tissues in a physiological model as linear subsystems, the usual advantages of flow models are preserved while mitigating two of their disadvantages: (i) the need for assumptions regarding intratissue kinetics, and (ii) the need to simultaneously fit data from several tissues. To apply the linear systems approach, both arterial blood and (interesting) tissue drug concentrations must be measured. The body is modeled as having an arterial compartment (A) distributing drug to different linear subsystems (tissues), connected in a specific way by blood flow. The response (CA, with dimensions of concentration) of A is measured. Tissues receive input from A (and optionally from other tissues), and send output to the outside or to other parts of the body. The response of each such tissue T (CT, the total amount of drug in the tissue divided by the volume of T) is also observed. From linear systems theory, CT can be expressed as the convolution of CA with a disposition function, F(t) (with dimensions 1/time). The function F(t) depends on the (unknown) structure of T, but has certain other constant properties: the integral of F(t) from zero to infinity is the steady-state ratio of CT to CA, and the value F(0) is the clearance rate of drug from A to T divided by the volume of T. A formula for the clearance rate of drug from T to outside T can be derived. To estimate F(t) empirically, and thus mitigate disadvantage (i), we suggest that, first, a nonparametric (or parametric) function be fitted to the CA data, yielding predicted values of CA, and, second, the convolution integral of the predicted CA with F(t) be fitted to the CT data using a deconvolution method. By so doing, each tissue's data are analyzed separately, thus mitigating disadvantage (ii). A method for system simulation is also proposed. The results of applying the approach to simulated data and to real thiopental data are reported.
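The central relations in this abstract (CT as the convolution of CA with F, the integral of F as the steady-state tissue-to-arterial ratio, and F(0) as clearance into the tissue divided by tissue volume) can be checked numerically. The curves below are synthetic and purely illustrative; they are not the thiopental data analysed in the paper.

```python
# Minimal numerical sketch of the linear-subsystem relations, with assumed curves.
import numpy as np

dt = 0.05                                   # time step (h)
t = np.arange(0, 48, dt)

CA = 10 * np.exp(-0.5 * t) + 2 * np.exp(-0.05 * t)    # assumed arterial concentration

k = 0.3
F = 2.0 * k * np.exp(-k * t)                # assumed disposition function (1/h), integral = 2

CT = np.convolve(CA, F)[:len(t)] * dt       # tissue concentration = convolution of CA with F

print("integral of F (steady-state CT/CA ratio):", np.trapz(F, t))
print("F(0) (clearance from A into T divided by V_T):", F[0])
print("late-time CT/CA ratio:", CT[-1] / CA[-1])
```

At late times, when the arterial curve varies slowly compared with F, the tissue-to-arterial ratio is close to the integral of F; the equality holds exactly at steady state, which is the property the abstract quotes.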
Hydraulic Modeling of Lock Approaches
2016-08-01
cation was that the guidewall design changed from a solid wall to one on pilings in which water was allowed to flow through and/or under the wall ...develops innovative solutions in civil and military engineering, geospatial sciences, water resources, and environmental sciences for the Army, the...magnitudes and directions at lock approaches for open river conditions. The meshes were developed using the Surface-water Modeling System. The two
LP Approach to Statistical Modeling
Mukhopadhyay, Subhadeep; Parzen, Emanuel
2014-01-01
We present an approach to statistical data modeling and exploratory data analysis called `LP Statistical Data Science.' It aims to generalize and unify traditional and novel statistical measures, methods, and exploratory tools. This article outlines fundamental concepts along with real-data examples to illustrate how the `LP Statistical Algorithm' can systematically tackle different varieties of data types, data patterns, and data structures under a coherent theoretical framework. A fundament...
Directory of Open Access Journals (Sweden)
Paulo Roberto Caldart
2014-12-01
Full Text Available The research presented herein has two objectives. First, this study will test whether actuarial assumptions for public sector pension schemes in Brazil adhere to reality and whether changing these assumptions might affect the results, particularly with respect to life tables and wage growth assumptions. The paper shows that the best-fit life table is AT 2000 for males aggregated by one year, which involves a longer life expectancy than the life table proposed under current legislation (IBGE 2009). The data also show that actual wage growth was 4.59% per year from 2002 to 2012, as opposed to the 1% wage increase proposed by the same legislation. Changing these two assumptions increases the actuarial imbalance for a representative individual by 18.17% after accounting for the adjusted life table or by 98.30% after revising the wage growth assumption. With respect to its second objective, this paper proposes alternative funding mechanisms in which the local pension scheme would provide the funded component of the benefit, to be complemented by the local government in a pay-as-you-go manner. The database utilized was for the state of Rio Grande do Sul in the month of November 2011. The results are thus restricted to Rio Grande do Sul.
Roy's specific life values and the philosophical assumption of humanism.
Hanna, Debra R
2013-01-01
Roy's philosophical assumption of humanism, which is shaped by the veritivity assumption, is considered in terms of her specific life values and in contrast to the contemporary view of humanism. Like veritivity, Roy's philosophical assumption of humanism unites a theocentric focus with anthropological values. Roy's perspective enriches the mainly secular, anthropocentric assumption. In this manuscript, the basis for Roy's perspective of humanism will be discussed so that readers will be able to use the Roy adaptation model in an authentic manner.
On a Markovian approach for modeling passive solar devices
Energy Technology Data Exchange (ETDEWEB)
Bottazzi, F.; Liebling, T.M. (Chaire de Recherche Operationelle, Ecole Polytechnique Federale de Lausanne (Switzerland)); Scartezzini, J.L.; Nygaard-Ferguson, M. (Lab. d' Energie Solaire et de Physique du Batiment, Ecole Polytechnique Federale de Lausanne (Switzerland))
1991-01-01
Stochastic models for the analysis of the energy and thermal comfort performance of passive solar devices have been increasingly studied for over a decade. A new approach to thermal building modeling, based on Markov chains, is proposed here to combine the accuracy of traditional dynamic simulation with the practical advantages of simplified methods. A main difficulty of the Markovian approach is the discretization of the system variables. Efficient procedures have been developed to carry out this discretization, and several numerical experiments have been performed to analyze the possibilities and limitations of the Markovian model. Despite its restrictive assumptions, it is shown that accurate results are indeed obtained by this method. However, due to discretization, computer memory requirements are more than inversely proportional to accuracy. (orig.).
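A Markov-chain treatment of this kind reduces, once the state (for example indoor temperature) has been discretised into classes, to propagating a probability vector with a transition matrix and reading statistics off the stationary distribution. The transition matrix below is invented for illustration and is not derived from any building model in the paper.

```python
# Minimal sketch: discretised temperature classes, an assumed transition matrix,
# and the stationary distribution obtained by repeated application of the matrix.
import numpy as np

bins = np.array([16.0, 18.0, 20.0, 22.0, 24.0])    # temperature classes (deg C)
P = np.array([[0.7, 0.3, 0.0, 0.0, 0.0],           # row i: P(next class | class i)
              [0.2, 0.6, 0.2, 0.0, 0.0],
              [0.0, 0.2, 0.6, 0.2, 0.0],
              [0.0, 0.0, 0.3, 0.6, 0.1],
              [0.0, 0.0, 0.0, 0.4, 0.6]])

p = np.array([0.0, 0.0, 1.0, 0.0, 0.0])            # start in the 20 deg C class
for _ in range(500):                               # iterate towards the stationary distribution
    p = p @ P

print("stationary distribution:", np.round(p, 3))
print("mean temperature:", round(float(p @ bins), 2), "deg C")
print("fraction of time in the coldest class:", round(float(p[0]), 3))
```

The memory issue mentioned in the abstract shows up directly here: halving the class width doubles the number of states and quadruples the size of the transition matrix.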
Hybrid continuum-atomistic approach to model electrokinetics in nanofluidics
Energy Technology Data Exchange (ETDEWEB)
Amani, Ehsan, E-mail: eamani@aut.ac.ir; Movahed, Saeid, E-mail: smovahed@aut.ac.ir
2016-06-07
In this study, for the first time, a hybrid continuum-atomistic model is proposed for electrokinetics, electroosmosis and electrophoresis, through nanochannels. Although continuum-based methods are accurate enough to model fluid flow and electric potential in nanofluidics (in dimensions larger than 4 nm), the ionic concentration in nanochannels is too low for the continuum assumption to be valid. On the other hand, non-continuum approaches are too time-consuming and are therefore limited to simple geometries in practice. Here, to propose an efficient hybrid continuum-atomistic method of modelling the electrokinetics in nanochannels, the fluid flow and electric potential are computed based on the continuum hypothesis, coupled with an atomistic Lagrangian approach for the ionic transport. The results of the model are compared to and validated against the results of the molecular dynamics technique for a couple of case studies. Then, the influences of bulk ionic concentration, external electric field, size of the nanochannel, and surface electric charge on the electrokinetic flow and ionic mass transfer are carefully investigated. The hybrid continuum-atomistic method is a promising approach for modelling more complicated geometries and investigating more details of the electrokinetics in nanofluidics. - Highlights: • A hybrid continuum-atomistic model is proposed for electrokinetics in nanochannels. • The model is validated by molecular dynamics. • This is a promising approach to model more complicated geometries and physics.
Some Considerations on the Basic Assumptions in Rotordynamics
GENTA, G.; DELPRETE, C.; BRUSA, E.
1999-10-01
The dynamic study of rotors is usually performed under a number of assumptions, namely small displacements and rotations, small unbalance and constant angular velocity. The latter assumption can be substituted by a known time history of the spin speed. The present paper develops a general non-linear model which can be used to study the rotordynamic behaviour of both fixed and free rotors without resorting to the mentioned assumptions, and compares the results obtained from a number of non-linear numerical simulations with those computed through the usual linearized approach. It is thus possible to verify that the validity of the rotordynamic models extends to situations in which fairly large unbalances and whirling motions are present and, above all, it is shown that the doubts raised about applying a model based on constant spin speed to the case of free rotors, in which the angular momentum is constant, are unfounded. Rotordynamic models can thus be used to study the stability in the small of spinning spacecraft, and the insight obtained from the study of rotors is useful for understanding their attitude dynamics and its interactions with the vibration dynamics.
Nielsen, Simon; Wilms, L Inge
2014-01-01
We examined the effects of normal aging on visual cognition in a sample of 112 healthy adults aged 60-75. A test battery was designed to capture high-level measures of visual working memory and low-level measures of visuospatial attention and memory. To answer questions of how cognitive aging affects specific aspects of visual processing capacity, we used confirmatory factor analyses in Structural Equation Modeling (SEM; Model 2), informed by functional structures that were modeled with path analyses in SEM (Model 1). The results show that aging effects were selective to measures of visual processing speed as compared with visual short-term memory (VSTM) capacity (Model 2). These results are consistent with some studies reporting selective aging effects on processing speed, and inconsistent with other studies reporting aging effects on both processing speed and VSTM capacity. In the discussion we argue that this discrepancy may be mediated by differences in age ranges and demographic variables. The study demonstrates that SEM is a sensitive method for detecting cognitive aging effects even within a narrow age range, and a useful approach for structuring the relationships between measured variables and the cognitive functional foundations they supposedly represent.
Directory of Open Access Journals (Sweden)
Simon eNielsen
2015-01-01
Full Text Available We examined the effects of normal ageing on visual cognition in a sample of 112 healthy adults aged 60-75. A test battery was designed to capture high-level measures of visual working memory and low-level measures of visuospatial attention and memory. To answer questions of how cognitive ageing affects specific aspects of visual processing capacity, we used confirmatory factor analyses in Structural Equation Modelling (SEM; Model 2), informed by functional structures that were modelled with path analyses in SEM (Model 1). The results show that ageing effects were selective to measures of visual processing speed as compared with visual short-term memory (VSTM) capacity (Model 2). These results are consistent with some studies reporting selective ageing effects on processing speed, and inconsistent with other studies reporting ageing effects on both processing speed and VSTM capacity. In the discussion we argue that this discrepancy may be mediated by differences in age ranges and demographic variables. The study demonstrates that SEM is a sensitive method for detecting cognitive ageing effects even within a narrow age range, and a useful approach for structuring the relationships between measured variables and the cognitive functional foundations they supposedly represent.
Managerial and Organizational Assumptions in the CMM's
DEFF Research Database (Denmark)
2008-01-01
in different countries operating different economic and social models. Characterizing CMMI in this way opens the door to another question: are there other sets of organisational and management assumptions which would be better suited to other types of organisations operating in other cultural contexts?...
Life Support Baseline Values and Assumptions Document
Anderson, Molly S.; Ewert, Michael K.; Keener, John F.; Wagner, Sandra A.
2015-01-01
The Baseline Values and Assumptions Document (BVAD) provides analysts, modelers, and other life support researchers with a common set of values and assumptions which can be used as a baseline in their studies. This baseline, in turn, provides a common point of origin from which many studies in the community may depart, making research results easier to compare and providing researchers with reasonable values to assume for areas outside their experience. With the ability to accurately compare different technologies' performance for the same function, managers will be able to make better decisions regarding technology development.
Managerial and Organizational Assumptions in the CMM's
DEFF Research Database (Denmark)
2008-01-01
thinking about large production and manufacturing organisations (particularly in America) in the late industrial age. Many of the difficulties reported with CMMI can be attributed to basing practice on these assumptions in organisations which have different cultures and management traditions, perhaps...... in different countries operating different economic and social models. Characterizing CMMI in this way opens the door to another question: are there other sets of organisational and management assumptions which would be better suited to other types of organisations operating in other cultural contexts?...
Sampling Assumptions in Inductive Generalization
Navarro, Daniel J.; Dry, Matthew J.; Lee, Michael D.
2012-01-01
Inductive generalization, where people go beyond the data provided, is a basic cognitive capability, and it underpins theoretical accounts of learning, categorization, and decision making. To complete the inductive leap needed for generalization, people must make a key "sampling" assumption about how the available data were generated.…
Metal Mixture Modeling Evaluation project: 2. Comparison of four modeling approaches
Farley, Kevin J.; Meyer, Joe; Balistrieri, Laurie S.; DeSchamphelaere, Karl; Iwasaki, Yuichi; Janssen, Colin; Kamo, Masashi; Lofts, Steve; Mebane, Christopher A.; Naito, Wataru; Ryan, Adam C.; Santore, Robert C.; Tipping, Edward
2015-01-01
As part of the Metal Mixture Modeling Evaluation (MMME) project, models were developed by the National Institute of Advanced Industrial Science and Technology (Japan), the U.S. Geological Survey (USA), HDR|HydroQual, Inc. (USA), and the Centre for Ecology and Hydrology (UK) to address the effects of metal mixtures on biological responses of aquatic organisms. A comparison of the 4 models, as they were presented at the MMME Workshop in Brussels, Belgium (May 2012), is provided herein. Overall, the models were found to be similar in structure (free ion activities computed by WHAM; specific or non-specific binding of metals/cations in or on the organism; specification of metal potency factors and/or toxicity response functions to relate metal accumulation to biological response). Major differences in modeling approaches are attributed to various modeling assumptions (e.g., single versus multiple types of binding sites on the organism) and specific calibration strategies that affected the selection of model parameters. The models provided a reasonable description of additive (or nearly additive) toxicity for a number of individual toxicity test results. Less-than-additive toxicity was more difficult to describe with the available models. Because of limitations in the available datasets and the strong inter-relationships among the model parameters (log KM values, potency factors, toxicity response parameters), further evaluation of specific model assumptions and calibration strategies is needed.
Validation of Modeling Flow Approaching Navigation Locks
2013-08-01
[Report front matter: figure list only. Recoverable captions: Tools and instrumentation (bracket attached to rail; direction vernier); Plan A lock approach, upstream approach; numerical model.]
A participatory modelling approach to developing a numerical sediment dynamics model
Jones, Nicholas; McEwen, Lindsey; Parker, Chris; Staddon, Chad
2016-04-01
Fluvial geomorphology is recognised as an important consideration in policy and legislation in the management of river catchments. Despite this recognition, limited knowledge exchange occurs between scientific researchers and river management practitioners. An example of this can be found within the limited uptake of numerical models of sediment dynamics by river management practitioners in the United Kingdom. The uptake of these models amongst the applied community is important as they have the potential to articulate how, at the catchment scale, management strategies and land-use change affect sediment dynamics and the resulting channel quality. This paper describes and evaluates a new approach which involves river management stakeholders in an iterative and reflexive participatory modelling process. The aim of this approach was to create an environment for knowledge exchange between the stakeholders and the research team in the process of co-constructing a model. This process adopted a multiple case study approach, involving four groups of river catchment stakeholders in the United Kingdom. These stakeholder groups were involved in several stages of the participatory modelling process including: requirements analysis, model design, model development, and model evaluation. Stakeholders have provided input into a number of aspects of the modelling process, such as: data requirements, user interface, modelled processes, model assumptions, model applications, and model outputs. This paper will reflect on this process, in particular: the innovative methods used, data generated, and lessons learnt.
Model Mapping Approach Based on Ontology Semantics
Directory of Open Access Journals (Sweden)
Jinkui Hou
2013-09-01
Full Text Available The mapping relations between different models are the foundation for model transformation in model-driven software development. On the basis of ontology semantics, model mappings between different levels are classified by using the structural semantics of modeling languages. The general process for defining mapping relations is explored, and principles of structure mapping are subsequently proposed. The approach is further illustrated by the mapping relations from a class model in an object-oriented modeling language to C programming code. The application research shows that the approach provides theoretical guidance for the realization of model mapping and can thus effectively support model-driven software development.
Modelling hybrid stars in quark-hadron approaches
Energy Technology Data Exchange (ETDEWEB)
Schramm, S. [FIAS, Frankfurt am Main (Germany); Dexheimer, V. [Kent State University, Department of Physics, Kent, OH (United States); Negreiros, R. [Federal Fluminense University, Gragoata, Niteroi (Brazil)
2016-01-15
The density in the core of neutron stars can reach values of about 5 to 10 times nuclear matter saturation density. It is, therefore, a natural assumption that hadrons may have dissolved into quarks under such conditions, forming a hybrid star. Such a star will have an outer region of hadronic matter and a core of quark matter or even a mixed state of hadrons and quarks. In order to investigate such phases, we discuss different model approaches that can be used in the study of compact stars as well as being applicable to a wider range of temperatures and densities. One major model ingredient, the role of quark interactions in the stability of massive hybrid stars, is discussed. In this context, possible conflicts with lattice QCD simulations are investigated. (orig.)
DEFF Research Database (Denmark)
Andreasen, Martin Møller; Christensen, Bent Jesper
This paper suggests a new and easy approach to estimate linear and non-linear dynamic term structure models with latent factors. We impose no distributional assumptions on the factors and they may therefore be non-Gaussian. The novelty of our approach is to use many observables (yields or bonds p...
Learning Action Models: Qualitative Approach
Bolander, T.; Gierasimczuk, N.; van der Hoek, W.; Holliday, W.H.; Wang, W.-F.
2015-01-01
In dynamic epistemic logic, actions are described using action models. In this paper we introduce a framework for studying learnability of action models from observations. We present first results concerning propositional action models. First we check two basic learnability criteria: finite
Modern Cosmology: Assumptions and Limits
Hwang, Jai-Chan
2012-06-01
Physical cosmology tries to understand the Universe at large with its origin and evolution. Observational and experimental situations in cosmology do not allow us to proceed purely based on the empirical means. We examine in which sense our cosmological assumptions in fact have shaped our current cosmological worldview with consequent inevitable limits. Cosmology, as other branches of science and knowledge, is a construct of human imagination reflecting the popular belief system of the era. The question at issue deserves further philosophic discussions. In Whitehead's words, ``philosophy, in one of its functions, is the critic of cosmologies.'' (Whitehead 1925).
Modern Cosmology: Assumptions and Limits
Hwang, Jai-chan
2012-01-01
Physical cosmology tries to understand the Universe at large with its origin and evolution. Observational and experimental situations in cosmology do not allow us to proceed purely based on the empirical means. We examine in which sense our cosmological assumptions in fact have shaped our current cosmological worldview with consequent inevitable limits. Cosmology, as other branches of science and knowledge, is a construct of human imagination reflecting the popular belief system of the era. The question at issue deserves further philosophic discussions. In Whitehead's words, "philosophy, in one of its functions, is the critic of cosmologies". (Whitehead 1925)
Learning Actions Models: Qualitative Approach
DEFF Research Database (Denmark)
Bolander, Thomas; Gierasimczuk, Nina
2015-01-01
—they are identifiable in the limit. We then move on to a particular learning method, which proceeds via restriction of a space of events within a learning-specific action model. This way of learning closely resembles the well-known update method from dynamic epistemic logic. We introduce several different learning......In dynamic epistemic logic, actions are described using action models. In this paper we introduce a framework for studying learnability of action models from observations. We present first results concerning propositional action models. First we check two basic learnability criteria: finite...... identifiability (conclusively inferring the appropriate action model in finite time) and identifiability in the limit (inconclusive convergence to the right action model). We show that deterministic actions are finitely identifiable, while non-deterministic actions require more learning power...
Geometrical approach to fluid models
Kuvshinov, B. N.; Schep, T. J.
1997-01-01
Differential geometry based upon the Cartan calculus of differential forms is applied to investigate invariant properties of equations that describe the motion of continuous media. The main feature of this approach is that physical quantities are treated as geometrical objects. The geometrical
Geometrical approach to fluid models
Kuvshinov, B. N.; Schep, T. J.
1997-01-01
Differential geometry based upon the Cartan calculus of differential forms is applied to investigate invariant properties of equations that describe the motion of continuous media. The main feature of this approach is that physical quantities are treated as geometrical objects. The geometrical notio
Model based feature fusion approach
Schwering, P.B.W.
2001-01-01
In recent years different sensor data fusion approaches have been analyzed and evaluated in the field of mine detection. In various studies comparisons have been made between different techniques. Although claims can be made for advantages for using certain techniques, until now there has been no si
Closed World Assumption for Disjunctive Reasoning
Institute of Scientific and Technical Information of China (English)
WANG Kewen; ZHOU Lizhu
2001-01-01
In this paper, the relationship between argumentation and closed world reasoning for disjunctive information is studied. In particular, the authors propose a simple and intuitive generalization of the closed world assumption (CWA) for general disjunctive deductive databases (with default negation). This semantics, called DCWA, allows a natural argumentation-based interpretation and can be used to represent reasoning for disjunctive information. We compare DCWA with GCWA and prove that DCWA extends Minker's GCWA to the class of disjunctive databases with default negation. Also, we compare our semantics with some related approaches. In addition, the computational complexity of DCWA is investigated.
Global energy modeling - A biophysical approach
Energy Technology Data Exchange (ETDEWEB)
Dale, Michael
2010-09-15
This paper contrasts the standard economic approach to energy modelling with energy models using a biophysical approach. Neither of these approaches includes changing energy-returns-on-investment (EROI) due to declining resource quality or the capital intensive nature of renewable energy sources. Both of these factors will become increasingly important in the future. An extension to the biophysical approach is outlined which encompasses a dynamic EROI function that explicitly incorporates technological learning. The model is used to explore several scenarios of long-term future energy supply especially concerning the global transition to renewable energy sources in the quest for a sustainable energy system.
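The extension described, a dynamic EROI that improves with technological learning, can be written as a simple saturating function of cumulative output and fed into a net-energy calculation. The functional form and the numbers below are assumptions for illustration, not parameters from the paper.

```python
# Sketch: net energy from a renewable source under a fixed EROI versus an EROI
# that rises with cumulative deployment (technological learning).
import numpy as np

def eroi(cumulative_output, eroi_0=10.0, eroi_max=30.0, learning_rate=0.02):
    """Assumed learning curve: EROI rises with experience and saturates."""
    return eroi_max - (eroi_max - eroi_0) * np.exp(-learning_rate * cumulative_output)

years = 50
gross = np.full(years, 5.0)                    # gross output, EJ per year (assumed constant)
cumulative = np.cumsum(gross)

net_dynamic = gross * (1.0 - 1.0 / eroi(cumulative))   # energy left after energy reinvestment
net_static = gross * (1.0 - 1.0 / 10.0)                # fixed EROI of 10 for comparison

print("net energy in year 1 :", round(net_dynamic[0], 2), "EJ (dynamic) vs",
      round(net_static[0], 2), "EJ (static)")
print("net energy in year 50:", round(net_dynamic[-1], 2), "EJ (dynamic) vs",
      round(net_static[-1], 2), "EJ (static)")
```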
A POMDP approach to Affective Dialogue Modeling
Bui Huu Trung, B.H.T.; Poel, Mannes; Nijholt, Antinus; Zwiers, Jakob; Keller, E.; Marinaro, M.; Bratanic, M.
2007-01-01
We propose a novel approach to developing a dialogue model that is able to take into account some aspects of the user's affective state and to act appropriately. Our dialogue model uses a Partially Observable Markov Decision Process approach with observations composed of the observed user's
The chronic diseases modelling approach
Hoogenveen RT; Hollander AEM de; Genugten MLL van; CCM
1998-01-01
A mathematical model structure is described that can be used to simulate the changes of the Dutch public health state over time. The model is based on the concept of demographic and epidemiologic processes (events) and is mathematically based on the lifetable method. The population is divided over s
A covariate-adjustment regression model approach to noninferiority margin definition.
Nie, Lei; Soon, Guoxing
2010-05-10
To maintain the interpretability of the effect of an experimental treatment (EXP) obtained from a noninferiority trial, current statistical approaches often require the constancy assumption. This assumption typically requires that the control treatment effect in the population of the active control trial is the same as its effect in the population of the historical trial. To prevent violation of the constancy assumption, clinical trial sponsors have been advised to make the design of the active control trial as close to the design of the historical trial as possible. However, these rigorous requirements are rarely fulfilled in practice. The inevitable discrepancies between the historical trial and the active control trial have led to debates on many controversial issues. Without support from a well-developed quantitative method for determining the impact of the discrepancies on the constancy assumption violation, a correct judgment seems difficult. In this paper, we present a covariate-adjustment generalized linear regression model approach to achieve two goals: (1) to quantify the impact of population differences between the historical trial and the active control trial on the degree of constancy assumption violation and (2) to redefine the active control treatment effect in the active control trial population if the quantification suggests an unacceptable violation. Through achieving goal (1), we examine whether or not a population difference leads to an unacceptable violation. Through achieving goal (2), we redefine the noninferiority margin if the violation is unacceptable. This approach allows us to correctly determine the effect of EXP in the noninferiority trial population when the constancy assumption is violated due to the population difference. We illustrate the covariate-adjustment approach through a case study.
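The standardisation idea behind such a covariate-adjustment model can be sketched as: fit an outcome regression in the historical-trial data, then average the predicted treatment contrast over the covariate distribution of the new trial population. The example below uses a plain linear model (a special case of a generalized linear model), simulated data, and a single covariate; all variable names and numbers are assumptions for illustration, not the method or data of the paper.

```python
# Sketch of covariate standardisation: re-express the historical control effect
# in the covariate distribution of the new (active-control) trial population.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 2000

# Simulated historical placebo-controlled trial: the control effect depends on age.
hist = pd.DataFrame({
    "control": rng.integers(0, 2, n),            # 1 = active control, 0 = placebo
    "age": rng.normal(55, 10, n),
})
hist["y"] = (2.0 + 1.5 * hist["control"] - 0.03 * hist["age"]
             - 0.02 * hist["control"] * hist["age"] + rng.normal(0, 1, n))

model = smf.ols("y ~ control * age", data=hist).fit()

# Covariates of the (older) noninferiority-trial population.
new_pop = pd.DataFrame({"age": rng.normal(65, 10, n)})

def standardised_effect(pop):
    return (model.predict(pop.assign(control=1)) - model.predict(pop.assign(control=0))).mean()

print("control effect, historical population:", round(standardised_effect(hist), 3))
print("control effect, new trial population :", round(standardised_effect(new_pop), 3))
```

The gap between the two numbers is a direct measure of how much the population difference erodes the constancy assumption, and the second number is the natural starting point for redefining the noninferiority margin.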
Learning Actions Models: Qualitative Approach
DEFF Research Database (Denmark)
Bolander, Thomas; Gierasimczuk, Nina
2015-01-01
identifiability (conclusively inferring the appropriate action model in finite time) and identifiability in the limit (inconclusive convergence to the right action model). We show that deterministic actions are finitely identifiable, while non-deterministic actions require more learning power......—they are identifiable in the limit. We then move on to a particular learning method, which proceeds via restriction of a space of events within a learning-specific action model. This way of learning closely resembles the well-known update method from dynamic epistemic logic. We introduce several different learning...
A Unified Approach to Modeling and Programming
DEFF Research Database (Denmark)
Madsen, Ole Lehrmann; Møller-Pedersen, Birger
2010-01-01
of this paper is to go back to the future and get inspiration from SIMULA and propose a unified approach. In addition to reintroducing the contributions of SIMULA and the Scandinavian approach to object-oriented programming, we do this by discussing a number of issues in modeling and programming and argue why we......SIMULA was a language for modeling and programming and provided a unified approach to modeling and programming in contrast to methodologies based on structured analysis and design. The current development seems to be going in the direction of separation of modeling and programming. The goal...
Qian, Song S; Craig, J Kevin; Baustian, Melissa M; Rabalais, Nancy N
2009-12-01
We introduce the Bayesian hierarchical modeling approach for analyzing observational data from marine ecological studies using a data set intended for inference on the effects of bottom-water hypoxia on macrobenthic communities in the northern Gulf of Mexico off the coast of Louisiana, USA. We illustrate (1) the process of developing a model, (2) the use of the hierarchical model results for statistical inference through innovative graphical presentation, and (3) a comparison to the conventional linear modeling approach (ANOVA). Our results indicate that the Bayesian hierarchical approach is better able to detect a "treatment" effect than classical ANOVA while avoiding several arbitrary assumptions necessary for linear models, and is also more easily interpreted when presented graphically. These results suggest that the hierarchical modeling approach is a better alternative than conventional linear models and should be considered for the analysis of observational field data from marine systems.
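The key mechanism of the hierarchical approach, partial pooling of group-level estimates toward a common mean, can be shown in a few lines. The sketch below uses a normal-normal model with crude plug-in variance estimates on simulated data; it is a schematic illustration only, not the hypoxia data set or the authors' full Bayesian model.

```python
# Minimal partial-pooling sketch: site means are shrunk toward the grand mean
# in proportion to how noisy each site-level estimate is.
import numpy as np

rng = np.random.default_rng(1)

n_sites, n_per_site = 8, 5
true_means = rng.normal(10.0, 2.0, size=n_sites)              # assumed site-level abundances
obs = rng.normal(true_means[:, None], 4.0, size=(n_sites, n_per_site))

y_bar = obs.mean(axis=1)                       # no-pooling (per-site) estimates
sigma2 = 4.0**2 / n_per_site                   # sampling variance of a site mean (known here)
tau2 = max(y_bar.var(ddof=1) - sigma2, 0.01)   # crude between-site variance estimate
mu = y_bar.mean()

shrinkage = sigma2 / (sigma2 + tau2)           # how far each site is pulled to the grand mean
partially_pooled = shrinkage * mu + (1 - shrinkage) * y_bar

print(np.column_stack([true_means, y_bar, partially_pooled]).round(2))
```

On average, the shrunken estimates sit closer to the true site means than the raw site averages do, which is the intuition behind the abstract's finding that the hierarchical model detects a "treatment" effect more readily than the classical analysis.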
Nielsen, Simon; Wilms, L. Inge
2015-01-01
We examined the effects of normal aging on visual cognition in a sample of 112 healthy adults aged 60–75. A testbattery was designed to capture high-level measures of visual working memory and low-level measures of visuospatial attention and memory. To answer questions of how cognitive aging affects specific aspects of visual processing capacity, we used confirmatory factor analyses in Structural Equation Modeling (SEM; Model 2), informed by functional structures that were modeled with path a...
Exploring gravitational statistics not based on quantum dynamical assumptions
Mandrin, P A
2016-01-01
Despite considerable progress in several approaches to quantum gravity, there remain uncertainties on the conceptual level. One issue concerns the different roles played by space and time in the canonical quantum formalism. This issue occurs because the Hamilton-Jacobi dynamics is being quantised. The question then arises whether additional physically relevant states could exist which cannot be represented in the canonical form or as a partition function. For this reason, the author has explored a statistical approach (NDA) which is not based on quantum dynamical assumptions and does not require space-time splitting boundary conditions either. For dimension 3+1 and under thermal equilibrium, NDA simplifies to a path integral model. However, the general case of NDA cannot be written as a partition function. As a test of NDA, one recovers general relativity at low curvature and quantum field theory in the flat space-time approximation. Related paper: arxiv:1505.03719.
Challenged assumptions and invisible effects
DEFF Research Database (Denmark)
Wimmelmann, Camilla Lawaetz; Vitus, Kathrine; Jervelund, Signe Smith
2017-01-01
for the implementation—different from the assumed conditions—not only challenge the implementation of the intervention but also potentially produce unanticipated yet valuable effects. Research implications – Newly arrived immigrants represent a hugely diverse and heterogeneous group of people with differing values...... of two complete intervention courses and an analysis of the official intervention documents. Findings – This case study exemplifies how the basic normative assumptions behind an immigrant-oriented intervention and the intrinsic power relations therein may be challenged and negotiated by the participants....... In particular, the assumed (power) relations inherent in immigrant-oriented educational health interventions, in which immigrants are in a novice position, are challenged, as the immigrants are experienced adults (and parents) in regard to healthcare. The paper proposes that such unexpected conditions...
van der Sluijs, Jeroen; Wardekker, Arjan
2015-01-01
In order to enable anticipation and proactive adaptation, local decision makers increasingly seek detailed foresight about regional and local impacts of climate change. To this end, the Netherlands Models and Data-Centre implemented a pilot chain of sequentially linked models to project local climat
Faulty assumptions for repository requirements
Energy Technology Data Exchange (ETDEWEB)
Sutcliffe, W G
1999-06-03
Long-term performance requirements for a geologic repository for spent nuclear fuel and high-level waste are based on assumptions concerning water use and subsequent deaths from cancer due to ingesting water contaminated with radioisotopes ten thousand years in the future. This paper argues that the assumptions underlying these requirements are faulty for a number of reasons. First, in light of the inevitable technological progress, including efficient desalination of water, over the next ten thousand years, it is inconceivable that a future society would drill for water near a repository. Second, even today we would not use water without testing its purity. Third, today many types of cancer are curable, and with the rapid progress in medical technology in general, and the prevention and treatment of cancer in particular, it is improbable that cancer caused by ingesting contaminated water will be a significant killer in the far future. This paper reviews the performance requirements for geologic repositories and comments on the difficulties of proving compliance in the face of inherent uncertainties. The already tiny long-term risk posed by a geologic repository is presented and contrasted with contemporary everyday risks. A number of examples of technological progress, including cancer treatments, are advanced. The real and significant costs resulting from the overly conservative requirements are then assessed. Examples are given of how money (and political capital) could be put to much better use to save lives today and in the future. It is concluded that although a repository represents essentially no long-term risk, monitored retrievable dry storage (above or below ground) is the current best alternative for spent fuel and high-level nuclear waste.
Szekeres models: a covariant approach
Apostolopoulos, Pantelis S
2016-01-01
We exploit the 1+1+2 formalism to covariantly describe the inhomogeneous and anisotropic Szekeres models. It is shown that an average scale length can be defined covariantly which satisfies a 2d equation of motion driven by the effective gravitational mass (EGM) contained in the dust cloud. The contributions to the EGM are encoded in the energy density of the dust fluid and the free gravitational field $E_{ab}$. In addition, the notions of the Apparent and Absolute Apparent Horizons are briefly discussed and we give an alternative gauge-invariant form to define them in terms of the kinematical variables of the spacelike congruences. We argue that the proposed program can be used to express the Sachs optical equations in a covariant form and to analyze the confrontation of a spatially inhomogeneous irrotational overdense fluid model with the observational data.
Matrix Model Approach to Cosmology
Chaney, A; Stern, A
2015-01-01
We perform a systematic search for rotationally invariant cosmological solutions to matrix models, or more specifically the bosonic sector of Lorentzian IKKT-type matrix models, in dimensions $d$ less than ten, specifically $d=3$ and $d=5$. After taking a continuum (or commutative) limit they yield $d-1$ dimensional space-time surfaces, with an attached Poisson structure, which can be associated with closed, open or static cosmologies. For $d=3$, we obtain recursion relations from which it is possible to generate rotationally invariant matrix solutions which yield open universes in the continuum limit. Specific examples of matrix solutions have also been found which are associated with closed and static two-dimensional space-times in the continuum limit. The solutions provide for a matrix resolution of cosmological singularities. The commutative limit reveals other desirable features, such as a solution describing a smooth transition from an initial inflation to a noninflationary era. Many of the $d=3$ soluti...
Matthews, S.; Lovell, M.; Davies, S. J.; Pritchard, T.; Sirju, C.; Abdelkarim, A.
2012-12-01
Heterolithic or 'shaly' sandstone reservoirs constitute a significant proportion of hydrocarbon resources. Petroacoustic models (a combination of petrophysics and rock physics) enhance the ability to extract reservoir properties from seismic data, providing a connection between seismic and fine-scale rock properties. By incorporating sedimentological observations these models can be better constrained and improved. Petroacoustic modelling is complicated by the unpredictable effects of clay minerals and clay-sized particles on geophysical properties. Such effects are responsible for erroneous results when models developed for "clean" reservoirs - such as Gassmann's equation (Gassmann, 1951) - are applied to heterolithic sandstone reservoirs. Gassmann's equation is arguably the most popular petroacoustic modelling technique in the hydrocarbon industry and is used to model the elastic effects of changing reservoir fluid saturations. Successful implementation of Gassmann's equation requires well-constrained drained rock frame properties, which in heterolithic sandstones are heavily influenced by reservoir sedimentology, particularly clay distribution. The prevalent approach to categorising clay distribution is based on the Thomas-Stieber model (Thomas & Stieber, 1975); this approach is inconsistent with current understanding of 'shaly sand' sedimentology and omits properties such as sorting and grain size. The novel approach presented here demonstrates that characterising reservoir sedimentology constitutes an important modelling phase. As well as incorporating sedimentological constraints, this novel approach also aims to improve drained frame moduli estimates through more careful consideration of Gassmann's model assumptions and limitations. A key assumption of Gassmann's equation is a pore space in total communication with movable fluids. This assumption is often violated by conventional applications in heterolithic sandstone reservoirs where effective porosity, which
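For reference, Gassmann's equation itself is compact enough to write out, and doing so makes the total-pore-communication assumption explicit. The function below is the standard textbook form; the input values are assumed for a generic clean sandstone and are not taken from the reservoirs discussed in the abstract.

```python
# Gassmann (1951) fluid substitution: saturated bulk modulus from the dry frame.
def gassmann_ksat(k_dry, k_mineral, k_fluid, phi):
    """Assumes a homogeneous mineral frame and a pore space in full pressure
    communication with a movable fluid, the assumption the abstract notes is
    often violated in heterolithic sandstones."""
    b = 1.0 - k_dry / k_mineral
    return k_dry + b**2 / (phi / k_fluid + (1.0 - phi) / k_mineral - k_dry / k_mineral**2)

# Illustrative values: quartz frame 37 GPa, brine 2.8 GPa, gas 0.04 GPa, 25% porosity.
k_dry = 12.0   # GPa, assumed dry-frame bulk modulus
print("brine-saturated bulk modulus:", round(gassmann_ksat(k_dry, 37.0, 2.8, 0.25), 2), "GPa")
print("gas-saturated bulk modulus:  ", round(gassmann_ksat(k_dry, 37.0, 0.04, 0.25), 2), "GPa")
```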
A new approach to adaptive data models
Directory of Open Access Journals (Sweden)
Ion LUNGU
2016-12-01
Full Text Available Over the last decade, there has been a substantial increase in the volume and complexity of data we collect, store and process. We are now aware of the increasing demand for real time data processing in every continuous business process that evolves within the organization. We witness a shift from a traditional static data approach to a more adaptive model approach. This article aims to extend understanding in the field of data models used in information systems by examining how an adaptive data model approach for managing business processes can help organizations accommodate on the fly and build dynamic capabilities to react in a dynamic environment.
Modeling software behavior a craftsman's approach
Jorgensen, Paul C
2009-01-01
A common problem with most texts on requirements specifications is that they emphasize structural models to the near exclusion of behavioral models-focusing on what the software is, rather than what it does. If they do cover behavioral models, the coverage is brief and usually focused on a single model. Modeling Software Behavior: A Craftsman's Approach provides detailed treatment of various models of software behavior that support early analysis, comprehension, and model-based testing. Based on the popular and continually evolving course on requirements specification models taught by the auth
Energy Technology Data Exchange (ETDEWEB)
Mendoza, V.M.; Villanueva, E.E.; Garduno, R.; Adem, J. [Centro de Ciencias de la Atmosfera, Mexico (Mexico)
1995-12-31
General circulation models (GCMs) and energy balance models (EBMs) are the best way to simulate the complex large-scale dynamic and thermodynamic processes in the atmosphere. These models have been used to estimate the global warming due to an increase of atmospheric CO2. In Japan, Ohta and coworkers have developed a physical model based on the conservation of thermal energy applied to ponded shallow water, to compute the change in water temperature using the atmospheric warming and the precipitation due to the increase in atmospheric CO2 computed by the GISS-GCM. In this work, a method similar to Ohta's is used to compute the change in ground temperature, soil moisture, evaporation, runoff and dryness index in eleven hydrological zones, using in this case the surface air temperature and precipitation due to CO2 doubling computed by the GFDL R30 GCM and by a version of the Adem thermodynamic climate model (CTM-EBM) which contains the three feedbacks (cryosphere, clouds and water vapor) and does not include water vapor in the CO2 atmospheric spectral band (12-19 µm).
Institute of Scientific and Technical Information of China (English)
HUANG Shao'an; SUN Tao
2006-01-01
Orthodox consumption theories have not incorporated the overlapping-generations (OLG) model and the wealth-stock model, whereas this article explains households' consumption and savings behavior in countries and regions such as China from the viewpoints of social convention, moral formation, ethics, and other informal institutions. The authors exploit and extend the OLG model, introduce the concepts of bequest, gift, and wealth preference into the economic agent's utility function, and then apply optimality conditions to analyze the characteristics of and problems concerning consumption and savings behavior. Furthermore, they consider the implications of this analysis for government macroeconomic policies and suggest some relevant theoretical thinking and solutions.
Morrison, Diane M; Golder, Seana; Keller, Thomas E; Gillmore, Mary Rogers
2002-09-01
The theory of reasoned action (TRA) is used to model decisions about substance use among young mothers who became premaritally pregnant at age 17 or younger. The results of structural equation modeling to test the TRA indicated that most relationships specified by the model were significant and in the predicted direction. Attitude was a stronger predictor of intention than norm, but both were significantly related to intention, and intention was related to actual marijuana use 6 months later. Outcome beliefs were bidimensional, and positive outcome beliefs, but not negative beliefs, were significantly related to attitude. Prior marijuana use was only partially mediated by the TRA variables; it also was directly related to intentions to use marijuana and to subsequent use.
DEFF Research Database (Denmark)
Sunyer Pinya, Maria Antonia; Madsen, Henrik; Rosbjerg, Dan
2014-01-01
in climate model biases are negligible. This study develops a Bayesian framework that accounts for model dependencies and changes in model biases and compares it to estimates calculated based on a frequentist approach. The Bayesian framework is used to investigate the effects of the two assumptions......Climate change impact studies are subject to numerous uncertainties and assumptions. One of the main sources of uncertainty arises from the interpretation of climate model projections. Probabilistic procedures based on multimodel ensembles have been suggested in the literature to quantify...... this source of uncertainty. However, the interpretation of multimodel ensembles remains challenging. Several assumptions are often required in the uncertainty quantification of climate model projections. For example, most methods often assume that the climate models are independent and/or that changes...
Current approaches to gene regulatory network modelling
Directory of Open Access Journals (Sweden)
Brazma Alvis
2007-09-01
Full Text Available Many different approaches have been developed to model and simulate gene regulatory networks. We propose the following categories for gene regulatory network models: network parts lists, network topology models, network control logic models, and dynamic models. Here we describe some examples for each of these categories. We study the topology of gene regulatory networks in yeast in more detail, comparing a direct network derived from transcription factor binding data and an indirect network derived from genome-wide expression data in mutants. Regarding network dynamics, we briefly describe discrete and continuous approaches to network modelling, then describe a hybrid model called the Finite State Linear Model and demonstrate that some simple network dynamics can be simulated in this model.
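Of the categories listed, the "network control logic" and "dynamic" models are the easiest to make concrete in code. The Boolean network below is a toy example with an invented three-gene wiring, not a network taken from the article; it simply shows how a discrete dynamic model is iterated.

```python
# Toy Boolean gene regulatory network: the next state of each gene is a logical
# function of the current state of its regulators (wiring invented for illustration).
def step(state):
    a, b, c = state                 # current expression states (0 or 1)
    return (
        int(not c),                 # gene A is repressed by C
        int(a),                     # gene B is activated by A
        int(a and b),               # gene C requires both A and B
    )

state = (1, 0, 0)
trajectory = [state]
for _ in range(8):
    state = step(state)
    trajectory.append(state)

print(trajectory)                   # the trajectory settles into a repeating cycle (an attractor)
```

Hybrid formulations such as the Finite State Linear Model couple a discrete control state to linear rules for expression levels, but the basic workflow of iterating a state and examining its attractors is similar.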
Reichle, E.D.; Tokowicz, N.; Liu, Y.; Perfetti, C.A.
2011-01-01
According to the E-Z Reader model of eye-movement control, the completion of an early stage of lexical processing, the familiarity check, causes the eyes to move forward during reading (Reichle, Pollatsek, Fisher, & Rayner, 1998). Here, we report an event-related potential (ERP
Model Oriented Approach for Industrial Software Development
Directory of Open Access Journals (Sweden)
P. D. Drobintsev
2015-01-01
Full Text Available The article considers the specifics of a model-oriented approach to software development based on the use of Model Driven Architecture (MDA), Model Driven Software Development (MDSD) and Model Driven Development (MDD) technologies. Benefits of using this approach in the software development industry are described. The main emphasis is put on system design, automated code generation for large systems, verification, proof of system properties and reduction of bug density. Drawbacks of the approach are also considered. The approach proposed in the article is specific to the development of industrial software systems. These systems are characterized by different levels of abstraction used in the modeling and code development phases. The approach allows the model to be detailed to the level of the system code while preserving the verified model semantics and providing checking of the whole detailed model. Steps for translating abstract data structures (including transactions, signals and their parameters) into the data structures used in the detailed system implementation are presented. The grammar of a language for specifying rules for transforming abstract model data structures into the detailed data structures of the real system is also described. The results of applying the proposed method in an industrial technology are shown. The article is published in the authors' wording.
Distributed simulation a model driven engineering approach
Topçu, Okan; Oğuztüzün, Halit; Yilmaz, Levent
2016-01-01
Backed by substantive case studies, the novel approach to software engineering for distributed simulation outlined in this text demonstrates the potent synergies between model-driven techniques, simulation, intelligent agents, and computer systems development.
Review of modeling approaches for emergency department patient flow and crowding research.
Wiler, Jennifer L; Griffey, Richard T; Olsen, Tava
2011-12-01
Emergency department (ED) crowding is an international phenomenon that continues to challenge operational efficiency. Many statistical modeling approaches have been offered to describe, and at times predict, ED patient load and crowding. A number of formula-based equations, regression models, time-series analyses, queuing theory-based models, and discrete-event (or process) simulation (DES) models have been proposed. In this review, we compare and contrast these modeling methodologies, describe the fundamental assumptions each makes, and outline the potential applications and limitations for each with regard to usability in ED operations and in ED operations and crowding research. © 2011 by the Society for Academic Emergency Medicine.
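As a concrete instance of the queuing-theory family mentioned above, the expected waiting time for an M/M/c queue (Poisson arrivals, exponential service, c parallel treatment spaces) follows from the Erlang C formula. The staffing and arrival numbers below are illustrative, not drawn from any of the reviewed studies.

```python
# Erlang C sketch: mean queueing delay in an M/M/c model of an ED.
from math import factorial

def erlang_c_wait(arrival_rate, service_rate, servers):
    """Mean wait before service (same time units as the rates) for an M/M/c queue."""
    a = arrival_rate / service_rate                     # offered load (Erlangs)
    rho = a / servers                                   # utilisation
    if rho >= 1.0:
        return float("inf")                             # unstable: queue grows without bound
    head = a**servers / factorial(servers)
    p_wait = head / ((1 - rho) * sum(a**k / factorial(k) for k in range(servers)) + head)
    return p_wait / (servers * service_rate - arrival_rate)

# Example: 8 arrivals/hour, each bed turns over 1.5 patients/hour, 6 beds staffed.
print(round(erlang_c_wait(arrival_rate=8.0, service_rate=1.5, servers=6), 2), "hours")
```

Formula-based models like this are transparent and fast, but they rest on stationarity and exponential-service assumptions that the review contrasts with regression, time-series and discrete-event simulation approaches.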
Reichle, Erik D; Tokowicz, Natasha; Liu, Ying; Perfetti, Charles A
2011-07-01
According to the E-Z Reader model of eye-movement control, the completion of an early stage of lexical processing, the familiarity check, causes the eyes to move forward during reading (Reichle, Pollatsek, Fisher, & Rayner, 1998). Here, we report an event-related potential (ERP) experiment designed to examine the hypothesized familiarity check at the electrophysiological level. The results indicate ERP components modulated by word frequency at the time of the predicted familiarity check. These findings are consistent with the hypothesis that an early stage of lexical processing is linked to the "decisions" about when to move the eyes during reading. Copyright © 2011 Society for Psychophysiological Research.
A multi-model approach to X-ray pulsars
Directory of Open Access Journals (Sweden)
Schönherr G.
2014-01-01
Full Text Available The emission characteristics of X-ray pulsars are governed by magnetospheric accretion within the Alfvén radius, leading to a direct coupling of accretion column properties and interactions at the magnetosphere. The complexity of the physical processes governing the formation of radiation within the accreted, strongly magnetized plasma has led to several sophisticated theoretical modelling efforts over the last decade, dedicated to either the formation of the broad-band continuum, the formation of cyclotron resonance scattering features (CRSFs) or the formation of pulse profiles. While these individual approaches are powerful in themselves, they quickly reach their limits when aiming at a quantitative comparison to observational data. Too many fundamental parameters describing the formation of the accretion columns and the systems' overall geometry are unconstrained, and different models are often based on different fundamental assumptions, while everything is intertwined in the observed, highly phase-dependent spectra and energy-dependent pulse profiles. To name just one example: the (phase-variable) line width of the CRSFs is highly dependent on the plasma temperature, the existence of B-field gradients (geometry) and the observation angle, parameters which, in turn, drive the continuum radiation and are driven by the overall two-pole geometry for the light-bending model, respectively. This renders a parallel assessment of all available spectral and timing information by a compatible across-models approach indispensable. In a collaboration of theoreticians and observers, we have been working on a model unification project over the last years, bringing together theoretical calculations of the Comptonized continuum, Monte Carlo simulations and radiation transfer calculations of CRSFs, as well as a General Relativity (GR) light-bending model for ray tracing of the incident emission pattern from both magnetic poles. The ultimate goal is to implement a
Kerfriden, Pierre; Goury, Olivier; Khac Chi, Hoang; Bordas, Stéphane
2014-01-01
Computational homogenisation is a widely spread technique to calculate the overall properties of a composite material from the knowledge of the constitutive laws of its microscopic constituents [1, 2]. Indeed, it relies on fewer assumptions than analytical or semi-analytical homogenisation approaches and can be used to coarse-grain a large range of micro-mechanical models. However, this accuracy comes at large computational costs, which prevents computational homogenisation from b...
A Set Theoretical Approach to Maturity Models
DEFF Research Database (Denmark)
Lasrado, Lester; Vatrapu, Ravi; Andersen, Kim Normann
2016-01-01
Maturity Model research in IS has been criticized for the lack of theoretical grounding, methodological rigor, empirical validations, and ignorance of multiple and non-linear paths to maturity. To address these criticisms, this paper proposes a novel set-theoretical approach to maturity models ch...
How Symmetrical Assumptions Advance Strategic Management Research
DEFF Research Database (Denmark)
Foss, Nicolai Juul; Hallberg, Hallberg
2014-01-01
We develop the case for symmetrical assumptions in strategic management theory. Assumptional symmetry obtains when assumptions made about certain actors and their interactions in one of the application domains of a theory are also made about this set of actors and their interactions in other appl...
Caricchi, Luca; Simpson, Guy; Schaltegger, Urs
2016-04-01
Magma fluxes in the Earth's crust play an important role in regulating the relationship between the frequency and magnitude of volcanic eruptions, the chemical evolution of magmatic systems and the distribution of geothermal energy and mineral resources on our planet. Therefore, quantifying magma productivity and the rate of magma transfer within the crust can provide valuable insights to characterise the long-term behaviour of volcanic systems and to unveil the link between the physical and chemical evolution of magmatic systems and their potential to generate resources. We performed thermal modelling to compute the temperature evolution of crustal magmatic intrusions with different final volumes assembled over a variety of timescales (i.e., at different magma fluxes). Using these results, we calculated synthetic populations of zircon ages, assuming the number of zircons crystallising in a given time period is directly proportional to the volume of magma at a temperature within the zircon crystallisation range. The statistical analysis of the calculated populations of zircon ages shows that the mode, median and standard deviation of the populations vary coherently as a function of the rate of magma injection and the final volume of the crustal intrusions. Therefore, the statistical properties of the population of zircon ages can add useful constraints to quantify the rate of magma injection and the final volume of magmatic intrusions. Here, we explore the effect of different ranges of zircon saturation temperature, intrusion geometry, and wall rock temperature on the calculated distributions of zircon ages. Additionally, we determine the effect of undersampling on the variability of the mode, median and standard deviation of calculated populations of zircon ages to estimate the minimum number of zircon analyses necessary to obtain meaningful estimates of magma flux and final intrusion volume.
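The core of the workflow, generating a synthetic zircon age population whose sampling probability tracks the volume of magma inside the crystallisation window, can be caricatured with a very simple thermal history. In the sketch below each injected pulse cools exponentially toward a background temperature; the cooling law, temperatures, flux and crystallisation window are assumptions chosen for illustration, not the thermal model used in the study.

```python
# Toy zircon-age synthesis: ages are sampled in proportion to how many magma
# pulses sit inside the zircon crystallisation window at each time step.
import numpy as np

rng = np.random.default_rng(3)

dt, steps, n_pulses = 1.0e3, 1000, 100              # 1 kyr steps over 1 Myr, constant flux
t = np.arange(steps) * dt
t_inject = np.linspace(0.0, steps * dt, n_pulses, endpoint=False)

T_inj, T_bg, tau = 900.0, 400.0, 2.0e5              # deg C, deg C, cooling timescale (yr)
zr_lo, zr_hi = 700.0, 850.0                         # assumed zircon crystallisation window

age_since_injection = t[None, :] - t_inject[:, None]
T = np.where(age_since_injection >= 0.0,
             T_bg + (T_inj - T_bg) * np.exp(-age_since_injection / tau),
             0.0)                                   # pulses not yet injected: ignore

weights = ((T > zr_lo) & (T < zr_hi)).sum(axis=0).astype(float)
ages = rng.choice(t, size=500, p=weights / weights.sum())    # synthetic zircon ages

print("mode bin:", t[np.argmax(weights)], "yr   median:", np.median(ages),
      "yr   std:", round(float(ages.std()), 1), "yr")
```

Re-running the sketch with a higher flux (more pulses over the same interval) or a larger final volume shifts the mode, median and standard deviation of the synthetic ages, which is the sensitivity the abstract proposes to invert for magma flux and intrusion volume.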
Liu, Siwei; Rovine, Michael J; Molenaar, Peter C M
2012-03-01
With increasing popularity, growth curve modeling is more and more often considered the first choice for analyzing longitudinal data. Although the growth curve approach is often a good choice, other modeling strategies may more directly answer questions of interest. It is common to see researchers fit growth curve models without considering alternative modeling strategies. In this article we compare 3 approaches for analyzing longitudinal data: repeated measures analysis of variance, covariance pattern models, and growth curve models. As all are members of the general linear mixed model family, they represent somewhat different assumptions about the way individuals change. These assumptions result in different patterns of covariation among the residuals around the fixed effects. In this article, we first indicate the kinds of data that are appropriately modeled by each and use real data examples to demonstrate possible problems associated with the blanket selection of the growth curve model. We then present a simulation that indicates the utility of the Akaike information criterion and the Bayesian information criterion in the selection of a proper residual covariance structure. The results cast doubt on the popular practice of automatically using growth curve modeling for longitudinal data without comparing the fit of different models. Finally, we provide some practical advice for assessing mean changes in the presence of correlated data.
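The point about competing residual covariance structures can be illustrated with a small simulation, not the authors' study. The sketch below generates longitudinal data with a growth-curve-like covariance, fits the same mean trend under a compound-symmetry structure (implied by repeated-measures ANOVA) and under a random-intercept-plus-slope structure, and compares them by AIC and BIC computed from the multivariate normal log-likelihood; all parameter values are invented.

```python
import numpy as np
from scipy.stats import multivariate_normal
from scipy.optimize import minimize

rng = np.random.default_rng(1)
n_subj, n_time = 60, 5
times = np.arange(n_time, dtype=float)

# Simulate data whose residuals contain random intercepts *and* slopes.
true_cov = np.ones((n_time, n_time)) + 0.3 * np.outer(times, times) + 0.5 * np.eye(n_time)
y = rng.multivariate_normal(2.0 + 0.5 * times, true_cov, size=n_subj)

def neg_loglik(params, structure):
    """Negative log-likelihood of a linear mean trend under a chosen residual covariance."""
    b0, b1 = params[:2]
    mu = b0 + b1 * times
    if structure == "compound_symmetry":
        s2, s2_between = np.exp(params[2:4])
        cov = s2 * np.eye(n_time) + s2_between * np.ones((n_time, n_time))
    else:  # random intercept + random slope ("growth curve" covariance)
        s2, s2_int, s2_slope = np.exp(params[2:5])
        cov = s2 * np.eye(n_time) + s2_int * np.ones((n_time, n_time)) + s2_slope * np.outer(times, times)
    return -multivariate_normal(mu, cov).logpdf(y).sum()

for structure, n_par in [("compound_symmetry", 4), ("growth_curve", 5)]:
    res = minimize(neg_loglik, np.zeros(n_par), args=(structure,), method="Nelder-Mead",
                   options={"maxiter": 5000})
    aic = 2 * n_par + 2 * res.fun
    bic = n_par * np.log(n_subj) + 2 * res.fun
    print(f"{structure:18s}  AIC={aic:8.1f}  BIC={bic:8.1f}")
```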
Post-closure biosphere assessment modelling: comparison of complex and more stylised approaches.
Walke, Russell C; Kirchner, Gerald; Xu, Shulan; Dverstorp, Björn
2015-10-01
Geological disposal facilities are the preferred option for high-level radioactive waste, due to their potential to provide isolation from the surface environment (biosphere) on very long timescales. Assessments need to strike a balance between stylised models and more complex approaches that draw more extensively on site-specific information. This paper explores the relative merits of complex versus more stylised biosphere models in the context of a site-specific assessment. The more complex biosphere modelling approach was developed by the Swedish Nuclear Fuel and Waste Management Co (SKB) for the Forsmark candidate site for a spent nuclear fuel repository in Sweden. SKB's approach is built on a landscape development model, whereby radionuclide releases to distinct hydrological basins/sub-catchments (termed 'objects') are represented as they evolve through land rise and climate change. Each of seventeen of these objects is represented with more than 80 site-specific parameters, with about 22 that are time-dependent and result in over 5000 input values per object. The more stylised biosphere models developed for this study represent releases to individual ecosystems without environmental change and include the most plausible transport processes. In the context of regulatory review of the landscape modelling approach adopted in the SR-Site assessment in Sweden, the more stylised representation has helped to build understanding of the more complex modelling approaches by providing bounding results, checking the reasonableness of the more complex modelling, highlighting uncertainties introduced through conceptual assumptions and helping to quantify the conservatisms involved. The more stylised biosphere models are also shown capable of reproducing the results of more complex approaches. A major recommendation is that biosphere assessments need to justify the degree of complexity in modelling approaches as well as simplifying and conservative assumptions. In light of
Modeling diffuse pollution with a distributed approach.
León, L F; Soulis, E D; Kouwen, N; Farquhar, G J
2002-01-01
The transferability of parameters for non-point source pollution models to other watersheds, especially those in remote areas without enough data for calibration, is a major problem in diffuse pollution modeling. A water quality component was developed for WATFLOOD (a flood forecast hydrological model) to deal with sediment and nutrient transport. The model uses a distributed group response unit approach for water quantity and quality modeling. Runoff, sediment yield and soluble nutrient concentrations are calculated separately for each land cover class, weighted by area and then routed downstream. The distributed approach for the water quality model for diffuse pollution in agricultural watersheds is described in this paper. After integrating the model with data extracted using GIS (Geographical Information Systems) technology for a local watershed, the model is calibrated for the hydrologic response and validated for the water quality component. With the connection to GIS and the group response unit approach used in this paper, model portability increases substantially, which will improve non-point source modeling at the watershed scale.
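The group response unit idea, computing the response per land-cover class, weighting by area and then routing downstream, can be sketched in a few lines. The land-cover fractions, runoff depths and concentrations below are illustrative placeholders, not WATFLOOD values.

```python
# Area-weighted "group response unit" aggregation for one grid cell (illustrative numbers).
land_cover = {
    # class: (area fraction, runoff depth [mm], nutrient concentration [mg/L])
    "cropland": (0.45, 12.0, 3.2),
    "pasture":  (0.25,  9.0, 1.1),
    "forest":   (0.20,  6.0, 0.3),
    "urban":    (0.10, 20.0, 2.0),
}

cell_area_km2 = 25.0

runoff_volume = 0.0   # m^3
nutrient_load = 0.0   # kg
for name, (frac, runoff_mm, conc_mg_l) in land_cover.items():
    area_m2 = frac * cell_area_km2 * 1e6
    volume = area_m2 * runoff_mm / 1000.0           # mm over the class area -> m^3
    runoff_volume += volume
    nutrient_load += volume * conc_mg_l / 1000.0    # m^3 * mg/L -> g, then /1000 -> kg

print(f"cell runoff: {runoff_volume:,.0f} m^3, nutrient load routed downstream: {nutrient_load:,.1f} kg")
```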
MODULAR APPROACH WITH ROUGH DECISION MODELS
Directory of Open Access Journals (Sweden)
Ahmed T. Shawky
2012-09-01
Full Text Available Decision models which adopt rough set theory have been used effectively in many real world applications. However, rough decision models suffer the high computational complexity when dealing with datasets of huge size. In this research we propose a new rough decision model that allows making decisions based on modularity mechanism. According to the proposed approach, large-size datasets can be divided into arbitrary moderate-size datasets, then a group of rough decision models can be built as separate decision modules. The overall model decision is computed as the consensus decision of all decision modules through some aggregation technique. This approach provides a flexible and a quick way for extracting decision rules of large size information tables using rough decision models.
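The split-build-aggregate flow described above can be sketched independently of the rough-set machinery. The sketch below uses a trivial majority-class rule table as the per-module "decision model" and simple majority voting as the consensus rule, purely to show the modular mechanism; it is not the authors' rough decision model.

```python
import numpy as np
from collections import Counter

rng = np.random.default_rng(2)

# A large "information table": three condition attributes and one binary decision attribute.
X = rng.integers(0, 4, size=(30_000, 3))
y = (X.sum(axis=1) + rng.integers(0, 2, size=30_000) > 5).astype(int)

def build_module(Xm, ym):
    """Stand-in decision module: a rule table mapping attribute tuples to the majority decision."""
    rules = {}
    for row, d in zip(map(tuple, Xm), ym):
        rules.setdefault(row, []).append(d)
    return {row: Counter(ds).most_common(1)[0][0] for row, ds in rules.items()}

def consensus_predict(modules, row):
    """Aggregate the module decisions by simple majority vote (one possible consensus rule)."""
    votes = [m[tuple(row)] for m in modules if tuple(row) in m]
    return Counter(votes).most_common(1)[0][0] if votes else None

# Divide the large table into moderate-size modules and build one decision model per module.
modules = [build_module(Xm, ym) for Xm, ym in
           zip(np.array_split(X, 10), np.array_split(y, 10))]

sample = X[0]
print("consensus decision for", sample, "->", consensus_predict(modules, sample))
```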
Numerical modeling of axi-symmetrical cold forging process by "Pseudo Inverse Approach"
Halouani, A.; Li, Y. M.; Abbes, B.; Guo, Y. Q.
2011-05-01
The incremental approach is widely used for forging process modeling; it gives good strain and stress estimation, but it is time consuming. A fast Inverse Approach (IA) has been developed for axi-symmetric cold forging modeling [1-2]. This approach makes maximum use of the knowledge of the final part's shape, and the assumptions of proportional loading and simplified tool actions make the IA simulation very fast. The IA has proved very useful for tool design and optimization because of its rapidity and good strain estimation. However, the assumptions mentioned above cannot provide good stress estimation because they neglect the loading history. A new approach called "Pseudo Inverse Approach" (PIA) was proposed by Batoz, Guo et al. [3] for sheet forming modeling, which keeps the IA's advantages but gives good stress estimation by taking the loading history into consideration. Our aim in this paper is to adapt the PIA to cold forging modeling. The main developments in PIA are summarized as follows: a few intermediate configurations are generated for the given tool positions to account for the deformation history; the strain increment is calculated by the inverse method between the previous and current configurations; and an incremental algorithm of plastic integration is used in PIA instead of the total constitutive law used in the IA. An example is used to show the effectiveness and limitations of the PIA for cold forging process modeling.
Modeling approach suitable for energy system
Energy Technology Data Exchange (ETDEWEB)
Goetschel, D. V.
1979-01-01
Recently increased attention has been placed on optimization problems related to the determination and analysis of operating strategies for energy systems. Presented in this paper is a nonlinear model that can be used in the formulation of certain energy-conversion systems-modeling problems. The model lends itself nicely to solution approaches based on nonlinear-programming algorithms and, in particular, to those methods falling into the class of variable metric algorithms for nonlinearly constrained optimization.
Stormwater infiltration trenches: a conceptual modelling approach.
Freni, Gabriele; Mannina, Giorgio; Viviani, Gaspare
2009-01-01
In recent years, limitations linked to traditional urban drainage schemes have been pointed out and new approaches are being developed that introduce more natural methods for retaining and/or disposing of stormwater. These mitigation measures are generally called Best Management Practices or Sustainable Urban Drainage Systems and they include practices such as infiltration and storage tanks in order to reduce the peak flow and retain part of the polluting components. The introduction of such practices in urban drainage systems entails an upgrade of existing modelling frameworks in order to evaluate their efficiency in mitigating the impact of urban drainage systems on receiving water bodies. While storage tank modelling approaches are quite well documented in the literature, gaps remain for infiltration facilities, mainly because of the complexity of the physical processes involved. In this study, a simplified conceptual modelling approach for the simulation of infiltration trenches is presented. The model enables assessment of the performance of infiltration trenches. The main goal is to develop a model that can be employed for the assessment of the mitigation efficiency of infiltration trenches in an integrated urban drainage context. Particular care was given to the simulation of infiltration structures considering the performance reduction due to clogging phenomena. The proposed model has been compared with other simplified modelling approaches and with a physically based model adopted as a benchmark. The model performed better than the other approaches, considering both unclogged facilities and the effect of clogging. On the basis of a long-term simulation of six years of rain data, the performance and the effectiveness of an infiltration trench measure are assessed. The study confirmed the important role played by the clogging phenomenon in such infiltration structures.
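A conceptual infiltration-trench model of the kind discussed, a storage reservoir whose infiltration capacity decays as clogging progresses, can be written as a simple mass balance. The decay law, parameter values and synthetic inflow series below are illustrative assumptions, not the calibrated model of the study.

```python
import numpy as np

# Conceptual trench: storage S [m^3] fed by runoff, emptied by infiltration that clogs over time.
dt_h = 1.0                      # time step [h]
capacity = 50.0                 # trench storage capacity [m^3]
k_inf0 = 4.0                    # initial infiltration capacity [m^3/h]
clogging_rate = 1e-4            # capacity loss per m^3 of cumulative infiltrated volume (assumed)

rng = np.random.default_rng(3)
# Synthetic hourly inflow for one year: 5% wet hours with exponentially distributed volumes.
inflow = np.where(rng.random(24 * 365) < 0.05, rng.exponential(6.0, 24 * 365), 0.0)

S, cum_infiltrated, overflow = 0.0, 0.0, 0.0
for q_in in inflow:
    k_inf = k_inf0 * max(0.0, 1.0 - clogging_rate * cum_infiltrated)  # clogging reduces capacity
    infiltration = min(S + q_in * dt_h, k_inf * dt_h)
    S = S + q_in * dt_h - infiltration
    if S > capacity:            # excess bypasses the trench (impact on the receiving water body)
        overflow += S - capacity
        S = capacity
    cum_infiltrated += infiltration

total_in = inflow.sum() * dt_h
print(f"volumetric efficiency: {(1 - overflow / total_in):.2%}, "
      f"residual infiltration capacity: {k_inf / k_inf0:.2%}")
```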
Challenges in structural approaches to cell modeling.
Im, Wonpil; Liang, Jie; Olson, Arthur; Zhou, Huan-Xiang; Vajda, Sandor; Vakser, Ilya A
2016-07-31
Computational modeling is essential for structural characterization of biomolecular mechanisms across the broad spectrum of scales. Adequate understanding of biomolecular mechanisms inherently involves our ability to model them. Structural modeling of individual biomolecules and their interactions has been rapidly progressing. However, in terms of the broader picture, the focus is shifting toward larger systems, up to the level of a cell. Such modeling involves a more dynamic and realistic representation of the interactomes in vivo, in a crowded cellular environment, as well as membranes and membrane proteins, and other cellular components. Structural modeling of a cell complements computational approaches to cellular mechanisms based on differential equations, graph models, and other techniques to model biological networks, imaging data, etc. Structural modeling along with other computational and experimental approaches will provide a fundamental understanding of life at the molecular level and lead to important applications to biology and medicine. A cross section of diverse approaches presented in this review illustrates the developing shift from the structural modeling of individual molecules to that of cell biology. Studies in several related areas are covered: biological networks; automated construction of three-dimensional cell models using experimental data; modeling of protein complexes; prediction of non-specific and transient protein interactions; thermodynamic and kinetic effects of crowding; cellular membrane modeling; and modeling of chromosomes. The review presents an expert opinion on the current state-of-the-art in these various aspects of structural modeling in cellular biology, and the prospects of future developments in this emerging field. Copyright © 2016 Elsevier Ltd. All rights reserved.
Finch, Holmes; Edwards, Julianne M.
2016-01-01
Standard approaches for estimating item response theory (IRT) model parameters generally work under the assumption that the latent trait being measured by a set of items follows the normal distribution. Estimation of IRT parameters in the presence of nonnormal latent traits has been shown to generate biased person and item parameter estimates. A…
39 Questionable Assumptions in Modern Physics
Volk, Greg
2009-03-01
The growing body of anomalies in new energy, low energy nuclear reactions, astrophysics, atomic physics, and entanglement, combined with the failure of the Standard Model and string theory to predict many of the most basic fundamental phenomena, all point to a need for major new paradigms. Not Band-Aids, but revolutionary new ways of conceptualizing physics, in the spirit of Thomas Kuhn's The Structure of Scientific Revolutions. This paper identifies a number of long-held, but unproven assumptions currently being challenged by an increasing number of alternative scientists. Two common themes, both with venerable histories, keep recurring in the many alternative theories being proposed: (1) Mach's Principle, and (2) toroidal, vortex particles. Matter-based Mach's Principle differs from both space-based universal frames and observer-based Einsteinian relativity. Toroidal particles, in addition to explaining electron spin and the fundamental constants, satisfy the basic requirement of Gauss's misunderstood B Law, that motion itself circulates. Though a comprehensive theory is beyond the scope of this paper, it will suggest alternatives to the long list of assumptions in context.
An integrated biomechanical modeling approach to the ergonomic evaluation of drywall installation.
Yuan, Lu; Buchholz, Bryan; Punnett, Laura; Kriebel, David
2016-03-01
Three different methodologies: work sampling, computer simulation and biomechanical modeling, were integrated to study the physical demands of drywall installation. PATH (Posture, Activity, Tools, and Handling), a work-sampling based method, was used to quantify the percent of time that the drywall installers were conducting different activities with different body segment (trunk, arm, and leg) postures. Utilizing Monte-Carlo simulation to convert the categorical PATH data into continuous variables as inputs for the biomechanical models, the required muscle contraction forces and joint reaction forces at the low back (L4/L5) and shoulder (glenohumeral and sternoclavicular joints) were estimated for a typical eight-hour workday. To demonstrate the robustness of this modeling approach, a sensitivity analysis was conducted to examine the impact of some quantitative assumptions that have been made to facilitate the modeling approach. The results indicated that the modeling approach seemed to be the most sensitive to both the distribution of work cycles for a typical eight-hour workday and the distribution and values of Euler angles that are used to determine the "shoulder rhythm." Other assumptions including the distribution of trunk postures did not appear to have a significant impact on the model outputs. It was concluded that the integrated approach might provide an applicable examination of physical loads during the non-routine construction work, especially for those operations/tasks that have certain patterns/sequences for the workers to follow.
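The conversion from categorical work-sampling data to continuous model inputs can be illustrated with a small Monte-Carlo sketch. The posture categories, their observed proportions, the angle ranges and the toy moment relationship below are made-up placeholders; only the sampling logic reflects the approach described.

```python
import numpy as np

rng = np.random.default_rng(4)

# Hypothetical PATH-style result: fraction of observations per trunk-posture category,
# together with an assumed continuous flexion-angle range for each category [degrees].
trunk_categories = {
    "neutral":        (0.55, (0.0, 20.0)),
    "mild flexion":   (0.30, (20.0, 45.0)),
    "severe flexion": (0.15, (45.0, 90.0)),
}

def sample_trunk_angles(n):
    """Sample continuous trunk flexion angles consistent with the categorical proportions."""
    names = list(trunk_categories)
    probs = [trunk_categories[k][0] for k in names]
    picks = rng.choice(len(names), size=n, p=probs)
    lo = np.array([trunk_categories[names[i]][1][0] for i in picks])
    hi = np.array([trunk_categories[names[i]][1][1] for i in picks])
    return rng.uniform(lo, hi)                       # uniform within each category (an assumption)

angles = sample_trunk_angles(8 * 60 * 60)            # one sample per second of an 8-hour day
# A biomechanical model would map each angle to an L4/L5 moment; here a toy proportional stand-in:
l4l5_moment = 2.5 * angles                           # [Nm], placeholder relationship
print(f"mean trunk angle: {angles.mean():.1f} deg, "
      f"95th-percentile L4/L5 moment: {np.percentile(l4l5_moment, 95):.0f} Nm")
```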
Building Water Models, A Different Approach
Izadi, Saeed; Onufriev, Alexey V
2014-01-01
Simplified, classical models of water are an integral part of atomistic molecular simulations, especially in biology and chemistry where hydration effects are critical. Yet, despite several decades of effort, these models are still far from perfect. Presented here is an alternative approach to constructing point charge water models - currently, the most commonly used type. In contrast to the conventional approach, we do not impose any geometry constraints on the model other than symmetry. Instead, we optimize the distribution of point charges to best describe the "electrostatics" of the water molecule, which is key to many unusual properties of liquid water. The search for the optimal charge distribution is performed in 2D parameter space of key lowest multipole moments of the model, to find best fit to a small set of bulk water properties at room temperature. A virtually exhaustive search is enabled via analytical equations that relate the charge distribution to the multipole moments. The resulting "optimal"...
Stream of consciousness: Quantum and biochemical assumptions regarding psychopathology.
Tonello, Lucio; Cocchi, Massimo; Gabrielli, Fabio; Tuszynski, Jack A
2017-04-01
The accepted paradigms of mainstream neuropsychiatry appear to be incompletely adequate and in various cases offer equivocal analyses. However, a growing number of new approaches are being proposed that suggest the emergence of paradigm shifts in this area. In particular, quantum theories of mind, brain and consciousness seem to offer a profound change to the current approaches. Unfortunately these quantum paradigms harbor at least two serious problems. First, they are simply models, theories, and assumptions, with no convincing experiments supporting their claims. Second, they deviate from contemporary mainstream views of psychiatric illness and do so in revolutionary ways. We suggest a possible way to integrate experimental neuroscience with quantum models in order to address outstanding issues in psychopathology. A key role is played by the phenomenon called the "stream of consciousness", which can be linked to the so-called "Gamma Synchrony" (GS), which is clearly demonstrated by EEG data. In our novel proposal, a unipolar depressed patient could be seen as a subject with an altered stream of consciousness. In particular, some clues suggest that depression is linked to an "increased power" stream of consciousness. It is additionally suggested that such an approach to depression might be extended to psychopathology in general with potential benefits to diagnostics and therapeutics in neuropsychiatry. Copyright © 2017 Elsevier Ltd. All rights reserved.
Towards new approaches in phenological modelling
Chmielewski, Frank-M.; Götz, Klaus-P.; Rawel, Harshard M.; Homann, Thomas
2014-05-01
Modelling of phenological stages has been based on temperature sums for many decades, describing both the chilling and the forcing requirement of woody plants until the beginning of leafing or flowering. Parts of this approach go back to Reaumur (1735), who originally proposed the concept of growing degree-days. Now, there is a growing body of opinion that asks for new methods in phenological modelling and more in-depth studies on dormancy release of woody plants. This requirement is easily understandable if we consider the wide application of phenological models, which can even affect the results of climate models. To this day, a number of parameters in phenological models still need to be optimised against observations, although some basic physiological knowledge of the chilling and forcing requirement of plants is already considered in these approaches (semi-mechanistic models). The limiting factor for a fundamental improvement of these models is the lack of knowledge about the course of dormancy in woody plants, which cannot be directly observed and which is also insufficiently described in the literature. Modern metabolomic methods provide a solution for this problem and allow both the validation of currently used phenological models and the development of mechanistic approaches. In order to develop such models, changes of metabolites (concentration, temporal course) must be set in relation to the variability of environmental (steering) parameters (weather, day length, etc.). This necessarily requires multi-year (3-5 yr.) and high-resolution (weekly probes between autumn and spring) data. The feasibility of this approach has already been tested in a 3-year pilot-study on sweet cherries. Our suggested methodology is not limited to the flowering of fruit trees; it can also be applied to tree species of the natural vegetation, where even greater deficits in phenological modelling exist.
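As a reference point for the degree-day tradition the authors want to move beyond, a minimal two-phase chilling/forcing model can be written in a few lines. The temperature series, thresholds and requirements below are illustrative assumptions, not calibrated values.

```python
import numpy as np

rng = np.random.default_rng(5)
# Synthetic daily mean temperatures from 1 September onward (303 days).
days = np.arange(303)
temps = 10.0 - 12.0 * np.cos(2 * np.pi * (days - 120) / 365.0) + rng.normal(0, 3, days.size)

T_chill = 5.0        # temperatures below this accumulate chilling (assumed threshold)
T_base = 5.0         # base temperature for forcing, i.e. growing degree-days (assumed)
chill_req = 60.0     # chill days required to release dormancy (assumed)
forcing_req = 150.0  # growing degree-days required after dormancy release (assumed)

chill = forcing = 0.0
dormancy_released_on = flowering_on = None
for d, T in zip(days, temps):
    if dormancy_released_on is None:
        chill += 1.0 if T < T_chill else 0.0          # simple chill-day counting
        if chill >= chill_req:
            dormancy_released_on = d
    else:
        forcing += max(0.0, T - T_base)               # Reaumur-style degree-day sum
        if forcing >= forcing_req:
            flowering_on = d
            break

print(f"dormancy released on day {dormancy_released_on}, predicted flowering on day {flowering_on}")
```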
Likelihood approaches for proportional likelihood ratio model with right-censored data.
Zhu, Hong
2014-06-30
Regression methods for survival data with right censoring have been extensively studied under semiparametric transformation models such as the Cox regression model and the proportional odds model. However, their practical application could be limited because of possible violation of model assumption or lack of ready interpretation for the regression coefficients in some cases. As an alternative, in this paper, the proportional likelihood ratio model introduced by Luo and Tsai is extended to flexibly model the relationship between survival outcome and covariates. This model has a natural connection with many important semiparametric models such as generalized linear model and density ratio model and is closely related to biased sampling problems. Compared with the semiparametric transformation model, the proportional likelihood ratio model is appealing and practical in many ways because of its model flexibility and quite direct clinical interpretation. We present two likelihood approaches for the estimation and inference on the target regression parameters under independent and dependent censoring assumptions. Based on a conditional likelihood approach using uncensored failure times, a numerically simple estimation procedure is developed by maximizing a pairwise pseudo-likelihood. We also develop a full likelihood approach, and the most efficient maximum likelihood estimator is obtained by a profile likelihood. Simulation studies are conducted to assess the finite-sample properties of the proposed estimators and compare the efficiency of the two likelihood approaches. An application to survival data for bone marrow transplantation patients of acute leukemia is provided to illustrate the proposed method and other approaches for handling non-proportionality. The relative merits of these methods are discussed in concluding remarks.
Beyond the crystal ball assumption
DEFF Research Database (Denmark)
Vaucouleur, Sebastien
2008-01-01
trades control for flexibility. Unfortunately, it also makes the customized software product very sensitive to upgrades. We propose a more mitigated solution that does not require accurate anticipation and yet offers some resilience to evolution of the base software product through the use of code quantification. We introduce the Eggther framework for customization of evolvable software products in general and ERP systems in particular. Our approach is based on the concept of code query by example. The technology being developed is based on an initial empirical study on practices around ERP systems. We motivate our design choices based on those empirical results, and we show how the proposed solution helps with respect to the upgrade problem.
Directory of Open Access Journals (Sweden)
Merler Stefano
2010-06-01
Full Text Available Abstract Background In recent years large-scale computational models for the realistic simulation of epidemic outbreaks have been used with increased frequency. Methodologies adapt to the scale of interest and range from very detailed agent-based models to spatially-structured metapopulation models. One major issue thus concerns to what extent the geotemporal spreading pattern found by different modeling approaches may differ and depend on the different approximations and assumptions used. Methods We provide for the first time a side-by-side comparison of the results obtained with a stochastic agent-based model and a structured metapopulation stochastic model for the progression of a baseline pandemic event in Italy, a large and geographically heterogeneous European country. The agent-based model is based on the explicit representation of the Italian population through highly detailed data on the socio-demographic structure. The metapopulation simulations use the GLobal Epidemic and Mobility (GLEaM) model, based on high-resolution census data worldwide, and integrating airline travel flow data with short-range human mobility patterns at the global scale. The model also considers age structure data for Italy. GLEaM and the agent-based models are synchronized in their initial conditions by using the same disease parameterization, and by defining the same importation of infected cases from international travels. Results The results obtained show that both models provide epidemic patterns that are in very good agreement at the granularity levels accessible by both approaches, with differences in peak timing on the order of a few days. The relative difference of the epidemic size depends on the basic reproductive ratio, R0, and on the fact that the metapopulation model consistently yields a larger incidence than the agent-based model, as expected due to the differences in the structure in the intra-population contact pattern of the approaches. The age
Modelling Coagulation Systems: A Stochastic Approach
Ryazanov, V V
2011-01-01
A general stochastic approach to the description of coagulating aerosol system is developed. As the object of description one can consider arbitrary mesoscopic values (number of aerosol clusters, their size etc). The birth-and-death formalism for a number of clusters can be regarded as a partial case of the generalized storage model. An application of the storage model to the number of monomers in a cluster is discussed.
A Multiple Model Approach to Modeling Based on LPF Algorithm
Institute of Scientific and Technical Information of China (English)
None
2001-01-01
Input-output data fitting methods are often used for modeling nonlinear systems of unknown structure. Based on model-on-demand tactics, a multiple model approach to modeling nonlinear systems is presented. The basic idea is to find, from vast historical system input-output data sets, those data sets matching the current working point, and then to develop a local model using the Local Polynomial Fitting (LPF) algorithm. As the working point changes, multiple local models are built, which together realize accurate modeling of the global system. Compared with other methods, the simulation results show good performance: the approach is simple, effective and provides reliable estimation.
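The model-on-demand idea, selecting historical input-output data near the current working point and fitting a local polynomial, can be sketched with plain weighted least squares. This is a generic local-fit illustration under an assumed system and tricube weighting, not the specific LPF algorithm of the paper.

```python
import numpy as np

rng = np.random.default_rng(6)

# Historical input-output data from an unknown-structure nonlinear system (assumed here).
u_hist = rng.uniform(-3, 3, 2000)
y_hist = np.sin(u_hist) + 0.3 * u_hist**2 + rng.normal(0, 0.05, u_hist.size)

def local_model(u_query, k=50, degree=2):
    """Fit a weighted local polynomial to the k historical points nearest the working point."""
    idx = np.argsort(np.abs(u_hist - u_query))[:k]             # data matching the working point
    u, y = u_hist[idx], y_hist[idx]
    h = np.abs(u - u_query).max() + 1e-12
    w = np.sqrt((1 - (np.abs(u - u_query) / h) ** 3) ** 3)      # tricube weights (sqrt for lstsq)
    V = np.vander(u - u_query, degree + 1, increasing=True)     # local polynomial basis
    coef, *_ = np.linalg.lstsq(V * w[:, None], y * w, rcond=None)
    return coef[0]                                               # value of the local model at u_query

for u0 in (-2.0, 0.5, 2.5):
    print(f"u={u0:+.1f}: local model {local_model(u0):.3f}  vs  true {np.sin(u0) + 0.3*u0**2:.3f}")
```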
Towards a Multiscale Approach to Cybersecurity Modeling
Energy Technology Data Exchange (ETDEWEB)
Hogan, Emilie A.; Hui, Peter SY; Choudhury, Sutanay; Halappanavar, Mahantesh; Oler, Kiri J.; Joslyn, Cliff A.
2013-11-12
We propose a multiscale approach to modeling cyber networks, with the goal of capturing a view of the network and overall situational awareness with respect to a few key properties---connectivity, distance, and centrality---for a system under an active attack. We focus on theoretical and algorithmic foundations of multiscale graphs, coming from an algorithmic perspective, with the goal of modeling cyber system defense as a specific use case scenario. We first define a notion of multiscale graphs, in contrast with their well-studied single-scale counterparts. We develop multiscale analogs of paths and distance metrics. As a simple, motivating example of a common metric, we present a multiscale analog of the all-pairs shortest-path problem, along with a multiscale analog of a well-known algorithm which solves it. From a cyber defense perspective, this metric might be used to model the distance from an attacker's position in the network to a sensitive machine. In addition, we investigate probabilistic models of connectivity. These models exploit the hierarchy to quantify the likelihood that sensitive targets might be reachable from compromised nodes. We believe that our novel multiscale approach to modeling cyber-physical systems will advance several aspects of cyber defense, specifically allowing for a more efficient and agile approach to defending these systems.
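As background for the multiscale analog mentioned above, the single-scale all-pairs shortest-path computation it generalises can be stated compactly (Floyd-Warshall). The small adjacency matrix is a made-up example standing in for distances from a compromised node to a sensitive machine; it is not the multiscale algorithm itself.

```python
import numpy as np

INF = np.inf
# Hypothetical 5-node network: w[i, j] = hop cost from i to j (INF = no direct link).
w = np.array([
    [0,   1,   INF, INF, 4  ],
    [1,   0,   2,   INF, INF],
    [INF, 2,   0,   1,   INF],
    [INF, INF, 1,   0,   1  ],
    [4,   INF, INF, 1,   0  ],
], dtype=float)

def all_pairs_shortest_paths(w):
    """Floyd-Warshall: the single-scale metric that a multiscale analog would coarsen."""
    d = w.copy()
    n = d.shape[0]
    for k in range(n):                                   # allow paths through intermediate node k
        d = np.minimum(d, d[:, k:k + 1] + d[k:k + 1, :])
    return d

d = all_pairs_shortest_paths(w)
attacker, target = 0, 3
print(f"distance from compromised node {attacker} to sensitive node {target}: {d[attacker, target]}")
```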
Post-16 Biology--Some Model Approaches?
Lock, Roger
1997-01-01
Outlines alternative approaches to the teaching of difficult concepts in A-level biology which may help student learning by making abstract ideas more concrete and accessible. Examples include models, posters, and poems for illustrating meiosis, mitosis, genetic mutations, and protein synthesis. (DDR)
Decomposition approach to model smart suspension struts
Song, Xubin
2008-10-01
Modeling and simulation studies are the starting point for engineering design and development, especially for developing vehicle control systems. This paper presents a methodology to build models for the application of smart struts to vehicle suspension control development. The modeling approach is based on decomposition of the testing data. According to the strut functions, the data are dissected with respect to both control and physical variables. Then the data sets are characterized to represent different aspects of the strut working behaviors. Next, different mathematical equations can be built and optimized to best fit the corresponding data sets. In this way, model optimization is easier than in a traditional approach, which seeks a single globally optimal set of model parameters for a complicated nonlinear model from a whole series of testing data. Finally, two struts are introduced as examples for this modeling study: magneto-rheological (MR) dampers and compressible fluid (CF) based struts. The model validation shows that this methodology can truly capture the macro-behaviors of these struts.
On the Necessary and Sufficient Assumptions for UC Computation
DEFF Research Database (Denmark)
Damgård, Ivan Bjerre; Nielsen, Jesper Buus; Orlandi, Claudio
2010-01-01
We study the necessary and sufficient assumptions for universally composable (UC) computation, both in terms of setup and computational assumptions. We look at the common reference string model, the uniform random string model and the key-registration authority model (KRA), and provide new results...... for all of them. Perhaps most interestingly we show that: • For even the minimal meaningful KRA, where we only assume that the secret key is a value which is hard to compute from the public key, one can UC securely compute any poly-time functionality if there exists a passive secure oblivious-transfer protocol for the stand-alone model. Since a KRA where the secret keys can be computed from the public keys is useless, and some setup assumption is needed for UC secure computation, this establishes the best we could hope for the KRA model: any non-trivial KRA is sufficient for UC computation. • We show...
Peacebuilding: assumptions, practices and critiques
Directory of Open Access Journals (Sweden)
Cravo, Teresa Almeida
2017-05-01
Full Text Available Peacebuilding has become a guiding principle of international intervention in the periphery since its inclusion in the Agenda for Peace of the United Nations in 1992. The aim of creating the conditions for a self-sustaining peace in order to prevent a return to armed conflict is, however, far from easy or consensual. The conception of liberal peace proved particularly limited, and inevitably controversial, and the reality of war-torn societies far more complex than anticipated by international actors that today assume activities in the promotion of peace in post-conflict contexts. With a trajectory full of contested successes and some glaring failures, the current model has been the target of harsh criticism and widespread scepticism. This article critically examines the theoretical background and practicalities of peacebuilding, exploring its ambition as well as the weaknesses of the paradigm adopted by the international community since the 1990s.
Beyond GLMs: a generative mixture modeling approach to neural system identification.
Theis, Lucas; Chagas, Andrè Maia; Arnstein, Daniel; Schwarz, Cornelius; Bethge, Matthias
2013-01-01
Generalized linear models (GLMs) represent a popular choice for the probabilistic characterization of neural spike responses. While GLMs are attractive for their computational tractability, they also impose strong assumptions and thus only allow for a limited range of stimulus-response relationships to be discovered. Alternative approaches exist that make only very weak assumptions but scale poorly to high-dimensional stimulus spaces. Here we seek an approach which can gracefully interpolate between the two extremes. We extend two frequently used special cases of the GLM (a linear and a quadratic model) by assuming that the spike-triggered and non-spike-triggered distributions can be adequately represented using Gaussian mixtures. Because we derive the model from a generative perspective, its components are easy to interpret as they correspond to, for example, the spike-triggered distribution and the interspike interval distribution. The model is able to capture complex dependencies on high-dimensional stimuli with far fewer parameters than other approaches such as histogram-based methods. The added flexibility comes at the cost of a non-concave log-likelihood. We show that in practice this does not have to be an issue and the mixture-based model is able to outperform generalized linear and quadratic models.
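A minimal version of the generative idea, modelling the spike-triggered and non-spike-triggered stimulus distributions with Gaussian mixtures and converting them to a spike probability via Bayes' rule, can be written with scikit-learn. The simulated stimuli, ground-truth nonlinearity and mixture sizes here are arbitrary illustrative choices, not the authors' model.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(7)

# Synthetic data: 5-dimensional stimuli, spikes generated by an assumed nonlinear rule.
X = rng.normal(size=(20_000, 5))
p_true = 1.0 / (1.0 + np.exp(-(X[:, 0] + 0.5 * X[:, 1] ** 2 - 1.5)))
spikes = rng.random(X.shape[0]) < p_true

prior_spike = spikes.mean()
gmm_spike = GaussianMixture(n_components=3, random_state=0).fit(X[spikes])      # spike-triggered distribution
gmm_nospike = GaussianMixture(n_components=3, random_state=0).fit(X[~spikes])   # non-spike-triggered distribution

def p_spike_given_stimulus(X_new):
    """Bayes' rule on the two mixture densities gives the stimulus-dependent spike probability."""
    log_ps = gmm_spike.score_samples(X_new) + np.log(prior_spike)
    log_pn = gmm_nospike.score_samples(X_new) + np.log(1.0 - prior_spike)
    return 1.0 / (1.0 + np.exp(log_pn - log_ps))

X_test = rng.normal(size=(5, 5))
print(np.round(p_spike_given_stimulus(X_test), 3))
```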
Heat transfer modeling an inductive approach
Sidebotham, George
2015-01-01
This innovative text emphasizes a "less-is-more" approach to modeling complicated systems such as heat transfer by treating them first as "1-node lumped models" that yield simple closed-form solutions. The author develops numerical techniques for students to obtain more detail, but also trains them to use the techniques only when simpler approaches fail. Covering all essential methods offered in traditional texts, but with a different order, Professor Sidebotham stresses inductive thinking and problem solving as well as a constructive understanding of modern, computer-based practice. Readers learn to develop their own code in the context of the material, rather than just how to use packaged software, offering a deeper, intrinsic grasp behind models of heat transfer. Developed from over twenty-five years of lecture notes to teach students of mechanical and chemical engineering at The Cooper Union for the Advancement of Science and Art, the book is ideal for students and practitioners across engineering discipl...
Distributed automata in an assumption-commitment framework
Indian Academy of Sciences (India)
Swarup Mohalik; R Ramanujam
2002-04-01
We propose a class of finite state systems of synchronizing distributed processes, where processes make assumptions at local states about the state of other processes in the system. This constrains the global states of the system to those where assumptions made by a process about another are compatible with the commitments offered by the other at that state. We model examples like reliable bit transmission and sequence transmission protocols in this framework and discuss how assumption-commitment structure facilitates compositional design of such protocols. We prove a decomposition theorem which states that every protocol specified globally as a finite state system can be decomposed into such an assumption compatible system. We also present a syntactic characterization of this class using top level parallel composition.
Identifying an appropriate measurement modeling approach for the Mini-Mental State Examination.
Rubright, Jonathan D; Nandakumar, Ratna; Karlawish, Jason
2016-02-01
The Mini-Mental State Examination (MMSE) is a 30-item, dichotomously scored test of general cognition. A number of benefits could be gained by modeling the MMSE in an item response theory (IRT) framework, as opposed to the currently used classical additive approach. However, the test, which is built from groups of items related to separate cognitive subdomains, may violate a key assumption of IRT: local item independence. This study aimed to identify the most appropriate measurement model for the MMSE: a unidimensional IRT model, a testlet response theory model, or a bifactor model. Local dependence analysis using nationally representative data showed a meaningful violation of the local item independence assumption, indicating multidimensionality. In addition, the testlet and bifactor models displayed superior fit indices over a unidimensional IRT model. Statistical comparisons showed that the bifactor model fit MMSE respondent data significantly better than the other models considered. These results suggest that application of a traditional unidimensional IRT model is inappropriate in this context. Instead, a bifactor model is suggested for future modeling of MMSE data as it more accurately represents the multidimensional nature of the scale. (PsycINFO Database Record
Fitts, Charles R.
1996-06-01
concentration at any given time results in a standard deviation of ∼0.12 in the statistic log(ca(max)/cm(max)) for both tests. Although the uncertainties listed above pertain to the scales of un-modeled velocity variation in these models at these sites, the reported uncertainties could serve as lower bound estimates for most deterministic model applications. Uncertainty due to the assumption of macrodispersive mixing tends to increase as the plume scale decreases or as the scale of un-modeled velocity field variations increases.
The Self in Guidance: Assumptions and Challenges.
Edwards, Richard; Payne, John
1997-01-01
Examines the assumptions of "self" made in the professional and managerial discourses of guidance. Suggests that these assumptions obstruct the capacity of guidance workers to explain their own practices. Drawing on contemporary debates over identity, modernity, and postmodernity, argues for a more explicit debate about the self in guidance. (RJM)
Challenging Teachers' Pedagogic Practice and Assumptions about Social Media
Cartner, Helen C.; Hallas, Julia L.
2017-01-01
This article describes an innovative approach to professional development designed to challenge teachers' pedagogic practice and assumptions about educational technologies such as social media. Developing effective technology-related professional development for teachers can be a challenge for institutions and facilitators who provide this…
An extended SMLD approach for presumed probability density function in flamelet combustion model
Coclite, Alessandro; De Palma, Pietro; Cutrone, Luigi
2013-01-01
This paper provides an extension of the standard flamelet progress variable (FPV) approach for turbulent combustion, applying the statistically most likely distribution (SMLD) framework to the joint PDF of the mixture fraction, Z, and the progress variable, C. In this way one does not need to make any assumption about the statistical correlation between Z and C and about the behaviour of the mixture fraction, as required in previous FPV models. In fact, for state-of-the-art models, with the assumption of very-fast-chemistry, Z is widely accepted to behave as a passive scalar characterized by a $\beta$-distribution function. Instead, the model proposed here evaluates the most probable joint distribution of Z and C without any assumption on their behaviour and provides an effective tool to verify the adequateness of widely used hypotheses, such as their statistical independence. The model is validated versus three well-known test cases, namely, the Sandia flames. The results are compared with those obtained by ...
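For comparison with the SMLD extension, the standard presumed-PDF ingredient it generalises, a β-distribution for the mixture fraction parameterised by its first two moments, is easy to write down. The moments, the stoichiometric value and the flamelet quantity below are illustrative placeholders.

```python
import numpy as np
from scipy.stats import beta
from scipy.integrate import quad

def presumed_beta_pdf(z_mean, z_var):
    """Standard presumed beta-PDF of the mixture fraction Z from its mean and variance."""
    gamma = z_mean * (1.0 - z_mean) / z_var - 1.0      # valid only if z_var < z_mean*(1 - z_mean)
    return beta(z_mean * gamma, (1.0 - z_mean) * gamma)

# Illustrative flamelet quantity phi(Z): peaks near an assumed stoichiometric mixture fraction.
z_st = 0.3
phi = lambda z: np.exp(-((z - z_st) / 0.08) ** 2)

pdf = presumed_beta_pdf(z_mean=0.25, z_var=0.02)
phi_mean, _ = quad(lambda z: phi(z) * pdf.pdf(z), 0.0, 1.0)    # presumed-PDF average of phi
print(f"beta parameters a={pdf.args[0]:.2f}, b={pdf.args[1]:.2f}, averaged phi = {phi_mean:.3f}")
```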
DEFF Research Database (Denmark)
De Sanctis, G.; Fischer, K.; Kohler, J.
2014-01-01
Fire risk models support decision making for engineering problems under the consistent consideration of the associated uncertainties. Empirical approaches can be used for cost-benefit studies when enough data about the decision problem are available. But often the empirical approaches are not detailed enough. Engineering risk models, on the other hand, may be detailed but typically involve assumptions that may result in a biased risk assessment and make a cost-benefit study problematic. In two related papers it is shown how engineering and data-driven modeling can be combined by developing a generic risk model that is calibrated to observed fire loss data. Generic risk models assess the risk of buildings based on specific risk indicators and support risk assessment at a portfolio level. After an introduction to the principles of generic risk assessment, the focus of the present paper...
On assumption in low-altitude investigation of dayside magnetospheric phenomena
Koskinen, H. E. J.
In the physics of large-scale phenomena in complicated media, such as space plasmas, the chain of reasoning from the fundamental physics to conceptual models is a long and winding road, requiring much physical insight and reliance on various assumptions and approximations. The low-altitude investigation of dayside phenomena provides numerous examples of problems arising from the necessity to make strong assumptions. In this paper we discuss some important assumptions that are either unavoidable or at least widely used. Two examples are the concepts of frozen-in field lines and convection velocity. Instead of asking what violates the frozen-in condition, it is quite legitimate to ask what freezes the plasma and the magnetic field in the first place. Another important complex of problems are the limitations introduced by a two-dimensional approach or linearization of equations. Although modern research is more and more moving toward three-dimensional and time-dependent models, limitations in computing power often make a two-dimensional approach tempting. In a similar way, linearization makes equations analytically tractable. Finally, a very central question is the mapping. In the first approximation, the entire dayside magnetopause maps down to the ionosphere through the dayside cusp region. From the mapping viewpoint, the cusp is one of the most difficult regions and assumptions needed to perform the mapping in practice must be considered with the greatest possible care. We can never avoid assumptions but we must always make them clear to ourselves and also to the readers of our papers.
Multiscale Model Approach for Magnetization Dynamics Simulations
De Lucia, Andrea; Tretiakov, Oleg A; Kläui, Mathias
2016-01-01
Simulations of magnetization dynamics in a multiscale environment enable rapid evaluation of the Landau-Lifshitz-Gilbert equation in a mesoscopic sample with nanoscopic accuracy in areas where such accuracy is required. We have developed a multiscale magnetization dynamics simulation approach that can be applied to large systems with spin structures that vary locally on small length scales. To implement this, the conventional micromagnetic simulation framework has been expanded to include a multiscale solving routine. The software selectively simulates different regions of a ferromagnetic sample according to the spin structures located within, in order to employ a suitable discretization and use either a micromagnetic or an atomistic model. To demonstrate the validity of the multiscale approach, we simulate the spin wave transmission across the regions simulated with the two different models and different discretizations. We find that the interface between the regions is fully transparent for spin waves with f...
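The equation being solved at both scales is the Landau-Lifshitz-Gilbert equation; a single-macrospin integration shows its structure. The effective field, damping constant and step size below are arbitrary illustrative values, and this is not the multiscale scheme itself.

```python
import numpy as np

gamma = 1.76e11      # gyromagnetic ratio [rad/(s*T)]
alpha = 0.1          # Gilbert damping (assumed)
H = np.array([0.0, 0.0, 0.1])        # effective field [T], here just a static Zeeman term
m = np.array([1.0, 0.0, 0.0])        # unit magnetization, initially along x

def llg_rhs(m, H):
    """Landau-Lifshitz-Gilbert right-hand side (Landau-Lifshitz form)."""
    pre = -gamma / (1.0 + alpha ** 2)
    return pre * (np.cross(m, H) + alpha * np.cross(m, np.cross(m, H)))

dt = 1e-13           # time step [s]
for step in range(50_000):
    # Midpoint (RK2) step, then renormalize to preserve |m| = 1.
    k1 = llg_rhs(m, H)
    k2 = llg_rhs(m + 0.5 * dt * k1, H)
    m = m + dt * k2
    m /= np.linalg.norm(m)

print("magnetization after 5 ns:", np.round(m, 3))   # precesses and damps toward the field (+z)
```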
Continuum modeling an approach through practical examples
Muntean, Adrian
2015-01-01
This book develops continuum modeling skills and approaches the topic from three sides: (1) derivation of global integral laws together with the associated local differential equations, (2) design of constitutive laws and (3) modeling boundary processes. The focus of this presentation lies on many practical examples covering aspects such as coupled flow, diffusion and reaction in porous media or microwave heating of a pizza, as well as traffic issues in bacterial colonies and energy harvesting from geothermal wells. The target audience comprises primarily graduate students in pure and applied mathematics as well as working practitioners in engineering who are faced by nonstandard rheological topics like those typically arising in the food industry.
A Multivariate Approach to Functional Neuro Modeling
DEFF Research Database (Denmark)
Mørch, Niels J.S.
1998-01-01
This Ph.D. thesis, A Multivariate Approach to Functional Neuro Modeling, deals with the analysis and modeling of data from functional neuro imaging experiments. A multivariate dataset description is provided which facilitates efficient representation of typical datasets and, more importantly...... and overall conditions governing the functional experiment, via associated micro- and macroscopic variables. The description facilitates an efficient microscopic re-representation, as well as a handle on the link between brain and behavior; the latter is achieved by hypothesizing variations in the micro...... a generalization theoretical framework centered around measures of model generalization error. - Only few, if any, examples of the application of generalization theory to functional neuro modeling currently exist in the literature. - Exemplification of the proposed generalization theoretical framework...
Interfacial Fluid Mechanics A Mathematical Modeling Approach
Ajaev, Vladimir S
2012-01-01
Interfacial Fluid Mechanics: A Mathematical Modeling Approach provides an introduction to mathematical models of viscous flow used in rapidly developing fields of microfluidics and microscale heat transfer. The basic physical effects are first introduced in the context of simple configurations and their relative importance in typical microscale applications is discussed. Then, several configurations of importance to microfluidics, most notably thin films/droplets on substrates and confined bubbles, are discussed in detail. Topics from current research on electrokinetic phenomena, liquid flow near structured solid surfaces, evaporation/condensation, and surfactant phenomena are discussed in the later chapters. This book also: Discusses mathematical models in the context of actual applications such as electrowetting Includes unique material on fluid flow near structured surfaces and phase change phenomena Shows readers how to solve modeling problems related to microscale multiphase flows Interfacial Fluid Me...
Systematic approach to MIS model creation
Directory of Open Access Journals (Sweden)
Macura Perica
2004-01-01
Full Text Available In this paper, by applying the basic principles of the general theory of systems (the systematic approach), we formulate a model of a marketing information system. The basis for the research was the main characteristics of the systematic approach and of the marketing system. The informational base for the management of the marketing system, i.e. the marketing instruments, is presented in such a way that the most important information for decision making is listed per individual marketing mix instrument. In the projected model of the marketing information system, information listed in this way creates the basis for establishing databases, i.e. bases of information (databases of product, price, distribution and promotion). This paper gives the basic preconditions for the formulation and functioning of the model. The model is presented through an explication of the elements of its structure (environment, databases, operators, analysts of the information system, decision makers - managers), i.e. input, process, output, feedback and the relations between these elements which are necessary for its optimal functioning. Besides that, basic elements for the implementation of the model into a business system are given, as well as conditions for its efficient functioning and development.
Borštnik, Norma Susana Mankoč
2016-01-01
The spin-charge-family theory, which is a kind of the Kaluza-Klein theories but with fermions carrying two kinds of spins (no charges), offers the explanation for all the assumptions of the standard model, with the origin of families, the higgs and the Yukawa couplings included. It also offers an explanation for other phenomena, like the origin of the dark matter and of the matter/antimatter asymmetry in the universe. It predicts the existence of a fourth family in addition to the observed three, as well as several scalar fields with the weak and the hyper charge of the standard model higgs ($\pm \frac{1}{2}, \mp \frac{1}{2}$, respectively), which determine the mass matrices of family members, offering an explanation of why the fourth family with the masses above $1$ TeV contributes weakly to the gluon-fusion production of the observed higgs and to its decay into two photons, and predicting that the two-photon events, observed at the LHC at $\approx 750$ GeV, might be an indication for the existence of one of several s...
Regularization of turbulence - a comprehensive modeling approach
Geurts, B. J.
2011-12-01
Turbulence readily arises in numerous flows in nature and technology. The large number of degrees of freedom of turbulence poses serious challenges to numerical approaches aimed at simulating and controlling such flows. While the Navier-Stokes equations are commonly accepted to precisely describe fluid turbulence, alternative coarsened descriptions need to be developed to cope with the wide range of length and time scales. These coarsened descriptions are known as large-eddy simulations in which one aims to capture only the primary features of a flow, at considerably reduced computational effort. Such coarsening introduces a closure problem that requires additional phenomenological modeling. A systematic approach to the closure problem, known as regularization modeling, will be reviewed. Its application to multiphase turbulence will be illustrated, in which a basic regularization principle is enforced to physically consistently approximate momentum and scalar transport. Examples of Leray and LANS-alpha regularization are discussed in some detail, as are compatible numerical strategies. We illustrate regularization modeling for turbulence under the influence of rotation and buoyancy and investigate the accuracy with which particle-laden flow can be represented. A discussion of the numerical and modeling errors incurred will be given on the basis of homogeneous isotropic turbulence.
Merging Digital Surface Models Implementing Bayesian Approaches
Sadeq, H.; Drummond, J.; Li, Z.
2016-06-01
In this research different DSMs from different sources have been merged. The merging is based on a probabilistic model using a Bayesian Approach. The implemented data have been sourced from very high resolution satellite imagery sensors (e.g. WorldView-1 and Pleiades). It is deemed preferable to use a Bayesian Approach when the data obtained from the sensors are limited and it is difficult to obtain many measurements or it would be very costly, thus the problem of the lack of data can be solved by introducing a priori estimations of data. To infer the prior data, it is assumed that the roofs of the buildings are specified as smooth, and for that purpose local entropy has been implemented. In addition to the a priori estimations, GNSS RTK measurements have been collected in the field which are used as check points to assess the quality of the DSMs and to validate the merging result. The model has been applied in the West-End of Glasgow containing different kinds of buildings, such as flat roofed and hipped roofed buildings. Both quantitative and qualitative methods have been employed to validate the merged DSM. The validation results have shown that the model was able to improve the quality of the DSMs, improving some characteristics such as the roof surfaces, which consequently led to better representations. In addition to that, the developed model has been compared with the well-established Maximum Likelihood model and showed similar quantitative statistical results and better qualitative results. Although the proposed model has been applied on DSMs that were derived from satellite imagery, it can be applied to any other sourced DSMs.
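The per-cell fusion step can be illustrated with a conjugate Gaussian model: each input DSM contributes a height measurement with its own uncertainty, and a smoothness-based prior (here a simple local average standing in for the entropy-derived prior) completes the posterior. The surface, noise levels and prior spread below are invented for illustration; this is not the authors' pipeline.

```python
import numpy as np
from scipy.ndimage import uniform_filter

rng = np.random.default_rng(8)

# Ground-truth surface and two input DSMs with different noise levels (two different sensors).
truth = np.add.outer(np.linspace(100, 110, 50), np.linspace(0, 5, 50))
sigma1, sigma2 = 0.8, 1.5
dsm1 = truth + rng.normal(0, sigma1, truth.shape)
dsm2 = truth + rng.normal(0, sigma2, truth.shape)

# Prior: a smoothed combination of the inputs with a broad standard deviation,
# a crude stand-in for the entropy-based smoothness prior used for roof surfaces.
prior_mean, prior_sigma = uniform_filter((dsm1 + dsm2) / 2.0, size=5), 5.0

# Conjugate Gaussian fusion: per-cell precision-weighted combination of data and prior.
w1, w2, w0 = 1 / sigma1**2, 1 / sigma2**2, 1 / prior_sigma**2
merged = (w1 * dsm1 + w2 * dsm2 + w0 * prior_mean) / (w1 + w2 + w0)

for name, dsm in [("DSM 1", dsm1), ("DSM 2", dsm2), ("merged", merged)]:
    rmse = np.sqrt(np.mean((dsm - truth) ** 2))   # against check heights (here: the known truth)
    print(f"{name:7s} RMSE: {rmse:.2f} m")
```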
A new approach for Bayesian model averaging
Institute of Scientific and Technical Information of China (English)
TIAN XiangJun; XIE ZhengHui; WANG AiHui; YANG XiaoChun
2012-01-01
Bayesian model averaging (BMA) is a recently proposed statistical method for calibrating forecast ensembles from numerical weather models. However, successful implementation of BMA requires accurate estimates of the weights and variances of the individual competing models in the ensemble. Two methods, namely the Expectation-Maximization (EM) and the Markov Chain Monte Carlo (MCMC) algorithms, are widely used for BMA model training. Both methods have their own respective strengths and weaknesses. In this paper, we first modify the BMA log-likelihood function with the aim of removing the constraint that the BMA weights must sum to one, and then use a limited-memory quasi-Newton algorithm to solve the resulting nonlinear optimization problem, thereby formulating a new approach for BMA (referred to as BMA-BFGS). Several groups of multi-model soil moisture simulation experiments from three land surface models show that the performance of BMA-BFGS is similar to that of the MCMC method in terms of simulation accuracy, and that both are superior to the EM algorithm. On the other hand, the computational cost of the BMA-BFGS algorithm is substantially less than that of MCMC and is almost equivalent to that of EM.
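To make the idea concrete, the sketch below maximises a BMA log-likelihood for a toy two-member ensemble with a quasi-Newton optimiser, using a softmax parameterisation so that no explicit sum-to-one constraint on the weights is needed; the data, member forecasts and parameterisation are assumptions for illustration, not the BMA-BFGS code of the paper:

    import numpy as np
    from scipy.optimize import minimize
    from scipy.stats import norm

    rng = np.random.default_rng(1)
    y = rng.normal(0.3, 0.1, 200)                       # "observed" soil moisture (toy)
    F = np.column_stack([y + rng.normal(0, 0.05, 200),  # forecasts from two ensemble members
                         y + rng.normal(0.02, 0.08, 200)])

    def neg_loglik(theta):
        # softmax removes the explicit sum-to-one constraint on the weights
        w = np.exp(theta[:2]); w /= w.sum()
        sigma = np.exp(theta[2:])                       # positive member std devs
        dens = w * norm.pdf(y[:, None], loc=F, scale=sigma)
        return -np.log(dens.sum(axis=1)).sum()

    res = minimize(neg_loglik, x0=np.zeros(4), method="L-BFGS-B")
    w = np.exp(res.x[:2]); w /= w.sum()
    print("BMA weights:", w.round(3))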
Andraud, Mathieu; Hens, Niel; Marais, Christiaan; Beutels, Philippe
2012-01-01
Dengue is a vector-borne disease recognized as the major arbovirose with four immunologically distant dengue serotypes coexisting in many endemic areas. Several mathematical models have been developed to understand the transmission dynamics of dengue, including the role of cross-reactive antibodies for the four different dengue serotypes. We aimed to review deterministic models of dengue transmission, in order to summarize the evolution of insights for, and provided by, such models, and to identify important characteristics for future model development. We identified relevant publications using PubMed and ISI Web of Knowledge, focusing on mathematical deterministic models of dengue transmission. Model assumptions were systematically extracted from each reviewed model structure, and were linked with their underlying epidemiological concepts. After defining common terms in vector-borne disease modelling, we generally categorised forty-two published models of interest into single-serotype and multi-serotype models. The multi-serotype models assumed either vector-host or direct host-to-host transmission (ignoring the vector component). For each approach, we discussed the underlying structural and parameter assumptions, threshold behaviour and the projected impact of interventions. In view of the expected availability of dengue vaccines, modelling approaches will increasingly focus on the effectiveness and cost-effectiveness of vaccination options. For this purpose, the level of representation of the vector and host populations seems pivotal. Since vector-host transmission models would be required for projections of combined vaccination and vector control interventions, we advocate their use as most relevant to advise health policy in the future. The limited understanding of the factors which influence dengue transmission, as well as limited data availability, remain important concerns when applying dengue models to real-world decision problems.
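A minimal single-serotype vector-host (SIR-SI) sketch, with made-up parameter values, to make the vector-host model class discussed in the review concrete:

    import numpy as np
    from scipy.integrate import solve_ivp

    # Illustrative single-serotype vector-host model (all parameter values are assumptions)
    beta_hv, beta_vh = 0.30, 0.25    # mosquito->human and human->mosquito transmission rates
    gamma, mu_v = 1 / 7.0, 1 / 14.0  # human recovery rate, mosquito birth/death rate

    def rhs(t, x):
        Sh, Ih, Rh, Sv, Iv = x
        Nh, Nv = Sh + Ih + Rh, Sv + Iv
        new_h = beta_hv * Sh * Iv / Nh        # humans infected by infectious mosquitoes
        new_v = beta_vh * Sv * Ih / Nh        # mosquitoes infected by infectious humans
        return [-new_h, new_h - gamma * Ih, gamma * Ih,
                mu_v * Nv - new_v - mu_v * Sv, new_v - mu_v * Iv]

    sol = solve_ivp(rhs, (0, 365), [9999, 1, 0, 20000, 0])
    print("peak human prevalence:", sol.y[1].max())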
Energy Technology Data Exchange (ETDEWEB)
Shao, Kan, E-mail: Shao.Kan@epa.gov [ORISE Postdoctoral Fellow, National Center for Environmental Assessment, U.S. Environmental Protection Agency, Research Triangle Park, NC (United States); Gift, Jeffrey S. [National Center for Environmental Assessment, U.S. Environmental Protection Agency, Research Triangle Park, NC (United States); Setzer, R. Woodrow [National Center for Computational Toxicology, U.S. Environmental Protection Agency, Research Triangle Park, NC (United States)
2013-11-01
Continuous responses (e.g. body weight) are widely used in risk assessment for determining the benchmark dose (BMD), which is used to derive a U.S. EPA reference dose. One critical question that is not often addressed in dose–response assessments is whether to model the continuous data as normally or log-normally distributed. Additionally, if lognormality is assumed, and only summarized response data (i.e., mean ± standard deviation) are available as is usual in the peer-reviewed literature, the BMD can only be approximated. In this study, using the “hybrid” method and relative deviation approach, we first evaluate six representative continuous dose–response datasets reporting individual animal responses to investigate the impact on BMD/BMDL estimates of (1) the distribution assumption and (2) the use of summarized versus individual animal data when a log-normal distribution is assumed. We also conduct simulation studies evaluating model fits to various known distributions to investigate whether the distribution assumption influences BMD/BMDL estimates. Our results indicate that BMDs estimated using the hybrid method are more sensitive to the distribution assumption than counterpart BMDs estimated using the relative deviation approach. The choice of distribution assumption has limited impact on the BMD/BMDL estimates when the within dose-group variance is small, while the lognormality assumption is a better choice for the relative deviation method when data are more skewed, because of its appropriateness in describing the relationship between mean and standard deviation. Additionally, the results suggest that the use of summarized data versus individual response data to characterize log-normal distributions has minimal impact on BMD estimates. - Highlights: • We investigate to what extent the distribution assumption can affect BMD estimates. • Both real data analyses and simulation studies are conducted. • BMDs estimated using hybrid method are more
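As a hedged illustration of the relative deviation definition of the BMD used above, the sketch below fits an exponential dose–response model to summarized means and solves for the dose giving a 10% change from the control mean; the model form, benchmark response level and data are illustrative assumptions, and the hybrid method and the BMDL are not shown:

    import numpy as np
    from scipy.optimize import curve_fit, brentq

    doses = np.array([0.0, 10.0, 30.0, 100.0])
    means = np.array([50.0, 47.5, 43.0, 34.0])          # summarized body-weight means (toy)

    def model(d, a, b):
        return a * np.exp(-b * d)                        # illustrative continuous dose-response model

    (a, b), _ = curve_fit(model, doses, means, p0=[50.0, 0.005])

    bmr = 0.10                                           # 10% relative deviation from control
    bmd = brentq(lambda d: model(d, a, b) - (1 - bmr) * model(0, a, b), 1e-6, doses.max())
    print("BMD (10% relative deviation):", round(bmd, 2))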
DEFF Research Database (Denmark)
Skovbjerg, Helle Marie
2016-01-01
In this full-day workshop we want to discuss how the IDC community can make more explicit the underlying assumptions, values and views regarding children and childhood that inform design decisions. What assumptions do IDC designers and researchers make, and how can they be supported in reflecting on those assumptions and the possible influences on their design decisions? How can we make the assumptions explicit, discuss them in the IDC community and use the discussion to develop higher quality design and research? The workshop will support discussion between researchers, designers and practitioners, and intends to share different approaches for uncovering and reflecting on values, assumptions and views about children and childhood in design.
AN AUTOMATIC APPROACH TO BOX & JENKINS MODELLING
MARCELO KRIEGER
1983-01-01
Despite wide recognition of the quality of the forecasts obtained by applying an ARIMA model to univariate time series, its use has remained restricted by the lack of automatic, computerized procedures. In this work this problem is discussed and an algorithm is proposed.
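Automatic Box & Jenkins order selection is nowadays commonly done by an information-criterion search; the sketch below shows that generic idea (not the algorithm proposed in the record above) using statsmodels:

    import itertools
    import numpy as np
    from statsmodels.tsa.arima.model import ARIMA

    rng = np.random.default_rng(2)
    # toy AR(1) series as a stand-in for a univariate time series
    y = np.zeros(200)
    for t in range(1, 200):
        y[t] = 0.7 * y[t - 1] + rng.normal()

    best = None
    for p, d, q in itertools.product(range(3), range(2), range(3)):
        try:
            fit = ARIMA(y, order=(p, d, q)).fit()
        except Exception:
            continue                          # skip non-convergent candidates
        if best is None or fit.aic < best[0]:
            best = (fit.aic, (p, d, q))
    print("selected (p, d, q):", best[1])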
Modeling in transport phenomena a conceptual approach
Tosun, Ismail
2007-01-01
Modeling in Transport Phenomena, Second Edition presents and clearly explains, with example problems, the basic concepts and their applications to fluid flow, heat transfer, mass transfer, chemical reaction engineering and thermodynamics. A balanced approach between analysis and synthesis is presented, so that students will understand how to use the solution in engineering analysis. Systematic derivations of the equations and the physical significance of each term are given in detail, so that students can easily understand and follow the material. There is a strong incentive in science and engineering to
Modeling for fairness: A Rawlsian approach.
Diekmann, Sven; Zwart, Sjoerd D
2014-06-01
In this paper we introduce the overlapping design consensus for the construction of models in design and the related value judgments. The overlapping design consensus is inspired by Rawls' overlapping consensus. The overlapping design consensus is a well-informed, mutual agreement among all stakeholders based on fairness. Fairness is respected if all stakeholders' interests are given due and equal attention. For reaching such fair agreement, we apply Rawls' original position and reflective equilibrium to modeling. We argue that by striving for the original position, stakeholders expel invalid arguments, hierarchies, unwarranted beliefs, and bargaining effects from influencing the consensus. The reflective equilibrium requires that stakeholders' beliefs cohere with the final agreement and its justification. Therefore, the overlapping design consensus is not only an agreement to decisions, as in most other stakeholder approaches; it is also an agreement to their justification, and this justification is consistent with each stakeholder's beliefs. For supporting fairness, we argue that fairness qualifies as a maxim in modeling. We furthermore distinguish values embedded in a model from values that are implied by its context of application. Finally, we conclude that for reaching an overlapping design consensus, communication about properties of and values related to a model is required.
Schuler, Eric R; Boals, Adriel
2016-05-01
Shattered Assumptions theory (Janoff-Bulman, 1992) posits that experiencing a traumatic event has the potential to diminish the degree of optimism in the assumptions of the world (assumptive world), which could lead to the development of posttraumatic stress disorder. Prior research assessed the assumptive world with a measure that was recently reported to have poor psychometric properties (Kaler et al., 2008). The current study had 3 aims: (a) to assess the psychometric properties of a recently developed measure of the assumptive world, (b) to retrospectively examine how prior adverse events affected the optimism of the assumptive world, and (c) to measure the impact of an intervening adverse event. An 8-week prospective design with a college sample (N = 882 at Time 1 and N = 511 at Time 2) was used to assess the study objectives. We split adverse events into those that were objectively or subjectively traumatic in nature. The new measure exhibited adequate psychometric properties. The report of a prior objective or subjective trauma at Time 1 was related to a less optimistic assumptive world. Furthermore, participants who experienced an intervening objectively traumatic event evidenced a decrease in optimistic views of the world compared with those who did not experience an intervening adverse event. We found support for Shattered Assumptions theory retrospectively and prospectively using a reliable measure of the assumptive world. We discuss future assessments of the measure of the assumptive world and clinical implications to help rebuild the assumptive world with current therapies. (PsycINFO Database Record)
Pedagogic process modeling: Humanistic-integrative approach
Directory of Open Access Journals (Sweden)
Boritko Nikolaj M.
2007-01-01
Full Text Available The paper deals with some current problems of modeling the dynamics of the subject-features development of the individual. The term "process" is considered in the context of the humanistic-integrative approach, in which the principles of self-education are regarded as criteria for efficient pedagogic activity. Four basic characteristics of the pedagogic process are pointed out: intentionality reflects the logicality and regularity of the development of the process; discreteness (stageability) indicates qualitative stages through which the pedagogic phenomenon passes; nonlinearity explains the crisis character of pedagogic processes and reveals inner factors of self-development; situationality requires a selection of pedagogic conditions in accordance with the inner factors, which would enable steering the pedagogic process. Two steps for singling out a particular stage, and the algorithm for developing an integrative model for it, are offered. The suggested conclusions might be of use for further theoretical research, analyses of educational practices and for realistic prediction of pedagogical phenomena.
Nuclear level density: Shell-model approach
Sen'kov, Roman; Zelevinsky, Vladimir
2016-06-01
Knowledge of the nuclear level density is necessary for understanding various reactions, including those in the stellar environment. Usually the combinatorics of a Fermi gas plus pairing is used for finding the level density. Recently a practical algorithm avoiding diagonalization of huge matrices was developed for calculating the density of many-body nuclear energy levels with certain quantum numbers for a full shell-model Hamiltonian. The underlying physics is that of quantum chaos and intrinsic thermalization in a closed system of interacting particles. We briefly explain this algorithm and, when possible, demonstrate the agreement of the results with those derived from exact diagonalization. The resulting level density is much smoother than that coming from conventional mean-field combinatorics. We study the role of various components of residual interactions in the process of thermalization, stressing the influence of incoherent collision-like processes. The shell-model results for the traditionally used parameters are also compared with standard phenomenological approaches.
Modeling Social Annotation: a Bayesian Approach
Plangprasopchok, Anon
2008-01-01
Collaborative tagging systems, such as del.icio.us, CiteULike, and others, allow users to annotate objects, e.g., Web pages or scientific papers, with descriptive labels called tags. The social annotations, contributed by thousands of users, can potentially be used to infer categorical knowledge, classify documents or recommend new relevant information. Traditional text inference methods do not make best use of socially-generated data, since they do not take into account variations in individual users' perspectives and vocabulary. In a previous work, we introduced a simple probabilistic model that takes interests of individual annotators into account in order to find hidden topics of annotated objects. Unfortunately, our proposed approach had a number of shortcomings, including overfitting, local maxima and the requirement to specify values for some parameters. In this paper we address these shortcomings in two ways. First, we extend the model to a fully Bayesian framework. Second, we describe an infinite ver...
Tamin, Jacques
2013-08-01
In the United Kingdom (UK), ethical guidance for doctors assumes a therapeutic setting and a normal doctor-patient relationship. However, doctors with dual obligations may not always operate on the basis of these assumptions in all aspects of their role. In this paper, the situation of UK occupational physicians is described, and a set of models to characterise their different practices is proposed. The interaction between doctor and worker in each of these models is compared with the normal doctor-patient relationship, focusing on the different levels of trust required, the possible power imbalance and the fiduciary obligations that apply. This approach highlights discrepancies between what the UK General Medical Council guidance requires and what is required of a doctor in certain roles or functions. It is suggested that using this modelling approach could also help in clarifying the sources of moral conflict for other doctors with "dual obligations" in their various roles.
Sustainability assessment through analogical models: The approach of aerobic living-organism
Dassisti, Michele
2014-10-01
Most scientific discoveries borrow ideas and inspiration from nature. This observation provides the rationale of the sustainability assessment approach presented here, based on the aerobic living-organism (ALO) analogy already developed by the author, which is founded on the basic assumption that it is reasonable and effective, for several decision-making purposes, to draw an analogy between a system organized by humans (say, a manufacturing system or an enterprise) and an aerobic living organism. The ALO conceptual model already developed is critically reviewed here through the example of a small Italian enterprise manufacturing metal components for civil furniture, to assess its feasibility for sustainability appraisal.
DDH-like Assumptions Based on Extension Rings
DEFF Research Database (Denmark)
Cramer, Ronald; Damgård, Ivan Bjerre; Kiltz, Eike
2011-01-01
DDH, is easy in bilinear groups. This motivates our suggestion of a different type of assumption, the d-vector DDH problems (VDDH), which are based on f(X)= X^d, but with a twist to avoid the problems with reducible polynomials. We show in the generic group model that VDDH is hard in bilinear groups...
Respondent-Driven Sampling – Testing Assumptions: Sampling with Replacement
Directory of Open Access Journals (Sweden)
Barash Vladimir D.
2016-03-01
Full Text Available Classical Respondent-Driven Sampling (RDS estimators are based on a Markov Process model in which sampling occurs with replacement. Given that respondents generally cannot be interviewed more than once, this assumption is counterfactual. We join recent work by Gile and Handcock in exploring the implications of the sampling-with-replacement assumption for bias of RDS estimators. We differ from previous studies in examining a wider range of sampling fractions and in using not only simulations but also formal proofs. One key finding is that RDS estimates are surprisingly stable even in the presence of substantial sampling fractions. Our analyses show that the sampling-with-replacement assumption is a minor contributor to bias for sampling fractions under 40%, and bias is negligible for the 20% or smaller sampling fractions typical of field applications of RDS.
Directory of Open Access Journals (Sweden)
A. Moberg
2014-06-01
Full Text Available Practical issues arise when applying a statistical framework for unbiased ranking of alternative forced climate model simulations by comparison with climate observations from instrumental and proxy data (Part 1 in this series). Given a set of model and observational data, several decisions need to be made, e.g. concerning the region that each proxy series represents, the weighting of different regions, and the time resolution to use in the analysis. Objective selection criteria cannot be given here, but we argue for studying how sensitive the results are to the choices made. The framework is improved by the relaxation of two assumptions: to allow autocorrelation in the statistical model for simulated climate variability, and to enable direct comparison of alternative simulations to test if any of them fit the observations significantly better. The extended framework is applied to a set of simulations driven with forcings for the pre-industrial period 1000–1849 CE and fifteen tree-ring based temperature proxy series. Simulations run with only one external forcing (land-use, volcanic, small-amplitude solar, or large-amplitude solar) do not significantly capture the variability in the tree-ring data, although the simulation with volcanic forcing does so for some experiment settings. When all forcings are combined (using either the small- or large-amplitude solar forcing), including also orbital, greenhouse-gas and non-volcanic aerosol forcing, and additionally used to produce small simulation ensembles starting from slightly different initial ocean conditions, the resulting simulations are highly capable of capturing some observed variability. Nevertheless, for some choices in the experiment design, they are not significantly closer to the observations than when unforced simulations are used, due to highly variable results between regions. It is also not possible to tell whether the small-amplitude or large-amplitude solar forcing causes the multiple
Directory of Open Access Journals (Sweden)
Thomas Heckelei
2012-05-01
Full Text Available This paper reviews and discusses the more recent literature and application of Positive Mathematical Programming in the context of agricultural supply models. Specifically, advances in the empirical foundation of parameter specifications as well as the economic rationalisation of PMP models – both criticized in earlier reviews – are investigated. Moreover, the paper provides an overview on a larger set of models with regular/repeated policy application that apply variants of PMP. Results show that most applications today avoid arbitrary parameter specifications and rely on exogenous information on supply responses to calibrate model parameters. However, only few approaches use multiple observations to estimate parameters, which is likely due to the still considerable technical challenges associated with it. Equally, we found only limited reflection on the behavioral or technological assumptions that could rationalise the PMP model structure while still keeping the model’s advantages.
The basic approach to age-structured population dynamics models, methods and numerics
Iannelli, Mimmo
2017-01-01
This book provides an introduction to age-structured population modeling which emphasises the connection between mathematical theory and underlying biological assumptions. Through the rigorous development of the linear theory and the nonlinear theory alongside numerics, the authors explore classical equations that describe the dynamics of certain ecological systems. Modeling aspects are discussed to show how relevant problems in the fields of demography, ecology, and epidemiology can be formulated and treated within the theory. In particular, the book presents extensions of age-structured modelling to the spread of diseases and epidemics while also addressing the issue of regularity of solutions, the asymptotic behaviour of solutions, and numerical approximation. With sections on transmission models, non-autonomous models and global dynamics, this book fills a gap in the literature on theoretical population dynamics. The Basic Approach to Age-Structured Population Dynamics will appeal to graduate students an...
A generalized nonlinear model-based mixed multinomial logit approach for crash data analysis.
Zeng, Ziqiang; Zhu, Wenbo; Ke, Ruimin; Ash, John; Wang, Yinhai; Xu, Jiuping; Xu, Xinxin
2017-02-01
The mixed multinomial logit (MNL) approach, which can account for unobserved heterogeneity, is a promising unordered model that has been employed in analyzing the effect of factors contributing to crash severity. However, its basic assumption of using a linear function to explore the relationship between the probability of crash severity and its contributing factors can be violated in reality. This paper develops a generalized nonlinear model-based mixed MNL approach which is capable of capturing non-monotonic relationships by developing nonlinear predictors for the contributing factors in the context of unobserved heterogeneity. The crash data on seven Interstate freeways in Washington between January 2011 and December 2014 are collected to develop the nonlinear predictors in the model. Thirteen contributing factors in terms of traffic characteristics, roadway geometric characteristics, and weather conditions are identified to have significant mixed (fixed or random) effects on the crash density in three crash severity levels: fatal, injury, and property damage only. The proposed model is compared with the standard mixed MNL model. The comparison results suggest a slight superiority of the new approach in terms of model fit measured by the Akaike Information Criterion (12.06 percent decrease) and Bayesian Information Criterion (9.11 percent decrease). The predicted crash densities for all three levels of crash severities of the new approach are also closer (on average) to the observations than the ones predicted by the standard mixed MNL model. Finally, the significance and impacts of the contributing factors are analyzed.
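For orientation, the sketch below fits a standard (non-mixed) multinomial logit to synthetic three-level severity data; the mixed model with nonlinear predictors described above requires specialised estimation, and the covariates and effect sizes here are invented for illustration:

    import numpy as np
    import statsmodels.api as sm

    rng = np.random.default_rng(3)
    n = 1000
    X = np.column_stack([rng.normal(60, 10, n),       # e.g. mean speed (assumed factor)
                         rng.binomial(1, 0.3, n)])    # e.g. adverse weather indicator (assumed)
    # toy severity labels: 0 = property damage only, 1 = injury, 2 = fatal
    logits = np.column_stack([np.zeros(n),
                              0.03 * X[:, 0] + 0.5 * X[:, 1] - 2.0,
                              0.05 * X[:, 0] + 1.0 * X[:, 1] - 5.0])
    p = np.exp(logits) / np.exp(logits).sum(axis=1, keepdims=True)
    y = np.array([rng.choice(3, p=pi) for pi in p])

    res = sm.MNLogit(y, sm.add_constant(X)).fit(disp=0)
    print(res.params)            # coefficients for injury and fatal relative to the PDO base level
    print("AIC:", res.aic, "BIC:", res.bic)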
Evaluating face trustworthiness: a model based approach.
Todorov, Alexander; Baron, Sean G; Oosterhof, Nikolaas N
2008-06-01
Judgments of trustworthiness from faces determine basic approach/avoidance responses and approximate the valence evaluation of faces that runs across multiple person judgments. Here, based on trustworthiness judgments and using a computer model for face representation, we built a model for representing face trustworthiness (study 1). Using this model, we generated novel faces with an increased range of trustworthiness and used these faces as stimuli in a functional Magnetic Resonance Imaging study (study 2). Although participants did not engage in explicit evaluation of the faces, the amygdala response changed as a function of face trustworthiness. An area in the right amygdala showed a negative linear response: as the untrustworthiness of faces increased, so did the amygdala response. Areas in the left and right putamen, the latter extending into the anterior insula, showed a similar negative linear response. The response in the left amygdala was quadratic, strongest for faces on both extremes of the trustworthiness dimension. The medial prefrontal cortex and precuneus also showed a quadratic response, but their response was strongest to faces in the middle range of the trustworthiness dimension.
Xu, T.; Valocchi, A. J.
2014-12-01
Effective water resource management typically relies on numerical models to analyse groundwater flow and solute transport processes. These models are usually subject to model structure error due to simplification and/or misrepresentation of the real system. As a result, the model outputs may systematically deviate from measurements, thus violating a key assumption for traditional regression-based calibration and uncertainty analysis. On the other hand, model structure error induced bias can be described statistically in an inductive, data-driven way based on historical model-to-measurement misfit. We adopt a fully Bayesian approach that integrates a Gaussian process error model to account for model structure error to the calibration, prediction and uncertainty analysis of groundwater models. The posterior distributions of parameters of the groundwater model and the Gaussian process error model are jointly inferred using DREAM, an efficient Markov chain Monte Carlo sampler. We test the usefulness of the fully Bayesian approach towards a synthetic case study of surface-ground water interaction under changing pumping conditions. We first illustrate through this example that traditional least squares regression without accounting for model structure error yields biased parameter estimates due to parameter compensation as well as biased predictions. In contrast, the Bayesian approach gives less biased parameter estimates. Moreover, the integration of a Gaussian process error model significantly reduces predictive bias and leads to prediction intervals that are more consistent with observations. The results highlight the importance of explicit treatment of model structure error especially in circumstances where subsequent decision-making and risk analysis require accurate prediction and uncertainty quantification. In addition, the data-driven error modelling approach is capable of extracting more information from observation data than using a groundwater model alone.
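A compact sketch of the central idea above, jointly estimating model parameters and a Gaussian-process discrepancy by maximising the marginal likelihood on a deliberately misspecified toy model; the approach in the abstract instead samples the joint posterior with DREAM, and every name and value below is an assumption for illustration:

    import numpy as np
    from scipy.optimize import minimize

    rng = np.random.default_rng(9)
    x = np.linspace(0, 10, 40)
    # "true" system has a structural feature (sine term) that the simulator lacks
    y_obs = 2.0 * np.exp(-0.3 * x) + 0.2 * np.sin(2 * x) + rng.normal(0, 0.02, x.size)

    def simulator(theta, x):
        a, k = theta
        return a * np.exp(-k * x)                 # deliberately misspecified model

    def neg_log_marglik(p):
        a, k, log_amp, log_len, log_noise = p
        r = y_obs - simulator((a, k), x)          # model-to-measurement misfit
        amp, ell, noise = np.exp(log_amp), np.exp(log_len), np.exp(log_noise)
        K = amp**2 * np.exp(-0.5 * (x[:, None] - x[None, :])**2 / ell**2)
        K += noise**2 * np.eye(x.size)            # GP covariance plus observation noise
        _, logdet = np.linalg.slogdet(K)
        return 0.5 * (r @ np.linalg.solve(K, r) + logdet + x.size * np.log(2 * np.pi))

    res = minimize(neg_log_marglik, x0=[1.5, 0.2, np.log(0.1), np.log(1.0), np.log(0.05)],
                   method="Nelder-Mead", options={"maxiter": 5000})
    print("calibrated (a, k):", res.x[:2])        # a full Bayesian treatment would use MCMC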
Approaches and models of intercultural education
Directory of Open Access Journals (Sweden)
Iván Manuel Sánchez Fontalvo
2013-10-01
Full Text Available Awareness of the need to build an intercultural society must be assumed in all social spheres, and education plays a central role in this. That role is crucial, since education must provide spaces that form people with the virtues and capacities that allow them to live together in multicultural and socially diverse (and sometimes unequal) contexts in an increasingly globalized and interconnected world, and it must foster feelings of shared civic belonging to the neighborhood, city, region and country, together with concern and critical judgement towards marginalization, poverty, misery and the inequitable distribution of wealth, which are causes of structural violence, as well as the will to work for the welfare and transformation of these scenarios. From these premises, it is important to know the approaches and models of intercultural education that have been developed so far, analysing their impact on the educational contexts where they are applied.
Ernst, Anja F; Albers, Casper J
2017-01-01
Misconceptions about the assumptions behind the standard linear regression model are widespread and dangerous. They lead to using linear regression when it is inappropriate, and to employing alternative procedures with less statistical power when it is unnecessary. Our systematic literature review investigated the employment and reporting of assumption checks in twelve clinical psychology journals. Findings indicate that normality of the variables themselves, rather than of the errors, was wrongly held to be a necessary assumption in 4% of papers that use regression. Furthermore, 92% of all papers using linear regression were unclear about their assumption checks, violating APA recommendations. This paper calls for heightened awareness of, and increased transparency in, the reporting of statistical assumption checking.
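A brief sketch of the distinction the review stresses: the normality assumption concerns the errors, so diagnostics should target the residuals rather than the raw variables (synthetic data, standard library calls):

    import numpy as np
    import statsmodels.api as sm
    from scipy import stats
    from statsmodels.stats.diagnostic import het_breuschpagan

    rng = np.random.default_rng(4)
    x = rng.normal(size=200)
    y = 2.0 + 1.5 * x + rng.normal(scale=1.0, size=200)

    fit = sm.OLS(y, sm.add_constant(x)).fit()
    resid = fit.resid

    # the normality assumption concerns the errors, so test the residuals...
    print("Shapiro-Wilk on residuals:", stats.shapiro(resid).pvalue)
    # ...not the raw variables (the common misconception noted in the review)
    print("Shapiro-Wilk on y (not the right check):", stats.shapiro(y).pvalue)
    # homoscedasticity check (Breusch-Pagan)
    print("Breusch-Pagan p-value:", het_breuschpagan(resid, fit.model.exog)[1])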
Modeling of integral-type Stirling refrigerator using system dynamics approach
Energy Technology Data Exchange (ETDEWEB)
Huang, B.J.; Chen, H.Y. [National Taiwan Univ., Dept. of Mechanical Engineering, Taipei (Taiwan)
2000-07-01
A model of the integral-type Stirling refrigerator (ITSR) based on a system dynamics approach is developed in the present study. The system dynamics models are derived to describe the input/output relation of each component of an ITSR. Connecting the equivalent circuits of the components together, we obtain a flow network diagram and derive a transfer function to represent the system dynamics behaviour of an ITSR. The performance analysis of an ITSR is then easily carried out by use of the transfer function under a sinusoidal-signal assumption. The performance calculation for a test refrigerator has shown that the analysis is accurate. An empirical correlation for a correction coefficient r in the expansion space was also derived for better results. The present study has demonstrated how linear system theory, or a system dynamics approach, can be applied to the design of an ITSR. (Author)
Causal Mediation Analysis: Warning! Assumptions Ahead
Keele, Luke
2015-01-01
In policy evaluations, interest may focus on why a particular treatment works. One tool for understanding why treatments work is causal mediation analysis. In this essay, I focus on the assumptions needed to estimate mediation effects. I show that there is no "gold standard" method for the identification of causal mediation effects. In…
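For concreteness, a sketch of the classical product-of-coefficients estimate of a mediated effect with a bootstrap interval; it is valid only under the strong no-unmeasured-confounding assumptions the essay warns about, and the data are synthetic:

    import numpy as np

    rng = np.random.default_rng(5)
    n = 500
    t = rng.binomial(1, 0.5, n)                      # treatment
    m = 0.8 * t + rng.normal(size=n)                 # mediator
    y = 0.5 * m + 0.3 * t + rng.normal(size=n)       # outcome

    def mediated_effect(t, m, y):
        a = np.polyfit(t, m, 1)[0]                   # treatment -> mediator slope
        X = np.column_stack([np.ones_like(t), t, m])
        b = np.linalg.lstsq(X, y, rcond=None)[0][2]  # mediator -> outcome slope, treatment held fixed
        return a * b                                 # indirect (mediated) effect

    boot = [mediated_effect(*(arr[idx] for arr in (t, m, y)))
            for idx in (rng.integers(0, n, n) for _ in range(1000))]
    print("indirect effect:", mediated_effect(t, m, y),
          "95% CI:", np.percentile(boot, [2.5, 97.5]))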
Mexican-American Cultural Assumptions and Implications.
Carranza, E. Lou
The search for presuppositions of a people's thought is not new. Octavio Paz and Samuel Ramos have both attempted to describe the assumptions underlying the Mexican character. Paz described Mexicans as private, defensive, and stoic, characteristics taken to the extreme in the "pachuco." Ramos, on the other hand, described Mexicans as…
The homogeneous marginal utility of income assumption
Demuynck, T.
2015-01-01
We develop a test to verify if every agent from a population of heterogeneous consumers has the same marginal utility of income function. This homogeneous marginal utility of income assumption is often (implicitly) used in applied demand studies because it has nice aggregation properties and facilit
Culturally Biased Assumptions in Counseling Psychology
Pedersen, Paul B.
2003-01-01
Eight clusters of culturally biased assumptions are identified for further discussion from Leong and Ponterotto's (2003) article. The presence of cultural bias demonstrates that cultural bias is so robust and pervasive that it permeates the profession of counseling psychology, even including those articles that effectively attack cultural bias…
Extracurricular Business Planning Competitions: Challenging the Assumptions
Watson, Kayleigh; McGowan, Pauric; Smith, Paul
2014-01-01
Business planning competitions [BPCs] are a commonly offered yet under-examined extracurricular activity. Given the extent of sceptical comment about business planning, this paper offers what the authors believe is a much-needed critical discussion of the assumptions that underpin the provision of such competitions. In doing so it is suggested…
A Bayesian modeling approach for generalized semiparametric structural equation models.
Song, Xin-Yuan; Lu, Zhao-Hua; Cai, Jing-Heng; Ip, Edward Hak-Sing
2013-10-01
In behavioral, biomedical, and psychological studies, structural equation models (SEMs) have been widely used for assessing relationships between latent variables. Regression-type structural models based on parametric functions are often used for such purposes. In many applications, however, parametric SEMs are not adequate to capture subtle patterns in the functions over the entire range of the predictor variable. A different but equally important limitation of traditional parametric SEMs is that they are not designed to handle mixed data types-continuous, count, ordered, and unordered categorical. This paper develops a generalized semiparametric SEM that is able to handle mixed data types and to simultaneously model different functional relationships among latent variables. A structural equation of the proposed SEM is formulated using a series of unspecified smooth functions. The Bayesian P-splines approach and Markov chain Monte Carlo methods are developed to estimate the smooth functions and the unknown parameters. Moreover, we examine the relative benefits of semiparametric modeling over parametric modeling using a Bayesian model-comparison statistic, called the complete deviance information criterion (DIC). The performance of the developed methodology is evaluated using a simulation study. To illustrate the method, we used a data set derived from the National Longitudinal Survey of Youth.
Onisko, Agnieszka; Druzdzel, Marek J.; Austin, R. Marshall
2016-01-01
Background: Classical statistics is a well-established approach in the analysis of medical data. While the medical community seems to be familiar with the concept of a statistical analysis and its interpretation, the Bayesian approach, argued by many of its proponents to be superior to the classical frequentist approach, is still not well-recognized in the analysis of medical data. Aim: The goal of this study is to encourage data analysts to use the Bayesian approach, such as modeling with graphical probabilistic networks, as an insightful alternative to classical statistical analysis of medical data. Materials and Methods: This paper offers a comparison of two approaches to analysis of medical time series data: (1) classical statistical approach, such as the Kaplan–Meier estimator and the Cox proportional hazards regression model, and (2) dynamic Bayesian network modeling. Our comparison is based on time series cervical cancer screening data collected at Magee-Womens Hospital, University of Pittsburgh Medical Center over 10 years. Results: The main outcomes of our comparison are cervical cancer risk assessments produced by the three approaches. However, our analysis discusses also several aspects of the comparison, such as modeling assumptions, model building, dealing with incomplete data, individualized risk assessment, results interpretation, and model validation. Conclusion: Our study shows that the Bayesian approach is (1) much more flexible in terms of modeling effort, and (2) it offers an individualized risk assessment, which is more cumbersome for classical statistical approaches. PMID:28163973
An integrated approach to permeability modeling using micro-models
Energy Technology Data Exchange (ETDEWEB)
Hosseini, A.H.; Leuangthong, O.; Deutsch, C.V. [Society of Petroleum Engineers, Canadian Section, Calgary, AB (Canada)]|[Alberta Univ., Edmonton, AB (Canada)
2008-10-15
An important factor in predicting the performance of steam assisted gravity drainage (SAGD) well pairs is the spatial distribution of permeability. Complications that make the inference of a reliable porosity-permeability relationship impossible include the presence of short-scale variability in sand/shale sequences; preferential sampling of core data; and uncertainty in upscaling parameters. Micro-modelling is a simple and effective method for overcoming these complications. This paper proposed a micro-modelling approach to account for sampling bias, small laminated features with high permeability contrast, and uncertainty in upscaling parameters. The paper described the steps and challenges of micro-modelling and discussed the construction of binary mixture geo-blocks; flow simulation and upscaling; extended power law formalism (EPLF); and the application of micro-modelling and EPLF. An extended power-law formalism to account for changes in clean sand permeability as a function of macroscopic shale content was also proposed and tested against flow simulation results. There was close agreement between the model and simulation results. The proposed methodology was also applied to build the porosity-permeability relationship for laminated and brecciated facies of McMurray oil sands. The modelled relationship was in good agreement with the experimental data. 8 refs., 17 figs.
Quantum cryptography in real-life applications: Assumptions and security
Zhao, Yi
Quantum cryptography, or quantum key distribution (QKD), provides a means of unconditionally secure communication. The security is in principle based on the fundamental laws of physics. Security proofs show that if quantum cryptography is appropriately implemented, even the most powerful eavesdropper cannot decrypt the message from a cipher. The implementations of quantum crypto-systems in real life may not fully comply with the assumptions made in the security proofs. Such discrepancy between the experiment and the theory can be fatal to the security of a QKD system. In this thesis we address a number of these discrepancies. A perfect single-photon source is often assumed in many security proofs. However, a weak coherent source is widely used in a real-life QKD implementation. Decoy state protocols have been proposed as a novel approach to dramatically improve the performance of a weak coherent source based QKD implementation without jeopardizing its security. Here, we present the first experimental demonstrations of decoy state protocols. Our experimental scheme was later adopted by most decoy state QKD implementations. In the security proof of decoy state protocols as well as many other QKD protocols, it is widely assumed that a sender generates a phase-randomized coherent state. This assumption has been enforced in few implementations. We close this gap in two steps: First, we implement and verify the phase randomization experimentally; second, we prove the security of a QKD implementation without the coherent state assumption. In many security proofs of QKD, it is assumed that all the detectors on the receiver's side have identical detection efficiencies. We show experimentally that this assumption may be violated in a commercial QKD implementation due to an eavesdropper's malicious manipulation. Moreover, we show that the eavesdropper can learn part of the final key shared by the legitimate users as a consequence of this violation of the assumptions.
Development of a computationally efficient urban modeling approach
DEFF Research Database (Denmark)
Wolfs, Vincent; Murla, Damian; Ntegeka, Victor
2016-01-01
This paper presents a parsimonious and data-driven modelling approach to simulate urban floods. Flood levels simulated by detailed 1D-2D hydrodynamic models can be emulated using the presented conceptual modelling approach with a very short calculation time. In addition, the model detail can be a...
A phasor approach analysis of multiphoton FLIM measurements of three-dimensional cell culture models
Lakner, P. H.; Möller, Y.; Olayioye, M. A.; Brucker, S. Y.; Schenke-Layland, K.; Monaghan, M. G.
2016-03-01
Fluorescence lifetime imaging microscopy (FLIM) is a useful approach to obtain information regarding the endogenous fluorophores present in biological samples. The concise evaluation of FLIM data requires the use of robust mathematical algorithms. In this study, we developed a user-friendly phasor approach for analyzing FLIM data and applied this method on three-dimensional (3D) Caco-2 models of polarized epithelial luminal cysts in a supporting extracellular matrix environment. These Caco-2 based models were treated with epidermal growth factor (EGF), to stimulate proliferation in order to determine if FLIM could detect such a change in cell behavior. Autofluorescence from nicotinamide adenine dinucleotide (phosphate) (NAD(P)H) in luminal Caco-2 cysts was stimulated by 2-photon laser excitation. Using a phasor approach, the lifetimes of involved fluorophores and their contribution were calculated with fewer initial assumptions when compared to multiexponential decay fitting. The phasor approach simplified FLIM data analysis, making it an interesting tool for non-experts in numerical data analysis. We observed that an increased proliferation stimulated by EGF led to a significant shift in fluorescence lifetime and a significant alteration of the phasor data shape. Our data demonstrates that multiphoton FLIM analysis with the phasor approach is a suitable method for the non-invasive analysis of 3D in vitro cell culture models qualifying this method for monitoring basic cellular features and the effect of external factors.
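A minimal sketch of the phasor computation itself: first-harmonic coordinates (g, s) of a decay histogram, which for a single-exponential decay fall on the universal semicircle; the repetition rate, lifetime and binning are assumed values, not those of the study:

    import numpy as np

    f = 80e6                                    # laser repetition rate (assumed), Hz
    omega = 2 * np.pi * f
    t = np.linspace(0, 1 / f, 256)              # one excitation period, 256 time bins

    def phasor(decay, t, omega):
        """First-harmonic phasor coordinates of a decay histogram."""
        g = np.sum(decay * np.cos(omega * t)) / np.sum(decay)
        s = np.sum(decay * np.sin(omega * t)) / np.sum(decay)
        return g, s

    tau = 2.5e-9                                # e.g. a NAD(P)H-like lifetime (assumed)
    g, s = phasor(np.exp(-t / tau), t, omega)

    # For a single-exponential decay the phasor lies (up to discretization error)
    # on the universal semicircle: g = 1/(1+(w*tau)^2), s = w*tau/(1+(w*tau)^2)
    print(g, 1 / (1 + (omega * tau) ** 2))
    print(s, omega * tau / (1 + (omega * tau) ** 2))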
Medina, Tait Runnfeldt
The increasing global reach of survey research provides sociologists with new opportunities to pursue theory building and refinement through comparative analysis. However, comparison across a broad array of diverse contexts introduces methodological complexities related to the development of constructs (i.e., measurement modeling) that if not adequately recognized and properly addressed undermine the quality of research findings and cast doubt on the validity of substantive conclusions. The motivation for this dissertation arises from a concern that the availability of cross-national survey data has outpaced sociologists' ability to appropriately analyze and draw meaningful conclusions from such data. I examine the implicit assumptions and detail the limitations of three commonly used measurement models in cross-national analysis---summative scale, pooled factor model, and multiple-group factor model with measurement invariance. Using the orienting lens of the double tension I argue that a new approach to measurement modeling that incorporates important cross-national differences into the measurement process is needed. Two such measurement models---multiple-group factor model with partial measurement invariance (Byrne, Shavelson and Muthen 1989) and the alignment method (Asparouhov and Muthen 2014; Muthen and Asparouhov 2014)---are discussed in detail and illustrated using a sociologically relevant substantive example. I demonstrate that the former approach is vulnerable to an identification problem that arbitrarily impacts substantive conclusions. I conclude that the alignment method is built on model assumptions that are consistent with theoretical understandings of cross-national comparability and provides an approach to measurement modeling and construct development that is uniquely suited for cross-national research. The dissertation makes three major contributions: First, it provides theoretical justification for a new cross-national measurement model and
2010-01-01
The paper summarises the results of a study of the effect of piping inspection reliability assumptions on failure probability using structural reliability models. The main interest was to investigate whether it is justifiable to use a simplified probability of detection (POD) curve. Further, the study compared various structural reliability calculation approaches for a set of cases. The results indicate that the use of a simplified POD could be justifiable in RI-ISI applications.
Utilizing Soize's Approach to Identify Parameter and Model Uncertainties
Energy Technology Data Exchange (ETDEWEB)
Bonney, Matthew S. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Univ. of Wisconsin, Madison, WI (United States); Brake, Matthew Robert [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States)
2014-10-01
Quantifying uncertainty in model parameters is a challenging task for analysts. Soize has derived a method that is able to characterize both model and parameter uncertainty independently. This method is explained with the assumption that some experimental data is available, and is divided into seven steps. Monte Carlo analyses are performed to select the optimal dispersion variable to match the experimental data. Along with the nominal approach, an alternative distribution can be used along with corrections that can be utilized to expand the scope of this method. This method is one of a very few methods that can quantify uncertainty in the model form independently of the input parameters. Two examples are provided to illustrate the methodology, and example code is provided in the Appendix.
The OPERA hypothesis: assumptions and clarifications.
Patel, Aniruddh D
2012-04-01
Recent research suggests that musical training enhances the neural encoding of speech. Why would musical training have this effect? The OPERA hypothesis proposes an answer on the basis of the idea that musical training demands greater precision in certain aspects of auditory processing than does ordinary speech perception. This paper presents two assumptions underlying this idea, as well as two clarifications, and suggests directions for future research.
On distributional assumptions and whitened cosine similarities
DEFF Research Database (Denmark)
Loog, Marco
2008-01-01
Recently, an interpretation of the whitened cosine similarity measure as a Bayes decision rule was proposed (C. Liu, "The Bayes Decision Rule Induced Similarity Measures," IEEE Trans. Pattern Analysis and Machine Intelligence, vol. 29, no. 6, pp. 1086-1090, June 2007). This communication makes the observation that some of the distributional assumptions made to derive this measure are very restrictive and, considered simultaneously, even inconsistent.
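For reference, a sketch of the whitened cosine similarity itself: both vectors are transformed by the inverse square root of a sample covariance before the cosine is taken (synthetic data, not the derivation under discussion):

    import numpy as np

    rng = np.random.default_rng(6)
    X = rng.multivariate_normal([0, 0, 0], [[2, .5, 0], [.5, 1, .3], [0, .3, .5]], 500)

    # whitening transform from the sample covariance (eigendecomposition)
    cov = np.cov(X, rowvar=False)
    vals, vecs = np.linalg.eigh(cov)
    W = vecs @ np.diag(vals ** -0.5) @ vecs.T            # cov^{-1/2}

    def whitened_cosine(x, y, W):
        xw, yw = W @ x, W @ y
        return xw @ yw / (np.linalg.norm(xw) * np.linalg.norm(yw))

    x, y = X[0], X[1]
    print("plain cosine:   ", x @ y / (np.linalg.norm(x) * np.linalg.norm(y)))
    print("whitened cosine:", whitened_cosine(x, y, W))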
How to Handle Assumptions in Synthesis
Directory of Open Access Journals (Sweden)
Roderick Bloem
2014-07-01
Full Text Available The increased interest in reactive synthesis over the last decade has led to many improved solutions but also to many new questions. In this paper, we discuss the question of how to deal with assumptions on environment behavior. We present four goals that we think should be met and review several different possibilities that have been proposed. We argue that each of them falls short in at least one aspect.
Connectivity of channelized reservoirs: a modelling approach
Energy Technology Data Exchange (ETDEWEB)
Larue, David K. [ChevronTexaco, Bakersfield, CA (United States); Hovadik, Joseph [ChevronTexaco, San Ramon, CA (United States)
2006-07-01
Connectivity represents one of the fundamental properties of a reservoir that directly affects recovery. If a portion of the reservoir is not connected to a well, it cannot be drained. Geobody or sandbody connectivity is defined as the percentage of the reservoir that is connected, and reservoir connectivity is defined as the percentage of the reservoir that is connected to wells. Previous studies have mostly considered mathematical, physical and engineering aspects of connectivity. In the current study, the stratigraphy of connectivity is characterized using simple, 3D geostatistical models. Based on these modelling studies, stratigraphic connectivity is good, usually greater than 90%, if the net:gross ratio, or sand fraction, is greater than about 30%. At net:gross values less than 30%, there is a rapid diminishment of connectivity as a function of net:gross. This behaviour between net:gross and connectivity defines a characteristic 'S-curve', in which the connectivity is high for net:gross values above 30%, then diminishes rapidly and approaches 0. Well configuration factors that can influence reservoir connectivity are well density, well orientation (vertical or horizontal; horizontal parallel to channels or perpendicular) and length of completion zones. Reservoir connectivity as a function of net:gross can be improved by several factors: presence of overbank sandy facies, deposition of channels in a channel belt, deposition of channels with high width/thickness ratios, and deposition of channels during variable floodplain aggradation rates. Connectivity can be reduced substantially in two-dimensional reservoirs, in map view or in cross-section, by volume support effects and by stratigraphic heterogeneities. It is well known that in two dimensions, the cascade zone for the 'S-curve' of net:gross plotted against connectivity occurs at about 60% net:gross. Generalizing this knowledge, any time that a reservoir can be regarded as
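The qualitative S-curve can be reproduced in a few lines: generate an uncorrelated binary sand/shale grid at a given net:gross, label connected sand cells, and report the fraction of sand in the largest geobody. Real channelised models correlate cells, which shifts the curve, so this is only an assumption-laden illustration rather than the study's geostatistical workflow:

    import numpy as np
    from scipy import ndimage

    rng = np.random.default_rng(7)

    def connectivity(net_to_gross, shape=(40, 40, 40)):
        """Fraction of sand cells in the largest connected geobody (uncorrelated field)."""
        sand = rng.random(shape) < net_to_gross
        labels, n = ndimage.label(sand)               # 6-connected geobodies
        if n == 0:
            return 0.0
        sizes = ndimage.sum(sand, labels, index=range(1, n + 1))
        return sizes.max() / sand.sum()

    for ng in (0.1, 0.2, 0.3, 0.4, 0.6):
        print(f"net:gross {ng:.1f} -> connectivity {connectivity(ng):.2f}")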
Comparison of six statistical approaches in the selection of appropriate fish growth models
Institute of Scientific and Technical Information of China (English)
ZHU Lixin; LI Lifang; LIANG Zhenlin
2009-01-01
The performance of six statistical approaches, which can be used for selection of the best model to describe the growth of individual fish, was analyzed using simulated and real length-at-age data. The six approaches include coefficient of determination (R2), adjusted coefficient of determination (adj.-R2), root mean squared error (RMSE), Akaike's information criterion (AIC), bias correction of AIC (AICc) and Bayesian information criterion (BIC). The simulation data were generated by five growth models with different numbers of parameters. Four sets of real data were taken from the literature. The parameters in each of the five growth models were estimated using the maximum likelihood method under the assumption of the additive error structure for the data. The best supported model by the data was identified using each of the six approaches. The results show that R2 and RMSE have the same properties and perform worst. The sample size has an effect on the performance of adj.-R2, AIC, AICc and BIC. Adj.-R2 does better in small samples than in large samples. AIC is not suitable to use in small samples and tends to select more complex model when the sample size becomes large. AICc and BIC have best performance in small and large sample cases, respectively. Use of AICc or BIC is recommended for selection of fish growth model according to the size of the length-at-age data.
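A sketch of three of the criteria compared in the study, fitting two candidate growth curves under additive normal errors and computing AIC, AICc and BIC from the residual sum of squares; the length-at-age data and starting values are toy assumptions:

    import numpy as np
    from scipy.optimize import curve_fit

    age = np.arange(1, 11, dtype=float)
    length = np.array([12.1, 20.3, 26.8, 31.9, 35.6, 38.4, 40.5, 42.0, 43.1, 43.9])

    def vbgf(t, Linf, K, t0):                 # von Bertalanffy growth function
        return Linf * (1 - np.exp(-K * (t - t0)))

    def gompertz(t, Linf, K, t0):
        return Linf * np.exp(-np.exp(-K * (t - t0)))

    def criteria(model, p0):
        popt, _ = curve_fit(model, age, length, p0=p0, maxfev=10000)
        rss = np.sum((length - model(age, *popt)) ** 2)
        n, k = len(age), len(popt) + 1        # +1 parameter for the error variance
        aic = n * np.log(rss / n) + 2 * k
        aicc = aic + 2 * k * (k + 1) / (n - k - 1)
        bic = n * np.log(rss / n) + k * np.log(n)
        return aic, aicc, bic

    print("VBGF     AIC/AICc/BIC:", criteria(vbgf, [45, 0.3, 0]))
    print("Gompertz AIC/AICc/BIC:", criteria(gompertz, [45, 0.4, 1]))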
Galvanin, Federico; Ballan, Carlo C; Barolo, Massimiliano; Bezzo, Fabrizio
2013-08-01
The use of pharmacokinetic (PK) and pharmacodynamic (PD) models is a common and widespread practice in the preliminary stages of drug development. However, PK-PD models may be affected by structural identifiability issues intrinsically related to their mathematical formulation. A preliminary structural identifiability analysis is usually carried out to check if the set of model parameters can be uniquely determined from experimental observations under the ideal assumptions of noise-free data and no model uncertainty. However, even for structurally identifiable models, real-life experimental conditions and model uncertainty may strongly affect the practical possibility to estimate the model parameters in a statistically sound way. A systematic procedure coupling the numerical assessment of structural identifiability with advanced model-based design of experiments formulations is presented in this paper. The objective is to propose a general approach to design experiments in an optimal way, detecting a proper set of experimental settings that ensure the practical identifiability of PK-PD models. Two simulated case studies based on in vitro bacterial growth and killing models are presented to demonstrate the applicability and generality of the methodology to tackle model identifiability issues effectively, through the design of feasible and highly informative experiments.
A Hydrological Model To Bridge The Gap Between Conceptual and Physically Based Approaches
Lempert, M.; Ostrowski, M.; Blöschl, G.
In the last decade it has become evident that models are needed which account for more realistic physical assumptions and for improved data availability and computational facilities. In general, the dominant objectives are to better account for nonlinearity and to achieve less uncertain parameter identification, which also allows application to ungauged catchments. To meet these objectives and exploit improved computational boundary conditions, a new model has been developed, tested and validated at Darmstadt University of Technology. The model is quasi-nonlinear; it uses GIS-provided data and includes physically based (not physical) model parameters that are quite readily available from digitally stored information. Surface runoff, determined after physically based nonlinear soil moisture modelling, is routed with the kinematic cascade approach according to digital elevation grid models, while sub-surface flow is routed through linear conceptual modules. The model uses generally accepted parameters for soil moisture modelling including vegetation canopy, such as total porosity, field capacity, wilting point, hydraulic conductivities, leaf area index and canopy coverage. The model has been successfully applied to several test sites and catchments at local, micro and lower macro scales. It is the objective of the paper to explain the background of model development, briefly explain the algorithms, discuss model parameter identification, and present case study results.
Combining FDI and AI approaches within causal-model-based diagnosis.
Gentil, Sylviane; Montmain, Jacky; Combastel, Christophe
2004-10-01
This paper presents a model-based diagnostic method designed in the context of process supervision. It has been inspired by both artificial intelligence and control theory. AI contributes tools for qualitative modeling, including causal modeling, whose aim is to split a complex process into elementary submodels. Control theory, within the framework of fault detection and isolation (FDI), provides numerical models for generating and testing residuals, and for taking into account inaccuracies in the model, unknown disturbances and noise. Consistency-based reasoning provides a logical foundation for diagnostic reasoning and clarifies fundamental assumptions, such as single fault and exoneration. The diagnostic method presented in the paper benefits from the advantages of all these approaches. Causal modeling enables the method to focus on sufficient relations for fault isolation, which avoids combinatorial explosion. Moreover, it allows the model to be modified easily without changing any aspect of the diagnostic algorithm. The numerical submodels that are used to detect inconsistency benefit from the precise quantitative analysis of the FDI approach. The FDI models are studied in order to link this method with DX component-oriented reasoning. The recursive on-line use of this algorithm is explained and the concept of local exoneration is introduced.
Jacquin, Hugo; Shakhnovich, Eugene; Cocco, Simona; Monasson, Rémi
2016-01-01
Inverse statistical approaches to determine protein structure and function from Multiple Sequence Alignments (MSA) are emerging as powerful tools in computational biology. However the underlying assumptions of the relationship between the inferred effective Potts Hamiltonian and real protein structure and energetics remain untested so far. Here we use lattice protein model (LP) to benchmark those inverse statistical approaches. We build MSA of highly stable sequences in target LP structures, and infer the effective pairwise Potts Hamiltonians from those MSA. We find that inferred Potts Hamiltonians reproduce many important aspects of 'true' LP structures and energetics. Careful analysis reveals that effective pairwise couplings in inferred Potts Hamiltonians depend not only on the energetics of the native structure but also on competing folds; in particular, the coupling values reflect both positive design (stabilization of native conformation) and negative design (destabilization of competing folds). In addi...
Directory of Open Access Journals (Sweden)
Kelly M Goedert
2013-05-01
Full Text Available Valid research on neglect rehabilitation demands a statistical approach commensurate with the characteristics of neglect rehabilitation data: neglect arises from impairment in distinct brain networks, leading to large between-subject variability in baseline symptoms and recovery trajectories. Studies enrolling medically ill, disabled patients may suffer from missing, unbalanced data and small sample sizes. Finally, assessment of rehabilitation requires a description of continuous recovery trajectories. Unfortunately, the statistical method currently employed in most studies of neglect treatment (repeated-measures ANOVA) does not address these issues well. Here we review an alternative, mixed linear modeling (MLM), that is more appropriate for assessing change over time. MLM better accounts for between-subject heterogeneity in baseline neglect severity and in recovery trajectory. MLM does not require complete or balanced data, nor does it make strict assumptions regarding the data structure. Furthermore, because MLM better models between-subject heterogeneity, it often results in increased power to observe treatment effects with smaller samples. After reviewing current practices in the field, and the assumptions of repeated-measures ANOVA, we provide an introduction to MLM. We review its assumptions, uses, advantages and disadvantages. Using real and simulated data, we illustrate how MLM may improve the ability to detect effects of treatment over ANOVA, particularly with the small samples typical of neglect research. Furthermore, our simulation analyses result in recommendations for the design of future rehabilitation studies. Because between-subject heterogeneity is one important reason why studies of neglect treatments often yield conflicting results, employing statistical procedures that model this heterogeneity more accurately will increase the efficiency of our efforts to find treatments to improve the lives of individuals with neglect.
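A minimal sketch of the kind of mixed linear model described here, written with statsmodels and fully simulated (hypothetical) patient data: a fixed effect of session plus a random intercept and random slope per patient, which is what captures the between-subject heterogeneity in baseline severity and recovery trajectory. The variable names and numeric values are illustrative only.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)

# Simulated neglect-recovery data: 20 patients, 5 sessions each,
# with patient-specific baselines and recovery slopes (hypothetical values).
patients = np.repeat(np.arange(20), 5)
session = np.tile(np.arange(5), 20)
baseline = rng.normal(60, 15, 20)[patients]   # between-subject heterogeneity
slope = rng.normal(-4, 2, 20)[patients]
score = baseline + slope * session + rng.normal(0, 5, 100)
df = pd.DataFrame({"patient": patients, "session": session, "score": score})

# Fixed effect of session; random intercept and slope for each patient
model = smf.mixedlm("score ~ session", df, groups=df["patient"], re_formula="~session")
result = model.fit()
print(result.summary())
```

Unlike repeated-measures ANOVA, this fit does not require every patient to contribute the same number of sessions, so dropped or missing assessments can simply be left out of the data frame.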
Directory of Open Access Journals (Sweden)
Hugo Jacquin
2016-05-01
Full Text Available Inverse statistical approaches to determine protein structure and function from Multiple Sequence Alignments (MSA) are emerging as powerful tools in computational biology. However, the underlying assumptions of the relationship between the inferred effective Potts Hamiltonian and real protein structure and energetics remain untested so far. Here we use a lattice protein model (LP) to benchmark those inverse statistical approaches. We build MSA of highly stable sequences in target LP structures, and infer the effective pairwise Potts Hamiltonians from those MSA. We find that inferred Potts Hamiltonians reproduce many important aspects of 'true' LP structures and energetics. Careful analysis reveals that effective pairwise couplings in inferred Potts Hamiltonians depend not only on the energetics of the native structure but also on competing folds; in particular, the coupling values reflect both positive design (stabilization of the native conformation) and negative design (destabilization of competing folds). In addition to providing detailed structural information, the inferred Potts models used as protein Hamiltonians for design of new sequences are able to generate with high probability completely new sequences with the desired folds, which is not possible using independent-site models. These are remarkable results, as the effective LP Hamiltonians used to generate MSA are not simple pairwise models due to the competition between the folds. Our findings elucidate the reasons for the success of inverse approaches to the modelling of proteins from sequence data, and their limitations.
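To make the pairwise Potts form concrete, the sketch below scores an integer-encoded sequence with fields h and couplings J, i.e. E(s) = -Σ h_i(s_i) - Σ J_ij(s_i, s_j). This is a generic illustration of the model class; the inference of h and J from an MSA (the hard part of the inverse approach) is not reproduced here, and the random parameters in the example are purely hypothetical.

```python
import numpy as np

def potts_energy(seq, h, J):
    """Energy of a sequence under a pairwise Potts Hamiltonian.

    seq : integer-encoded sequence of length L (state indices 0..q-1)
    h   : single-site fields, shape (L, q)
    J   : pairwise couplings, shape (L, L, q, q)
    """
    L = len(seq)
    energy = -sum(h[i, seq[i]] for i in range(L))
    for i in range(L):
        for j in range(i + 1, L):
            energy -= J[i, j, seq[i], seq[j]]
    return energy

# Toy example with random (hypothetical) parameters
L, q = 5, 4
rng = np.random.default_rng(0)
h = rng.normal(size=(L, q))
J = rng.normal(scale=0.1, size=(L, L, q, q))
print(potts_energy(rng.integers(0, q, L), h, J))
```

An independent-site (PSSM-like) model corresponds to setting all J terms to zero, which is exactly the simplification the study shows to be insufficient for sequence design.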
Institute of Scientific and Technical Information of China (English)
DONG Ming
2008-01-01
As a new maintenance method, CBM (condition based maintenance) is becoming more and more important for the health management of complicated and costly equipment. A prerequisite to widespread deployment of CBM technology and practice in industry is effective diagnostics and prognostics. Recently, a pattern recognition technique called HMM (hidden Markov model) was widely used in many fields. However, due to some unrealistic assumptions, diagnostic results from HMM were not so good, and it was difficult to use HMM directly for prognosis. By relaxing the unrealistic assumptions in HMM, this paper presents a novel approach to equipment health management based on an auto-regressive hidden semi-Markov model (AR-HSMM). Compared with HMM, AR-HSMM has three advantages: 1) it allows explicit modeling of the time duration of the hidden states and is therefore capable of prognosis; 2) it can relax the observations' independence assumption by accommodating a link between consecutive observations; 3) it does not follow the unrealistic Markov chain's memoryless assumption and therefore provides more powerful modeling and analysis capability for real problems. To facilitate the computation in the proposed AR-HSMM-based diagnostics and prognostics, new forward-backward variables are defined and a modified forward-backward algorithm is developed. The evaluation of the proposed methodology was carried out through a real-world application case study: health diagnosis and prognosis of hydraulic pumps at Caterpillar Inc. The testing results show that the proposed new approach based on AR-HSMM is effective and can provide useful support for decision-making in equipment health management.
van Maanen, Barend; Nicholls, Robert J.; French, Jon R.; Barkwith, Andrew; Bonaldo, Davide; Burningham, Helene; Brad Murray, A.; Payo, Andres; Sutherland, James; Thornhill, Gillian; Townend, Ian H.; van der Wegen, Mick; Walkden, Mike J. A.
2016-03-01
Coastal and shoreline management increasingly needs to consider morphological change occurring at decadal to centennial timescales, especially that related to climate change and sea-level rise. This requires the development of morphological models operating at a mesoscale, defined by time and length scales of the order of 10^1 to 10^2 years and 10^1 to 10^2 km. So-called 'reduced complexity' models that represent critical processes at scales not much smaller than the primary scale of interest, and are regulated by capturing the critical feedbacks that govern landform behaviour, are proving effective as a means of exploring emergent coastal behaviour at a landscape scale. Such models tend to be computationally efficient and are thus easily applied within a probabilistic framework. At the same time, reductionist models, built upon a more detailed description of hydrodynamic and sediment transport processes, are capable of application at increasingly broad spatial and temporal scales. More qualitative modelling approaches are also emerging that can guide the development and deployment of quantitative models, and these can be supplemented by varied data-driven modelling approaches that can achieve new explanatory insights from observational datasets. Such disparate approaches have hitherto been pursued largely in isolation by mutually exclusive modelling communities. Brought together, they have the potential to facilitate a step change in our ability to simulate the evolution of coastal morphology at scales that are most relevant to managing erosion and flood risk. Here, we advocate and outline a new integrated modelling framework that deploys coupled mesoscale reduced complexity models, reductionist coastal area models, data-driven approaches, and qualitative conceptual models. Integration of these heterogeneous approaches gives rise to model compositions that can potentially resolve decadal- to centennial-scale behaviour of diverse coupled open coast, estuary and inner
Relaxing the zero-sum assumption in neutral biodiversity theory.
Haegeman, Bart; Etienne, Rampal S
2008-05-21
The zero-sum assumption is one of the ingredients of the standard neutral model of biodiversity by Hubbell. It states that the community is saturated all the time, which in this model means that the total number of individuals in the community is constant over time, and therefore introduces a coupling between species abundances. It was shown recently that a neutral model with independent species, and thus without any coupling between species abundances, has the same sampling formula (given a fixed number of individuals in the sample) as the standard model [Etienne, R.S., Alonso, D., McKane, A.J., 2007. The zero-sum assumption in neutral biodiversity theory. J. Theor. Biol. 248, 522-536]. The equilibria of both models are therefore equivalent from a practical point of view. Here we show that this equivalence can be extended to a class of neutral models with density-dependence on the community-level. This result can be interpreted as robustness of the model, i.e. insensitivity of the model to the precise interaction of the species in a neutral community. It can also be interpreted as a lack of resolution, as different mechanisms of interactions between neutral species cannot be distinguished using only a single snapshot of species abundance data.
Uncertainty in biology a computational modeling approach
Gomez-Cabrero, David
2016-01-01
Computational modeling of biomedical processes is gaining more and more weight in current research into the etiology of biomedical problems and potential treatment strategies. Computational modeling makes it possible to reduce, refine and replace animal experimentation, as well as to translate findings obtained in these experiments to the human background. However, these biomedical problems are inherently complex, with a myriad of influencing factors, which strongly complicates the model building and validation process. This book addresses four main issues related to the building and validation of computational models of biomedical processes: model establishment under uncertainty; model selection and parameter fitting; sensitivity analysis and model adaptation; and model predictions under uncertainty. In each of the abovementioned areas, the book discusses a number of key techniques by means of a general theoretical description followed by one or more practical examples. This book is intended for graduate stude...
Evaluating The Markov Assumption For Web Usage Mining
DEFF Research Database (Denmark)
Jespersen, S.; Pedersen, Torben Bach; Thorhauge, J.
2003-01-01
Web usage mining concerns the discovery of common browsing patterns, i.e., pages requested in sequence, from web logs. To cope with the enormous amounts of data, several aggregated structures based on statistical models of web surfing have appeared, e.g., the Hypertext Probabilistic Grammar (HPG...... knowledge there has been no systematic study of the validity of the Markov assumption with respect to web usage mining and the resulting quality of the mined browsing patterns. In this paper we systematically investigate the quality of browsing patterns mined from structures based on the Markov assumption. Formal...... measures of quality, based on the closeness of the mined patterns to the true traversal patterns, are defined and an extensive experimental evaluation is performed, based on two substantial real-world data sets. The results indicate that a large number of rules must be considered to achieve high quality...
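A first-order (history depth 1) Markov structure of the kind questioned here can be estimated directly from session logs by counting page-to-page transitions. The sketch below is a generic illustration with made-up page names, not the HPG construction itself.

```python
from collections import defaultdict

def first_order_markov(sessions):
    """Estimate first-order transition probabilities from web sessions,
    where each session is a list of requested page identifiers."""
    counts = defaultdict(lambda: defaultdict(int))
    for session in sessions:
        for src, dst in zip(session, session[1:]):
            counts[src][dst] += 1
    return {src: {dst: c / sum(dsts.values()) for dst, c in dsts.items()}
            for src, dsts in counts.items()}

# Hypothetical click sessions
sessions = [["home", "news", "sports"], ["home", "sports"], ["news", "home", "news"]]
print(first_order_markov(sessions))
```

Under the Markov assumption, any browsing pattern is read off by multiplying these transition probabilities; the study's point is that patterns mined this way may not exist in the true traversal data when longer histories matter.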
ALREST High Fidelity Modeling Program Approach
2011-05-18
Presentation fragment (recoverable topics only): assumed-PDF model based on a k-ε-g model in the NASA/LaRC Vulcan code; level-set model; Redlich-Kwong and Peng-Robinson equation-of-state treatment of gases and mixtures; potential attractiveness of liquid hydrocarbon engines for boost applications; propensity of hydrocarbon engines for combustion instability.
Shao, Kan; Gift, Jeffrey S; Setzer, R Woodrow
2013-11-01
Continuous responses (e.g. body weight) are widely used in risk assessment for determining the benchmark dose (BMD), which is used to derive a U.S. EPA reference dose. One critical question that is not often addressed in dose-response assessments is whether to model the continuous data as normally or log-normally distributed. Additionally, if log-normality is assumed, and only summarized response data (i.e., mean ± standard deviation) are available, as is usual in the peer-reviewed literature, the BMD can only be approximated. In this study, using the "hybrid" method and the relative deviation approach, we first evaluate six representative continuous dose-response datasets reporting individual animal responses to investigate the impact on BMD/BMDL estimates of (1) the distribution assumption and (2) the use of summarized versus individual animal data when a log-normal distribution is assumed. We also conduct simulation studies evaluating model fits to various known distributions to investigate whether the distribution assumption influences BMD/BMDL estimates. Our results indicate that BMDs estimated using the hybrid method are more sensitive to the distribution assumption than counterpart BMDs estimated using the relative deviation approach. The choice of distribution assumption has limited impact on the BMD/BMDL estimates when the within-dose-group variance is small, while the log-normality assumption is a better choice for the relative deviation method when data are more skewed, because of its appropriateness in describing the relationship between mean and standard deviation. Additionally, the results suggest that the use of summarized data versus individual response data to characterize log-normal distributions has minimal impact on BMD estimates.
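When only summarized data (mean ± SD) are reported and a log-normal distribution is assumed, the log-scale parameters can be recovered from the arithmetic summary with the standard moment relations shown below. This is only the conversion step that such approximate BMD calculations rest on, not the full hybrid or relative deviation procedure; the example dose group is hypothetical.

```python
import numpy as np

def lognormal_params_from_summary(mean, sd):
    """Convert a reported arithmetic mean and standard deviation of a dose group
    into the parameters (mu, sigma) of an assumed log-normal distribution."""
    cv2 = (sd / mean) ** 2
    sigma_log = np.sqrt(np.log(1.0 + cv2))
    mu_log = np.log(mean) - 0.5 * sigma_log ** 2
    return mu_log, sigma_log

# Hypothetical dose group reported as 310 ± 45 g body weight
mu, sigma = lognormal_params_from_summary(310.0, 45.0)
print(mu, sigma)
```

With individual responses available, mu and sigma would instead be estimated directly from the log-transformed data, which is why summarized data can only approximate the BMD.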
Team mental models: techniques, methods, and analytic approaches.
Langan-Fox, J; Code, S; Langfield-Smith, K
2000-01-01
Effective team functioning requires the existence of a shared or team mental model among members of a team. However, the best method for measuring team mental models is unclear. Methods reported vary in terms of how mental model content is elicited and analyzed or represented. We review the strengths and weaknesses of various methods that have been used to elicit, represent, and analyze individual and team mental models and provide recommendations for method selection and development. We describe the nature of mental models and review techniques that have been used to elicit and represent them. We focus on a case study on selecting a method to examine team mental models in industry. The processes involved in the selection and development of an appropriate method for eliciting, representing, and analyzing team mental models are described. The criteria for method selection were (a) applicability to the problem under investigation; (b) practical considerations - suitability for collecting data from the targeted research sample; and (c) theoretical rationale - the assumption that associative networks in memory are a basis for the development of mental models. We provide an evaluation of the method matched to the research problem and make recommendations for future research. The practical applications of this research include the provision of a technique for analyzing team mental models in organizations, the development of methods and processes for eliciting a mental model from research participants in their normal work environment, and a survey of available methodologies for mental model research.
Time derivatives of the spectrum: Relaxing the stationarity assumption
Prieto, G. A.; Thomson, D. J.; Vernon, F. L.
2005-12-01
Spectrum analysis of seismic waveforms has played a significant role in the understanding of multiple aspects of Earth structure and earthquake source physics. In recent years the multitaper spectrum estimation approach (Thomson, 1982) has been applied to geophysical problems, providing not only reliable estimates of the spectrum but also estimates of spectral uncertainties (Thomson and Chave, 1991). However, these improved spectral estimates were developed under the assumption of local stationarity and provide an incomplete description of the observed process. It is obvious that, due to the intrinsic attenuation of the Earth, the amplitudes, and thus the frequency contents, change with time as waves pass through a seismic station. There have been substantial improvements in techniques to analyze non-stationary signals, including wavelet decomposition, the Wigner-Ville spectrum and the dual-frequency spectrum. We apply one of the recently developed techniques, Quadratic Inverse Theory (Thomson, 1990, 1994), combined with the multitaper technique, to look at the time derivatives of the spectrum. If the spectrum is reasonably white in a certain bandwidth, using QI theory we can estimate the derivatives of the spectrum at each frequency. We test the approach on synthetic signals and apply it to records of small earthquakes at local distances. This is a first attempt to extend classical spectrum analysis by relaxing the stationarity assumption that is generally made.
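The multitaper building block used here averages periodograms obtained with discrete prolate spheroidal (Slepian) tapers. The sketch below shows that basic (unweighted) estimate on a synthetic decaying sinusoid; the quadratic-inverse extension for time derivatives of the spectrum is not reproduced, and the signal parameters are hypothetical.

```python
import numpy as np
from scipy.signal.windows import dpss

def multitaper_spectrum(x, nw=4.0, k=7, fs=1.0):
    """Simple multitaper spectrum estimate: average the eigenspectra
    obtained with K Slepian tapers (no adaptive weighting)."""
    n = len(x)
    tapers = dpss(n, nw, Kmax=k)                      # shape (k, n)
    spectra = np.abs(np.fft.rfft(tapers * x, axis=1)) ** 2
    freqs = np.fft.rfftfreq(n, d=1.0 / fs)
    return freqs, spectra.mean(axis=0)

# Synthetic, attenuating test signal sampled at 100 Hz (hypothetical)
fs = 100.0
t = np.arange(0, 10, 1 / fs)
x = np.exp(-0.2 * t) * np.sin(2 * np.pi * 5 * t) + 0.1 * np.random.default_rng(0).normal(size=t.size)
f, s = multitaper_spectrum(x, fs=fs)
```

Because the tapers are mutually orthogonal, the spread of the individual eigenspectra also provides the uncertainty estimates mentioned in the abstract.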
Directory of Open Access Journals (Sweden)
S. J. Noh
2011-10-01
Full Text Available Data assimilation techniques have received growing attention due to their capability to improve prediction. Among various data assimilation techniques, sequential Monte Carlo (SMC) methods, known as "particle filters", are a Bayesian learning process that has the capability to handle non-linear and non-Gaussian state-space models. In this paper, we propose an improved particle filtering approach to consider different response times of internal state variables in a hydrologic model. The proposed method adopts a lagged filtering approach to aggregate model response until the uncertainty of each hydrologic process is propagated. Regularization with an additional move step based on Markov chain Monte Carlo (MCMC) methods is also implemented to preserve sample diversity under the lagged filtering approach. A distributed hydrologic model, water and energy transfer processes (WEP), is implemented for sequential data assimilation through the updating of state variables. The lagged regularized particle filter (LRPF) and the sequential importance resampling (SIR) particle filter are implemented for hindcasting of streamflow at the Katsura catchment, Japan. Control state variables for filtering are soil moisture content and overland flow. Streamflow measurements are used for data assimilation. LRPF shows consistent forecasts regardless of the process noise assumption, while SIR has different values of optimal process noise and shows sensitive variation of confidence intervals, depending on the process noise. Improvement of LRPF forecasts compared to SIR is particularly found for rapidly varied high flows, due to preservation of sample diversity from the kernel, even if particle impoverishment takes place.
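For reference, the SIR baseline against which the lagged regularized filter is compared works as in the minimal sketch below: propagate particles through the state model, weight them by the observation likelihood, and resample. The one-dimensional random-walk example is hypothetical and stands in for the hydrologic state variables; the lagging and kernel regularization that distinguish LRPF are not shown.

```python
import numpy as np

def sir_particle_filter(observations, n_particles, propagate, likelihood, init, seed=0):
    """Minimal sequential importance resampling (SIR) particle filter.

    propagate(x)     : one-step state transition applied to the particle array
    likelihood(y, x) : (unnormalized) observation likelihood p(y | x)
    init(n)          : draw n initial state particles
    """
    rng = np.random.default_rng(seed)
    particles = init(n_particles)
    estimates = []
    for y in observations:
        particles = propagate(particles)
        weights = likelihood(y, particles)
        weights /= weights.sum()
        estimates.append(np.sum(weights * particles))   # posterior mean estimate
        # Resampling combats weight degeneracy but can cause particle impoverishment
        idx = rng.choice(n_particles, size=n_particles, p=weights)
        particles = particles[idx]
    return np.array(estimates)

# Hypothetical usage: noisy random-walk state observed with Gaussian noise
rng = np.random.default_rng(1)
obs = np.cumsum(rng.normal(0, 1, 50))
est = sir_particle_filter(
    obs, 500,
    propagate=lambda x: x + rng.normal(0, 1, x.size),
    likelihood=lambda y, x: np.exp(-0.5 * (y - x) ** 2),
    init=lambda n: np.zeros(n),
)
```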
LEXICAL APPROACH IN TEACHING TURKISH: A COLLOCATIONAL STUDY MODEL
National Research Council Canada - National Science Library
Eser ÖRDEM
2013-01-01
Abstract This study intends to propose Lexical Approach (Lewis, 1998, 2002; Harwood, 2002) and a model for teaching Turkish as a foreign language so that this model can be used in classroom settings...
Modeling of annular two-phase flow using a unified CFD approach
Energy Technology Data Exchange (ETDEWEB)
Li, Haipeng, E-mail: haipengl@kth.se; Anglart, Henryk, E-mail: henryk@kth.se
2016-07-15
Highlights: • Annular two-phase flow has been modeled using a unified CFD approach. • Liquid film was modeled based on a two-dimensional thin film assumption. • Both Eulerian and Lagrangian methods were employed for the gas core flow modeling. - Abstract: A mechanistic model of annular flow with evaporating liquid film has been developed using computational fluid dynamics (CFD). The model is employing a separate solver with two-dimensional conservation equations to predict propagation of a thin boiling liquid film on solid walls. The liquid film model is coupled to a solver of three-dimensional conservation equations describing the gas core, which is assumed to contain a saturated mixture of vapor and liquid droplets. Both the Eulerian–Eulerian and the Eulerian–Lagrangian approach are used to describe the droplet and vapor motion in the gas core. All the major interaction phenomena between the liquid film and the gas core flow have been accounted for, including the liquid film evaporation as well as the droplet deposition and entrainment. The resultant unified framework for annular flow has been applied to the steam-water flow with conditions typical for a Boiling Water Reactor (BWR). The simulation results for the liquid film flow rate show good agreement with the experimental data, with the potential to predict the dryout occurrence based on criteria of critical film thickness or critical film flow rate.
A model-based multisensor data fusion knowledge management approach
Straub, Jeremy
2014-06-01
A variety of approaches exist for combining data from multiple sensors. The model-based approach combines data based on its support for or refutation of elements of the model which in turn can be used to evaluate an experimental thesis. This paper presents a collection of algorithms for mapping various types of sensor data onto a thesis-based model and evaluating the truth or falsity of the thesis, based on the model. The use of this approach for autonomously arriving at findings and for prioritizing data are considered. Techniques for updating the model (instead of arriving at a true/false assertion) are also discussed.
Comparison of two novel approaches to model fibre reinforced concrete
Radtke, F.K.F.; Simone, A.; Sluys, L.J.
2009-01-01
We present two approaches to model fibre reinforced concrete. In both approaches, discrete fibre distributions and the behaviour of the fibre-matrix interface are explicitly considered. One approach employs the reaction forces from fibre to matrix while the other is based on the partition of unity f
Modelling the World Wool Market: A Hybrid Approach
2007-01-01
We present a model of the world wool market that merges two modelling traditions: the partial-equilibrium commodity-specific approach and the computable general-equilibrium approach. The model captures the multistage nature of the wool production system, and the heterogeneous nature of raw wool, processed wool and wool garments. It also captures the important wool producing and consuming regions of the world. We illustrate the utility of the model by estimating the effects of tariff barriers o...
An algebraic approach to the Hubbard model
de Leeuw, Marius
2015-01-01
We study the algebraic structure of an integrable Hubbard-Shastry type lattice model associated with the centrally extended su(2|2) superalgebra. This superalgebra underlies Beisert's AdS/CFT worldsheet R-matrix and Shastry's R-matrix. The considered model specializes to the one-dimensional Hubbard model in a certain limit. We demonstrate that Yangian symmetries of the R-matrix specialize to the Yangian symmetry of the Hubbard model found by Korepin and Uglov. Moreover, we show that the Hubbard model Hamiltonian has an algebraic interpretation as the so-called secret symmetry. We also discuss Yangian symmetries of the A and B models introduced by Frolov and Quinn.
Numerical modelling approach for mine backfill
Indian Academy of Sciences (India)
MUHAMMAD ZAKA EMAD
2017-09-01
Numerical modelling is broadly used for assessing complex scenarios in underground mines, including mining sequence and blast-induced vibrations from production blasting. Sublevel stoping mining methods with delayed backfill are extensively used to exploit steeply dipping ore bodies by Canadian hard-rock metal mines. Mine backfill is an important constituent of the mining process. Numerical modelling of mine backfill material needs special attention, as the numerical model must behave realistically and in accordance with the site conditions. This paper discusses a numerical modelling strategy for modelling mine backfill material. The modelling strategy is studied using a case study mine from the Canadian mining industry. In the end, results of a numerical model parametric study are shown and discussed.
Evaluating The Markov Assumption For Web Usage Mining
DEFF Research Database (Denmark)
Jespersen, S.; Pedersen, Torben Bach; Thorhauge, J.
2003-01-01
Web usage mining concerns the discovery of common browsing patterns, i.e., pages requested in sequence, from web logs. To cope with the enormous amounts of data, several aggregated structures based on statistical models of web surfing have appeared, e.g., the Hypertext Probabilistic Grammar (HPG......) model (Borges, 1999). These techniques typically rely on the Markov assumption with history depth n, i.e., it is assumed that the next requested page is only dependent on the last n pages visited. This is not always valid, i.e. false browsing patterns may be discovered. However, to our...
Regularization of turbulence - a comprehensive modeling approach
Geurts, Bernard J.
2011-01-01
Turbulence readily arises in numerous flows in nature and technology. The large number of degrees of freedom of turbulence poses serious challenges to numerical approaches aimed at simulating and controlling such flows. While the Navier-Stokes equations are commonly accepted to precisely describe fl
Measuring equilibrium models: a multivariate approach
Directory of Open Access Journals (Sweden)
Nadji RAHMANIA
2011-04-01
Full Text Available This paper presents a multivariate methodology for obtaining measures of unobserved macroeconomic variables. The procedure used is the multivariate Hodrick-Prescott filter, which depends on smoothing parameters. The choice of these parameters is crucial. Our approach is based on consistent estimators of these parameters, depending only on the observed data.
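The univariate Hodrick-Prescott filter that this multivariate methodology generalizes is readily available in statsmodels; the sketch below decomposes a hypothetical quarterly log-GDP series into trend and cycle for a fixed smoothing parameter. Estimating the smoothing parameters from the data, which is the paper's contribution, is not shown.

```python
import numpy as np
import statsmodels.api as sm

# Hypothetical quarterly series in logs
y = np.log(np.array([100, 102, 103, 105, 104, 107, 109, 112, 111, 114], dtype=float))

# lamb is the smoothing parameter (1600 is the conventional quarterly choice)
cycle, trend = sm.tsa.filters.hpfilter(y, lamb=1600)
print(trend)   # unobserved trend component; `cycle` holds the deviations from it
```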
A graphical approach to analogue behavioural modelling
Moser, Vincent; Nussbaum, Pascal; Amann, Hans-Peter; Astier, Luc; Pellandini, Fausto
2007-01-01
In order to master the growing complexity of analogue electronic systems, modelling and simulation of analogue hardware at various levels is absolutely necessary. This paper presents an original modelling method based on the graphical description of analogue electronic functional blocks. This method is intended to be automated and integrated into a design framework: specialists create behavioural models of existing functional blocks, that can then be used through high-level selection and spec...
A geometrical approach to structural change modeling
Stijepic, Denis
2013-01-01
We propose a model for studying the dynamics of economic structures. The model is based on qualitative information regarding structural dynamics, in particular, (a) the information on the geometrical properties of trajectories (and their domains) which are studied in structural change theory and (b) the empirical information from stylized facts of structural change. We show that structural change is path-dependent in this model and use this fact to restrict the number of future structural cha...
Accumulating approach to the life-cycle pension model: practical advantages
Directory of Open Access Journals (Sweden)
Yaniv Azoulay
2016-12-01
Full Text Available In the present study, we make an effort to enhance the practical advantages of the life-cycle pension model. We observe that previous studies are based on a "switching" approach, that is, on the assumption that when a pension fund member reaches a certain age, his accumulated savings are fully switched to another fund with a lower risk profile; we suggest an "accumulating" approach, according to which, at the same age, the member's previously accumulated wealth continues to be invested in the same fund, while his new regular pension contributions start being directed to another (less risky) fund. We consider a hypothetical (average) Israeli employee, analyze two age-dependent life-cycle investment distributions of his pension savings, and perform a comparison between the two approaches to the life-cycle model by employing an estimation-based and a simulation-based technique. The results demonstrate that the "accumulating" approach provides: (i) higher estimated annualized real returns and real accumulated savings; (ii) significantly higher simulated mean and median values of real accumulated savings. Moreover, we document that, though the "accumulating" approach increases the standard deviation of total savings, it does not lead to critically low pension wealth levels even for relatively unfavorable sequences of financial assets' returns. Therefore, we conclude that the "accumulating" approach to the life-cycle model has the potential to significantly increase pension fund members' total accumulated wealth relative to the common "switching" approach, without significantly increasing the members' risk.
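The difference between the two rules can be simulated directly, as in the sketch below: before the switch age, contributions go to the risky fund; afterwards, the "switching" rule moves all accumulated wealth to the safe fund, while the "accumulating" rule only redirects new contributions. The return distributions, contribution level and horizon are hypothetical and not the Israeli case analyzed in the paper.

```python
import numpy as np

def terminal_wealth(returns_risky, returns_safe, contrib, switch_idx, accumulating):
    """Terminal savings under the 'switching' vs 'accumulating' life-cycle rules."""
    risky, safe = 0.0, 0.0
    for t, (rr, rs) in enumerate(zip(returns_risky, returns_safe)):
        risky *= 1.0 + rr
        safe *= 1.0 + rs
        if t < switch_idx:
            risky += contrib                 # early career: contribute to the risky fund
        else:
            if not accumulating and risky > 0.0:
                safe += risky                # one-off switch of all accumulated wealth
                risky = 0.0
            safe += contrib                  # later career: contribute to the safe fund
    return risky + safe

rng = np.random.default_rng(1)
rr = rng.normal(0.06, 0.15, 40)   # hypothetical annual risky-fund returns over 40 years
rs = rng.normal(0.02, 0.03, 40)   # hypothetical safe-fund returns
print(terminal_wealth(rr, rs, 10_000, 25, accumulating=False))
print(terminal_wealth(rr, rs, 10_000, 25, accumulating=True))
```

Repeating this over many simulated return paths reproduces the kind of mean/median and dispersion comparison reported in the study.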
Directory of Open Access Journals (Sweden)
Engin Oner
2015-06-01
Full Text Available In the Classical School, founded by Adam Smith, which gives prominence to supply and adopts an approach of neutral finance, the economy is always in a state of full-employment equilibrium. In this system of thought, the main philosophy of which is budget balance, which asserts that there is flexibility between prices and wages and regards public debt as an extraordinary instrument, interference of the state with economic and social life is frowned upon. In line with the views of classical thought, classical fiscal policy is based on three basic assumptions: the "Consumer State Assumption", the assumption that "Public Expenditures are Always Ineffectual", and the assumption concerning the "Impartiality of the Taxes and Expenditure Policies Implemented by the State". On the other hand, the Keynesian School, founded by John Maynard Keynes, gives prominence to demand, adopts the approach of functional finance, and asserts that cases of underemployment equilibrium and over-employment equilibrium exist in the economy as well as full-employment equilibrium, that problems cannot be solved through the invisible hand, that prices and wages are rigid, that interference of the state is essential, and that at this point fiscal policies have to be utilized effectively. Keynesian fiscal policy depends on three primary assumptions: the assumption of the "Filter State", the assumption that "public expenditures are sometimes effective and sometimes ineffective or neutral", and the assumption that "the tax, debt and expenditure policies of the state can never be impartial".
Consumer preference models: fuzzy theory approach
Turksen, I. B.; Wilson, I. A.
1993-12-01
Consumer preference models are widely used in new product design, marketing management, pricing and market segmentation. The purpose of this article is to develop and test a fuzzy set preference model which can represent linguistic variables in individual-level models implemented in parallel with existing conjoint models. The potential improvements in market share prediction and predictive validity can substantially improve management decisions about what to make (product design), for whom to make it (market segmentation) and how much to make (market share prediction).
A New Approach for Magneto-Static Hysteresis Behavioral Modeling
DEFF Research Database (Denmark)
Astorino, Antonio; Swaminathan, Madhavan; Antonini, Giulio
2016-01-01
In this paper, a new behavioral modeling approach for magneto-static hysteresis is presented. Many accurate models are currently available, but none of them seems to be able to correctly reproduce all the possible B-H paths with low computational cost. By contrast, the approach proposed...... achieved when comparing the measured and simulated results....
Nucleon Spin Content in a Relativistic Quark Potential Model Approach
Institute of Scientific and Technical Information of China (English)
DONG YuBing; FENG QingGuo
2002-01-01
Based on a relativistic quark model approach with an effective potential U(r) = (ac/2)(1 + γ0)r^2, the spin content of the nucleon is investigated. Pseudo-scalar interaction between quarks and Goldstone bosons is employed to calculate the couplings between the Goldstone bosons and the nucleon. Different approaches to deal with the center-of-mass correction in the relativistic quark potential model approach are discussed.
Stapleton, Lee M.; Garrod, Guy D.
2007-01-01
Using a range of statistical criteria rooted in Information Theory we show that there is little justification for relaxing the equal weights assumption underlying the United Nation's Human Development Index (HDI) even if the true HDI diverges significantly from this assumption. Put differently, the additional model complexity that unequal weights…
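As a concrete illustration of the weighting question examined here, the sketch below aggregates the three HDI dimension indices with an arbitrary weight vector; equal weights reproduce the standard equal-weights index discussed in the paper. The simple weighted average shown matches the pre-2010 HDI aggregation; the dimension-index values are hypothetical.

```python
import numpy as np

def hdi(life_index, education_index, income_index, weights=(1/3, 1/3, 1/3)):
    """Weighted aggregation of the three HDI dimension indices (each in [0, 1])."""
    w = np.asarray(weights, dtype=float)
    w /= w.sum()
    return float(np.dot(w, [life_index, education_index, income_index]))

print(hdi(0.85, 0.90, 0.75))                              # equal weights (standard HDI)
print(hdi(0.85, 0.90, 0.75, weights=(0.5, 0.25, 0.25)))   # an alternative weighting
```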
Meyers, Renee A.; Seibold, David R.
2009-01-01
In this article, the authors seek to augment Dean Hewes's (1986, 1996) intriguing bracketing and admirable larger effort to "return to basic theorizing in the study of group communication" by making transparent the foundational, and debatable, assumptions that underlie those models. Although these assumptions are addressed indirectly by Hewes, the…
Sensitivity of TRIM projections to management, harvest, yield, and stocking adjustment assumptions.
Susan J. Alexander
1991-01-01
The Timber Resource Inventory Model (TRIM) was used to make several projections of forest industry timber supply for the Douglas-fir region. The sensitivity of these projections to assumptions about management and yields is discussed. A base run is compared to runs in which yields were altered, stocking adjustment was eliminated, harvest assumptions were changed, and...
A simple approach to modeling ductile failure.
Energy Technology Data Exchange (ETDEWEB)
Wellman, Gerald William
2012-06-01
Sandia National Laboratories has the need to predict the behavior of structures after the occurrence of an initial failure. In some cases determining the extent of failure, beyond initiation, is required, while in a few cases the initial failure is a design feature used to tailor the subsequent load paths. In either case, the ability to numerically simulate the initiation and propagation of failures is a highly desired capability. This document describes one approach to the simulation of failure initiation and propagation.
An approach for activity-based DEVS model specification
DEFF Research Database (Denmark)
Alshareef, Abdurrahman; Sarjoughian, Hessam S.; Zarrin, Bahram
2016-01-01
activity-based behavior modeling of parallel DEVS atomic models. We consider UML activities and actions as fundamental units of behavior modeling, especially in the presence of recent advances in the UML 2.5 specifications. We describe in detail how to approach activity modeling with a set of elemental...
Advanced language modeling approaches, case study: Expert search
Hiemstra, Djoerd
2008-01-01
This tutorial gives a clear and detailed overview of advanced language modeling approaches and tools, including the use of document priors, translation models, relevance models, parsimonious models and expectation maximization training. Expert search will be used as a case study to explain the
Zischg, A.; Fuchs, S; M. Keiler; Meißl, G.
2005-01-01
International audience; The presented approach describes a model for a rule-based expert system calculating the temporal variability of the release of wet snow avalanches, using the assumption of avalanche triggering without the loading of new snow. The knowledge base of the model is created by using investigations on the system behaviour of wet snow avalanches in the Italian Ortles Alps, and is represented by a fuzzy logic rule-base. Input parameters of the expert system are numerical and li...
Directory of Open Access Journals (Sweden)
Ernest Benedito
2011-10-01
Full Text Available Purpose: This paper is concerned with a reverse logistics system where returns are stochastically dependent on sales. The aim of the paper is to assess the influence on optimal production capacities when it is assumed that returns are stochastically independent of sales. Design/methodology/approach: This paper presents a model of the system. An approximated model, in which it is assumed that returns are stochastically independent of sales, is formulated to obtain the optimal capacities. The optimal costs of the original and the approximated models are compared in order to assess the influence of the assumption made on returns. Findings: The assumption that returns are stochastically independent of sales is significant in only a few cases. Research limitations/implications: The impact of the assumption on returns is assessed indirectly, by comparing the optimal costs of both models: the original and the approximated. Practical implications: The problem of calculating the optimal capacities in the original model is hard to solve; however, in the approximated model the problem is tractable. When the impact of the assumption that returns are stochastically independent of sales is not significant, the approximated model can be used to calculate the optimal capacities of the original model. Originality/value: Prior to this paper, few papers have addressed the problem of calculating the optimal capacities of reverse logistics systems. The models found in these papers assumed that returns are stochastically independent of sales.
Challenges and opportunities for integrating lake ecosystem modelling approaches
Mooij, Wolf M.; Trolle, Dennis; Jeppesen, Erik; Arhonditsis, George; Belolipetsky, Pavel V.; Chitamwebwa, Deonatus B.R.; Degermendzhy, Andrey G.; DeAngelis, Donald L.; Domis, Lisette N. De Senerpont; Downing, Andrea S.; Elliott, J. Alex; Ruberto, Carlos Ruberto; Gaedke, Ursula; Genova, Svetlana N.; Gulati, Ramesh D.; Hakanson, Lars; Hamilton, David P.; Hipsey, Matthew R.; Hoen, Jochem 't; Hulsmann, Stephan; Los, F. Hans; Makler-Pick, Vardit; Petzoldt, Thomas; Prokopkin, Igor G.; Rinke, Karsten; Schep, Sebastiaan A.; Tominaga, Koji; Van Dam, Anne A.; Van Nes, Egbert H.; Wells, Scott A.; Janse, Jan H.
2010-01-01
A large number and wide variety of lake ecosystem models have been developed and published during the past four decades. We identify two challenges for making further progress in this field. One such challenge is to avoid developing more models largely following the concept of others ('reinventing the wheel'). The other challenge is to avoid focusing on only one type of model, while ignoring new and diverse approaches that have become available ('having tunnel vision'). In this paper, we aim at improving the awareness of existing models and knowledge of concurrent approaches in lake ecosystem modelling, without covering all possible model tools and avenues. First, we present a broad variety of modelling approaches. To illustrate these approaches, we give brief descriptions of rather arbitrarily selected sets of specific models. We deal with static models (steady state and regression models), complex dynamic models (CAEDYM, CE-QUAL-W2, Delft 3D-ECO, LakeMab, LakeWeb, MyLake, PCLake, PROTECH, SALMO), structurally dynamic models and minimal dynamic models. We also discuss a group of approaches that could all be classified as individual based: super-individual models (Piscator, Charisma), physiologically structured models, stage-structured models and trait-based models. We briefly mention genetic algorithms, neural networks, Kalman filters and fuzzy logic. Thereafter, we zoom in, as an in-depth example, on the multi-decadal development and application of the lake ecosystem model PCLake and related models (PCLake Metamodel, Lake Shira Model, IPH-TRIM3D-PCLake). In the discussion, we argue that while the historical development of each approach and model is understandable given its 'leading principle', there are many opportunities for combining approaches. We take the point of view that a single 'right' approach does not exist and should not be strived for. Instead, multiple modelling approaches, applied concurrently to a given problem, can help develop an integrative
Inference and Assumption in Historical Seismology
Musson, R. M. W.
The principal aim in studies of historical earthquakes is usually to be able to derive parameters for past earthquakes from macroseismic or other data and thus extend back in time parametric earthquake catalogues, often with improved seismic hazard studies as the ultimate goal. In cases of relatively recent historical earthquakes, for example, those of the 18th and 19th centuries, it is often the case that there is such an abundance of available macroseismic data that estimating earthquake parameters is relatively straightforward. For earlier historical periods, especially medieval and earlier, and also for areas where settlement or documentation are sparse, the situation is much harder. The seismologist often finds that he has only a few data points (or even one) for an earthquake that nevertheless appears to be regionally significant. In such cases, it is natural that the investigator will attempt to make the most of the available data, expanding it by making working assumptions, and from these deriving conclusions by inference (i.e. the process of proceeding logically from some premise). This can be seen in a number of existing studies; in some cases extremely slight data are so magnified by the use of inference that one must regard the results as tentative in the extreme. Two main types of inference can be distinguished. The first type is inference from documentation. This is where assumptions are made such as: the absence of a report of the earthquake from this monastic chronicle indicates that at this locality the earthquake was not felt. The second type is inference from seismicity. Here one deals with arguments such as: all recent earthquakes felt at town X are events occurring in seismic zone Y, therefore this ancient earthquake which is only reported at town X probably also occurred in this zone.
Directory of Open Access Journals (Sweden)
Mojtaba Ganjali
Full Text Available In this paper, the problem of identifying differentially expressed genes under different conditions using gene expression microarray data, in the presence of outliers, is discussed. For this purpose, the robust modeling of gene expression data using some powerful distributions known as normal/independent distributions is considered. These distributions include the Student's t and normal distributions which have been used previously, but also include extensions such as the slash, the contaminated normal and the Laplace distributions. The purpose of this paper is to identify differentially expressed genes by considering these distributional assumptions instead of the normal distribution. A Bayesian approach using the Markov Chain Monte Carlo method is adopted for parameter estimation. Two publicly available gene expression data sets are analyzed using the proposed approach. The use of the robust models for detecting differentially expressed genes is investigated. This investigation shows that the choice of model for differentiating gene expression data is very important. This is due to the small number of replicates for each gene and the existence of outlying data. Comparison of the performance of these models is made using different statistical criteria and the ROC curve. The method is illustrated using some simulation studies. We demonstrate the flexibility of these robust models in identifying differentially expressed genes.
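The robustness argument can be seen in miniature without the paper's Bayesian MCMC machinery: fitting a heavy-tailed Student-t distribution instead of a normal to replicate measurements makes the estimated location far less sensitive to an outlying replicate. The sketch below uses maximum likelihood as a stand-in for the Bayesian fit, and the expression values are hypothetical.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
# Hypothetical log-expression values for one gene, with one outlying replicate
expr = np.append(rng.normal(2.0, 0.3, 8), 6.5)

mu_norm, sd_norm = stats.norm.fit(expr)        # normal assumption
df_t, mu_t, sd_t = stats.t.fit(expr)           # heavy-tailed Student-t assumption

# The t-location stays close to the bulk of the replicates, while the
# normal mean is pulled toward the outlier.
print(mu_norm, mu_t)
```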
A feature-based approach to modeling protein-DNA interactions.
Directory of Open Access Journals (Sweden)
Eilon Sharon
Full Text Available Transcription factor (TF) binding to its DNA target site is a fundamental regulatory interaction. The most common model used to represent TF binding specificities is a position specific scoring matrix (PSSM), which assumes independence between binding positions. However, in many cases, this simplifying assumption does not hold. Here, we present feature motif models (FMMs), a novel probabilistic method for modeling TF-DNA interactions, based on log-linear models. Our approach uses sequence features to represent TF binding specificities, where each feature may span multiple positions. We develop the mathematical formulation of our model and devise an algorithm for learning its structural features from binding site data. We also developed a discriminative motif finder, which discovers de novo FMMs that are enriched in target sets of sequences compared to background sets. We evaluate our approach on synthetic data and on the widely used TF chromatin immunoprecipitation (ChIP) dataset of Harbison et al. We then apply our algorithm to high-throughput TF ChIP data from mouse and human, reveal sequence features that are present in the binding specificities of mouse and human TFs, and show that FMMs explain TF binding significantly better than PSSMs. Our FMM learning and motif finder software are available at http://genie.weizmann.ac.il/.
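The contrast between the independence assumption and a feature-based score can be sketched in a few lines: a PSSM sums per-position log-probabilities, while a log-linear feature model adds log-weights for features that span several positions. This is a generic illustration of the model class, not the authors' learning algorithm; the matrix and feature values are hypothetical.

```python
import numpy as np

def pssm_score(site, pssm):
    """Log-likelihood score under a PSSM, which assumes independent positions."""
    return sum(np.log(pssm[i][b]) for i, b in enumerate(site))

def feature_score(site, pssm, pair_features):
    """Feature-motif-style score: PSSM score plus log-weights of multi-position features."""
    score = pssm_score(site, pssm)
    for (i, bi, j, bj), log_w in pair_features.items():
        if site[i] == bi and site[j] == bj:
            score += log_w
    return score

# Toy 4-bp motif with a uniform PSSM and one hypothetical coupled feature
pssm = [{b: 0.25 for b in "ACGT"} for _ in range(4)]
pair_features = {(0, "A", 3, "T"): 1.2}
print(pssm_score("ACGT", pssm), feature_score("ACGT", pssm, pair_features))
```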
Developing an Approach to Model UV Fluxes in Smoke Laden Conditions over Central Asia
Park, Y. H.; Sokolik, I. N.
2016-12-01
The UV characteristics of smoke aerosols are valuable inputs to UV Index forecasts, air quality studies, and assessments of the impact of regional environmental changes. The wavelength dependence of light absorption by aerosol particles is pronounced throughout the UV spectral region. Also, under the assumption that smoke consists of BC, OC, and sulfate, the mass fractions of these components differ with fire type. The purpose of this study is to introduce an approach to calculate UV fluxes in aerosol laden conditions, and to compare the computed irradiance with measured irradiance in the UV spectrum. To compute the spectral optical properties (e.g., the effective single scattering albedo, asymmetry parameter, and aerosol optical thickness) for the selected scenarios, representative size distributions and refractive indices are selected and used in the Mie code. Smoke aerosol information (e.g. emission injection height, mass concentration of smoke components) from WRF-Chem is used to run the tropospheric ultraviolet and visible (TUV) model. Using the TUV model, we compare modeled and measured irradiance in the UV spectrum under smoke aerosol conditions. An advantage of this approach and the uncertainty of the evaluation are discussed. Overall, the results of this investigation show that this approach is valuable for estimating UV fluxes in smoke laden conditions.
Random matrix model approach to chiral symmetry
Verbaarschot, J J M
1996-01-01
We review the application of random matrix theory (RMT) to chiral symmetry in QCD. Starting from the general philosophy of RMT we introduce a chiral random matrix model with the global symmetries of QCD. Exact results are obtained for universal properties of the Dirac spectrum: i) finite volume corrections to the valence quark mass dependence of the chiral condensate, and ii) microscopic fluctuations of Dirac spectra. Comparisons with lattice QCD simulations are made. Most notably, the variance of the number of levels in an interval containing n levels on average is suppressed by a factor (log n)/(π^2 n). An extension of the random matrix model to nonzero temperatures and chemical potential provides us with a schematic model of the chiral phase transition. In particular, this elucidates the nature of the quenched approximation at nonzero chemical potential.
Machine Learning Approaches for Modeling Spammer Behavior
Islam, Md Saiful; Islam, Md Rafiqul
2010-01-01
Spam is commonly known as unsolicited or unwanted email messages in the Internet causing potential threat to Internet security. Users spend a valuable amount of time deleting spam emails. More importantly, ever increasing spam emails occupy server storage space and consume network bandwidth. Keyword-based spam email filtering strategies will eventually be less successful to model spammer behavior as the spammer constantly changes their tricks to circumvent these filters. The evasive tactics that the spammer uses are patterns and these patterns can be modeled to combat spam. This paper investigates the possibilities of modeling spammer behavioral patterns by well-known classification algorithms such as the Naïve Bayesian classifier (Naïve Bayes), Decision Tree Induction (DTI) and Support Vector Machines (SVMs). Preliminary experimental results demonstrate a promising detection rate of around 92%, which is considerably an enhancement of performance compared to similar spammer behavior modeling research.
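A minimal scikit-learn sketch of the Naïve Bayes variant of this kind of classifier is shown below; the tiny corpus is hypothetical and stands in for the labelled spam/ham datasets such studies use, and the other classifiers mentioned (decision trees, SVMs) could be swapped in for the final pipeline step.

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

# Hypothetical labelled messages: 1 = spam, 0 = ham
messages = ["win money now", "meeting at noon", "cheap pills online", "project status update"]
labels = [1, 0, 1, 0]

# Bag-of-words features feeding a multinomial Naive Bayes classifier
model = make_pipeline(CountVectorizer(), MultinomialNB())
model.fit(messages, labels)
print(model.predict(["win cheap money"]))
```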
Infectious disease modeling a hybrid system approach
Liu, Xinzhi
2017-01-01
This volume presents infectious diseases modeled mathematically, taking seasonality and changes in population behavior into account, using a switched and hybrid systems framework. The scope of coverage includes background on mathematical epidemiology, including classical formulations and results; a motivation for seasonal effects and changes in population behavior, an investigation into term-time forced epidemic models with switching parameters, and a detailed account of several different control strategies. The main goal is to study these models theoretically and to establish conditions under which eradication or persistence of the disease is guaranteed. In doing so, the long-term behavior of the models is determined through mathematical techniques from switched systems theory. Numerical simulations are also given to augment and illustrate the theoretical results and to help study the efficacy of the control schemes.
Second Quantization Approach to Stochastic Epidemic Models
Mondaini, Leonardo
2015-01-01
We show how the standard field theoretical language based on creation and annihilation operators may be used for a straightforward derivation of closed master equations describing the population dynamics of multivariate stochastic epidemic models. In order to do that, we introduce an SIR-inspired stochastic model for hepatitis C virus epidemic, from which we obtain the time evolution of the mean number of susceptible, infected, recovered and chronically infected individuals in a population whose total size is allowed to change.
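For comparison with the operator-based derivation described here, the same SIR-type population dynamics can be simulated directly as a continuous-time Markov jump process with Gillespie's algorithm, and the mean trajectories over many runs correspond to the master-equation averages. The sketch below is a plain SIR illustration (the chronic-infection compartment of the hepatitis C variant is omitted), and all rate values are hypothetical.

```python
import numpy as np

def gillespie_sir(beta, gamma, s0, i0, r0, t_max, seed=0):
    """Stochastic SIR dynamics simulated with Gillespie's algorithm."""
    rng = np.random.default_rng(seed)
    s, i, r, t = s0, i0, r0, 0.0
    n = s0 + i0 + r0
    history = [(t, s, i, r)]
    while t < t_max and i > 0:
        rate_inf = beta * s * i / n      # infection events S -> I
        rate_rec = gamma * i             # recovery events  I -> R
        total = rate_inf + rate_rec
        t += rng.exponential(1.0 / total)
        if rng.random() < rate_inf / total:
            s, i = s - 1, i + 1
        else:
            i, r = i - 1, r + 1
        history.append((t, s, i, r))
    return history

# Hypothetical outbreak in a population of 1000
trajectory = gillespie_sir(beta=0.3, gamma=0.1, s0=990, i0=10, r0=0, t_max=160)
```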
"Dispersion modeling approaches for near road | Science ...
Roadway design and roadside barriers can have significant effects on the dispersion of traffic-generated pollutants, especially in the near-road environment. Dispersion models that can accurately simulate these effects are needed to fully assess these impacts for a variety of applications. For example, such models can be useful for evaluating the mitigation potential of roadside barriers in reducing near-road exposures and their associated adverse health effects. Two databases, a tracer field study and a wind tunnel study, provide measurements used in the development and/or validation of algorithms to simulate dispersion in the presence of noise barriers. The tracer field study was performed in Idaho Falls, ID, USA with a 6-m noise barrier and a finite line source in a variety of atmospheric conditions. The second study was performed in the meteorological wind tunnel at the US EPA and simulated line sources at different distances from a model noise barrier to capture the effect on emissions from individual lanes of traffic. In both cases, velocity and concentration measurements characterized the effect of the barrier on dispersion.This paper presents comparisons with the two datasets of the barrier algorithms implemented in two different dispersion models: US EPA’s R-LINE (a research dispersion modelling tool under development by the US EPA’s Office of Research and Development) and CERC’s ADMS model (ADMS-Urban). In R-LINE the physical features reveal
Flipped models in Trinification: A Comprehensive Approach
Rodríguez, Oscar; Ponce, William A; Rojas, Eduardo
2016-01-01
By considering the 3-3-1 and the left-right symmetric models as low energy effective theories of the trinification group, alternative versions of these models are found. The new neutral gauge bosons in the universal 3-3-1 model and its flipped versions are considered; the left-right symmetric model and the two flipped variants of it are also studied. For these models, the couplings of the Z' bosons to the standard model fermions are reported. The explicit form of the null space of the vector boson mass matrix for an arbitrary Higgs tensor and gauge group is also presented. In the general framework of the trinification gauge group, and by using the LHC experimental results and EW precision data, limits on the Z' mass and the mixing angle between Z and the new gauge bosons Z' are imposed. The general results call for very small mixing angles, in the range of 10^-3 radians, and M_Z' > 2.5 TeV.
Dean, D. W.; Illangasekare, T. H.; Turner, A.; Russell, T. F.
2004-12-01
Modeling of the complex behavior of DNAPLs in naturally heterogeneous subsurface formations poses many challenges. Even though considerable progress has been made in developing improved numerical schemes to solve the governing partial differential equations, most of these methods still rely on a deterministic description of the processes. This research explores the use of stochastic differential equations to model multiphase flow in heterogeneous aquifers, specifically the flow of DNAPLs in saturated soils. The models developed are evaluated using experimental data generated in two-dimensional test systems. A fundamental assumption used in the model formulation is that the movement of a fluid particle in each phase is described by a stochastic process and that the positions of all fluid particles over time are governed by a specific law. It is this law which we seek to determine. The approach results in a nonlinear stochastic differential equation describing the position of the non-wetting phase fluid particle. The nonlinearity in the stochastic differential equation arises because both the drift and diffusion coefficients depend on the volumetric fraction of the phase, which in turn depends on the position of the fluid particles in the problem domain. The concept of a fluid particle is central to the development of the proposed model. Expressions for both saturation and volumetric fraction are developed using this concept of a fluid particle. Darcy's law and the continuity equation are used to derive a Fokker-Planck equation governing flow. The Ito calculus is then applied to derive a stochastic differential equation (SDE) for the non-wetting phase. This SDE has both drift and diffusion terms which depend on the volumetric fraction of the non-wetting phase. Standard stochastic theories based on the Ito calculus and the Wiener process and the equivalent Fokker-Planck PDEs are typically used to model diffusion processes. However, these models, in their usual form
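Once drift and diffusion coefficients are specified, an SDE of the form dX = a(X) dt + b(X) dW can be integrated numerically with the Euler-Maruyama scheme, as in the sketch below. The particular drift and diffusion functions in the example are hypothetical placeholders; in the paper they depend on the volumetric fraction of the non-wetting phase rather than on position alone.

```python
import numpy as np

def euler_maruyama(x0, drift, diffusion, dt, n_steps, seed=0):
    """Simulate one path of dX = drift(X) dt + diffusion(X) dW."""
    rng = np.random.default_rng(seed)
    x = np.empty(n_steps + 1)
    x[0] = x0
    for k in range(n_steps):
        dw = rng.normal(0.0, np.sqrt(dt))                    # Wiener increment
        x[k + 1] = x[k] + drift(x[k]) * dt + diffusion(x[k]) * dw
    return x

# Hypothetical particle-position SDE with state-dependent drift and diffusion
path = euler_maruyama(
    x0=0.0,
    drift=lambda x: -0.5 * x,
    diffusion=lambda x: 0.3 + 0.1 * abs(x),
    dt=0.01,
    n_steps=1000,
)
```

Averaging many such particle paths yields the phase saturation fields whose evolution the equivalent Fokker-Planck equation describes.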
Lightweight approach to model traceability in a CASE tool
Vileiniskis, Tomas; Skersys, Tomas; Pavalkis, Saulius; Butleris, Rimantas; Butkiene, Rita
2017-07-01
A term "model-driven" is not at all a new buzzword within the ranks of system development community. Nevertheless, the ever increasing complexity of model-driven approaches keeps fueling all kinds of discussions around this paradigm and pushes researchers forward to research and develop new and more effective ways to system development. With the increasing complexity, model traceability, and model management as a whole, becomes indispensable activities of model-driven system development process. The main goal of this paper is to present a conceptual design and implementation of a practical lightweight approach to model traceability in a CASE tool.
An approach to thermochemical modeling of nuclear waste glass
Energy Technology Data Exchange (ETDEWEB)
Besmann, T.M.; Beahm, E.C. [Oak Ridge National Lab., TN (United States); Spear, K.E. [Pennsylvania State Univ., University Park, PA (United States)
1998-11-01
This initial work is aimed at developing a basic understanding of the phase equilibria and solid solution behavior of the constituents of waste glass. Current, experimentally determined values are less than desirable since they depend on measurement of the leach rate under non-realistic conditions designed to accelerate processes that occur on a geologic time scale. The often-used assumption that the activity of a species is either unity or equal to the overall concentration of the metal can also yield misleading results. The associate species model, a recent development in thermochemical modeling, will be applied to these systems to more accurately predict chemical activities in such complex systems as waste glasses.
Manufacturing Excellence Approach to Business Performance Model
Directory of Open Access Journals (Sweden)
Jesus Cruz Alvarez
2015-03-01
Full Text Available Six Sigma, lean manufacturing, total quality management, quality control, and quality function deployment are the fundamental set of tools to enhance productivity in organizations. Some research outlines the benefit of each tool in the particular context of a firm's productivity, but not in the broader context of a firm's competitiveness, which is achieved through business performance. The aim of this theoretical research paper is to contribute to this end and propose a manufacturing excellence approach that links productivity tools to the broader context of business performance.
A Bayesian Model Committee Approach to Forecasting Global Solar Radiation
Lauret, Philippe; Muselli, Marc; David, Mathieu; Diagne, Hadja; Voyant, Cyril
2012-01-01
This paper proposes a rather new modelling approach in the realm of solar radiation forecasting. In this work, two forecasting models, an Autoregressive Moving Average (ARMA) model and a Neural Network (NN) model, are combined to form a model committee. Bayesian inference is used to assign a probability to each model in the committee, so that each model's predictions are weighted by its respective probability. The models are fitted to one year of hourly Global Horizontal Irradiance (GHI) measurements. Another year (the test set) is used for making genuine one-hour-ahead (h+1) out-of-sample forecast comparisons. The proposed approach is benchmarked against the persistence model. The first results show an improvement brought by this approach.
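A minimal sketch of the committee idea, under the assumption of Gaussian forecast errors on a validation set (the paper's exact weighting scheme may differ), is:

```python
import numpy as np

def committee_weights(errors_per_model, sigma):
    """Posterior model probabilities from Gaussian log-likelihoods of past forecast errors."""
    loglik = np.array([-0.5 * np.sum((e / sigma) ** 2) for e in errors_per_model])
    loglik -= loglik.max()                 # numerical stability before exponentiation
    w = np.exp(loglik)
    return w / w.sum()

# Errors of ARMA and NN forecasts on a validation year (placeholder arrays here).
err_arma = np.random.normal(0, 30, 8760)
err_nn = np.random.normal(0, 25, 8760)
w = committee_weights([err_arma, err_nn], sigma=25.0)

# Committee forecast = probability-weighted sum of the individual h+1 forecasts.
forecast_arma, forecast_nn = 410.0, 395.0  # W/m^2, illustrative GHI values
committee_forecast = w[0] * forecast_arma + w[1] * forecast_nn
```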
MDA based-approach for UML Models Complete Comparison
Chaouni, Samia Benabdellah; Mouline, Salma
2011-01-01
If a modeling task is distributed, it will frequently be necessary to integrate models developed by different team members. Problems occur in the model integration step and particularly in the comparison phase of the integration. This issue has been discussed in several domains and for various kinds of models. However, previous approaches have not correctly handled semantic comparison. In the current paper, we provide an MDA-based approach for model comparison which aims at comparing UML models. We develop a hybrid approach which takes syntactic, semantic and structural comparison aspects into account. For this purpose, we use the domain ontology as well as other resources such as dictionaries. We propose a decision support system which permits the user to validate (or not) the correspondences extracted in the comparison phase. For the implementation, we propose an extension of the generic correspondence metamodel AMW in order to transform UML models into the correspondence model.
Modeling Misbehavior in Cooperative Diversity: A Dynamic Game Approach
Directory of Open Access Journals (Sweden)
Sintayehu Dehnie
2009-01-01
Full Text Available Cooperative diversity protocols are designed with the assumption that terminals always help each other in a socially efficient manner. This assumption may not be valid in commercial wireless networks, where terminals may misbehave for selfish or malicious reasons. The presence of misbehaving terminals creates a social dilemma in which terminals exhibit uncertainty about the cooperative behavior of other terminals in the network. Cooperation in a social dilemma is characterized by a suboptimal Nash equilibrium where wireless terminals opt out of cooperation. Hence, without establishing a mechanism to detect and mitigate the effects of misbehavior, it is difficult to maintain socially optimal cooperation. In this paper, we first examine the effects of misbehavior assuming a static game model and show that cooperation under existing cooperative protocols is characterized by a noncooperative Nash equilibrium. Using evolutionary game dynamics we show that a small number of mutants can successfully invade a population of cooperators, which indicates that misbehavior is an evolutionarily stable strategy (ESS). Our main goal is to design a mechanism that would enable wireless terminals to select reliable partners in the presence of uncertainty. To this end, we formulate cooperative diversity as a dynamic game with incomplete information. We show that the proposed dynamic game formulation satisfies the conditions for the existence of a perfect Bayesian equilibrium.
A consortium approach to glass furnace modeling.
Energy Technology Data Exchange (ETDEWEB)
Chang, S.-L.; Golchert, B.; Petrick, M.
1999-04-20
Using computational fluid dynamics to model a glass furnace is a difficult task for any one glass company, laboratory, or university to accomplish. The task of building a computational model of the furnace requires knowledge and experience in modeling two dissimilar regimes (the combustion space and the liquid glass bath), along with the skill necessary to couple these two regimes. Also, a detailed set of experimental data is needed in order to evaluate the output of the code to ensure that the code is providing proper results. Since all these diverse skills are not present in any one research institution, a consortium was formed between Argonne National Laboratory, Purdue University, Mississippi State University, and five glass companies in order to marshal these skills into one three-year program. The objective of this program is to develop a fully coupled, validated simulation of a glass melting furnace that may be used by industry to optimize the performance of existing furnaces.
Mixture modeling approach to flow cytometry data.
Boedigheimer, Michael J; Ferbas, John
2008-05-01
Flow Cytometry has become a mainstay technique for measuring fluorescent and physical attributes of single cells in a suspended mixture. These data are reduced during analysis using a manual or semiautomated process of gating. Despite the need to gate data for traditional analyses, it is well recognized that analyst-to-analyst variability can impact the dataset. Moreover, cells of interest can be inadvertently excluded from the gate, and relationships between collected variables may go unappreciated because they were not included in the original analysis plan. A multivariate non-gating technique was developed and implemented that accomplished the same goal as traditional gating while eliminating many weaknesses. The procedure was validated against traditional gating for analysis of circulating B cells in normal donors (n = 20) and persons with Systemic Lupus Erythematosus (n = 42). The method recapitulated relationships in the dataset while providing for an automated and objective assessment of the data. Flow cytometry analyses are amenable to automated analytical techniques that are not predicated on discrete operator-generated gates. Such alternative approaches can remove subjectivity in data analysis, improve efficiency and may ultimately enable construction of large bioinformatics data systems for more sophisticated approaches to hypothesis testing.
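The abstract does not spell out the multivariate non-gating procedure; one commonly used mixture-model alternative to manual gating, shown here purely as an illustrative sketch, is to fit a Gaussian mixture to the collected parameters and read population fractions off the fitted components:

```python
import numpy as np
from sklearn.mixture import GaussianMixture

# events x parameters matrix (e.g., FSC, SSC, and fluorescence channels);
# random data used here as a stand-in for a real FCS file
events = np.random.rand(10_000, 4)

gmm = GaussianMixture(n_components=5, covariance_type="full", random_state=0)
labels = gmm.fit_predict(events)                  # soft model replaces hard manual gates
fractions = np.bincount(labels) / labels.size     # population fraction per component
```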
BUSINESS MODEL IN ELECTRICITY INDUSTRY USING BUSINESS MODEL CANVAS APPROACH; THE CASE OF PT. XYZ
National Research Council Canada - National Science Library
Wicaksono, Achmad Arief; Syarief, Rizal; Suparno, Ono
2017-01-01
.... This study aims to identify company's business model using Business Model Canvas approach, formulate business development strategy alternatives, and determine the prioritized business development...
"Dispersion modeling approaches for near road
Roadway design and roadside barriers can have significant effects on the dispersion of traffic-generated pollutants, especially in the near-road environment. Dispersion models that can accurately simulate these effects are needed to fully assess these impacts for a variety of app...
and Models: A Self-Similar Approach
Directory of Open Access Journals (Sweden)
José Antonio Belinchón
2013-01-01
equations (FEs) admit self-similar solutions. The methods employed allow us to obtain general results that are valid not only for the FRW metric, but also for all the Bianchi types as well as for the Kantowski-Sachs model (under the self-similarity hypothesis and the power-law hypothesis for the scale factors).
Nonperturbative approach to the modified statistical model
Energy Technology Data Exchange (ETDEWEB)
Magdy, M.A.; Bekmezci, A.; Sever, R. [Middle East Technical Univ., Ankara (Turkey)
1993-12-01
The modified form of the statistical model is used without making any perturbation. The mass spectra of the lowest S, P and D levels of the $(Q\bar{Q})$ and the non-self-conjugate $(Q\bar{q})$ mesons are studied with the Song-Lin potential. The authors' results are in good agreement with the experimental and theoretical findings.
System Behavior Models: A Survey of Approaches
2016-06-01
A moving approach for the Vector Hysteron Model
Energy Technology Data Exchange (ETDEWEB)
Cardelli, E. [Department of Engineering, University of Perugia, Via G. Duranti 93, 06125 Perugia (Italy); Faba, A., E-mail: antonio.faba@unipg.it [Department of Engineering, University of Perugia, Via G. Duranti 93, 06125 Perugia (Italy); Laudani, A. [Department of Engineering, Roma Tre University, Via V. Volterra 62, 00146 Rome (Italy); Quondam Antonio, S. [Department of Engineering, University of Perugia, Via G. Duranti 93, 06125 Perugia (Italy); Riganti Fulginei, F.; Salvini, A. [Department of Engineering, Roma Tre University, Via V. Volterra 62, 00146 Rome (Italy)
2016-04-01
A moving approach for the VHM (Vector Hysteron Model) is described here, to reconstruct both scalar and rotational magnetization of electrical steels with weak anisotropy, such as non-oriented grain silicon steel. The hysteron distribution is postulated to be a function of the magnetization state of the material, in order to overcome the practical limitation of the congruency property of the standard VHM approach. By using this formulation and a suitable accommodation procedure, the results obtained indicate that the model is accurate, in particular in reproducing the experimental behavior approaching the saturation region, allowing a real improvement with respect to the previous approach.
Directory of Open Access Journals (Sweden)
T. Raatikainen
2012-01-01
Full Text Available This paper presents an approach to study droplet activation kinetics from measurements of CCN activity by the Continuous Flow Streamwise Thermal Gradient CCN Chamber (CFSTGC) and a comprehensive model of the instrument and droplet growth. The model is evaluated against a series of experiments with ammonium sulfate calibration aerosol. Observed and model-predicted droplet sizes are in excellent agreement for a water vapor uptake coefficient of ~0.2, which is consistent with theoretical expectations. The model calculations can be considerably accelerated without significant loss of accuracy by assuming simplified instrument geometry and constant parabolic flow velocity profiles. With these assumptions, the model can be applied to large experimental data sets (to infer kinetic growth parameters) while fully accounting for water vapor depletion effects and changes in instrument operation parameters such as the column temperature, flow rates, sheath and sample flow relative humidities, and pressure. When the effects of instrument operation parameters, water vapor depletion and equilibrium dry particle properties on droplet size are accounted for, the remaining variations in droplet size are most likely due to non-equilibrium processes such as those caused by organic surface films, slow solute dissociation and glassy or highly viscous particle states. As an example of model application, data collected during a research flight in the ARCTAS 2008 campaign are analyzed. The model shows that water vapor depletion effects can explain changes in the observed average droplet size.
Directory of Open Access Journals (Sweden)
Lina Zgaga
Full Text Available Vitamin D deficiency has been associated with an increased risk of colorectal cancer (CRC), but a causal relationship has not yet been confirmed. We investigate the direction of causation between vitamin D and CRC by extending the conventional approaches to allow pleiotropic relationships and by explicitly modelling unmeasured confounders. Plasma 25-hydroxyvitamin D (25-OHD), genetic variants associated with 25-OHD and CRC, and other relevant information were available for 2645 individuals (1057 CRC cases and 1588 controls) and included in the model. We investigate whether 25-OHD is likely to be causally associated with CRC, or vice versa, by selecting the best modelling hypothesis according to Bayesian predictive scores. We examine consistency for a range of prior assumptions. Model comparison showed preference for the causal association between low 25-OHD and CRC over the reverse causal hypothesis. This was confirmed for posterior mean deviances obtained for both models (11.5 natural log units in favour of the causal model), and also for deviance information criteria (DIC) computed for a range of prior distributions. Overall, models ignoring hidden confounding or pleiotropy had significantly poorer DIC scores. The results suggest a causal association between 25-OHD and colorectal cancer, and support the need for randomised clinical trials for further confirmation.
Model-free prediction and regression a transformation-based approach to inference
Politis, Dimitris N
2015-01-01
The Model-Free Prediction Principle expounded upon in this monograph is based on the simple notion of transforming a complex dataset to one that is easier to work with, e.g., i.i.d. or Gaussian. As such, it restores the emphasis on observable quantities, i.e., current and future data, as opposed to unobservable model parameters and estimates thereof, and yields optimal predictors in diverse settings such as regression and time series. Furthermore, the Model-Free Bootstrap takes us beyond point prediction in order to construct frequentist prediction intervals without resort to unrealistic assumptions such as normality. Prediction has been traditionally approached via a model-based paradigm, i.e., (a) fit a model to the data at hand, and (b) use the fitted model to extrapolate/predict future data. Due to both mathematical and computational constraints, 20th century statistical practice focused mostly on parametric models. Fortunately, with the advent of widely accessible powerful computing in the late 1970s, co...
Integration models: multicultural and liberal approaches confronted
Janicki, Wojciech
2012-01-01
European societies have been shaped by their Christian past, an upsurge of international migration, democratic rule and a liberal tradition rooted in religious tolerance. Accelerating globalization processes impose new challenges on European societies striving to protect their diversity. This struggle is especially clearly visible in the case of minorities trying to resist melting into the mainstream culture. European countries' legal systems and cultural policies respond to these efforts in many ways. Respecting identity-politics-driven group rights seems to be the most common approach, resulting in the creation of a multicultural society. However, the outcome of respecting group rights may be remarkably contradictory both to individual rights growing out of the liberal tradition, and to the reinforced concept of integration of immigrants into host societies. This paper discusses the upturn of identity politics in the context of both individual rights and the integration of European societies.
Dynamic Group Diffie-Hellman Key Exchange under standard assumptions
Energy Technology Data Exchange (ETDEWEB)
Bresson, Emmanuel; Chevassut, Olivier; Pointcheval, David
2002-02-14
Authenticated Diffie-Hellman key exchange allows two principals communicating over a public network, and each holding public-private keys, to agree on a shared secret value. In this paper we study the natural extension of this cryptographic problem to a group of principals. We begin from existing formal security models and refine them to incorporate major missing details (e.g., strong-corruption and concurrent sessions). Within this model we define the execution of a protocol for authenticated dynamic group Diffie-Hellman and show that it is provably secure under the decisional Diffie-Hellman assumption. Our security result holds in the standard model and thus provides better security guarantees than previously published results in the random oracle model.
ISM Approach to Model Offshore Outsourcing Risks
Directory of Open Access Journals (Sweden)
Sunand Kumar
2014-07-01
Full Text Available In an effort to achieve a competitive advantage via cost reductions and improved market responsiveness, organizations are increasingly employing offshore outsourcing as a major component of their supply chain strategies. But, as is evident from the literature, a number of risks, such as political risk, risk due to cultural differences, compliance and regulatory risk, opportunistic risk and organizational structural risk, adversely affect the performance of offshore outsourcing in a supply chain network. This also leads to dissatisfaction among different stakeholders. The main objective of this paper is to identify and understand the mutual interaction among the various risks which affect the performance of offshore outsourcing. To this effect, the authors have identified various risks through an extensive review of the literature. From this information, an integrated model of the risks affecting offshore outsourcing is developed using interpretive structural modelling (ISM), and the structural relationships between these risks are modeled. Further, MICMAC analysis is performed to analyze the driving power and dependence of the risks, which helps managers to identify and classify important criteria and to reveal the direct and indirect effects of each criterion on offshore outsourcing. Results show that political risk and risk due to cultural differences act as strong drivers.
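A small sketch of the MICMAC step, assuming the final binary reachability matrix has already been produced by the ISM procedure (the matrix below is illustrative, not the paper's): driving power is the row sum and dependence the column sum for each risk.

```python
import numpy as np

# Illustrative 5x5 final reachability matrix for five risks (1 = risk i leads to risk j);
# the actual matrix would come from expert judgements plus transitivity checks.
R = np.array([
    [1, 1, 1, 1, 1],   # political risk
    [0, 1, 1, 1, 1],   # cultural differences
    [0, 0, 1, 1, 0],   # compliance / regulatory
    [0, 0, 0, 1, 0],   # opportunistic
    [0, 0, 1, 1, 1],   # organizational structure
])

driving_power = R.sum(axis=1)   # how many risks each risk influences
dependence = R.sum(axis=0)      # by how many risks each risk is influenced
```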
Studies on the effect of flaw detection probability assumptions on risk reduction at inspection
Energy Technology Data Exchange (ETDEWEB)
Simola, K.; Cronvall, O.; Maennistoe, I. (VTT Technical Research Centre of Finland (Finland)); Gunnars, J.; Alverlind, L.; Dillstroem, P. (Inspecta Technology, Stockholm (Sweden)); Gandossi, L. (European Commission Joint Research Centre, Brussels (Belgium))
2009-12-15
The aim of the project was to study the effect of POD assumptions on failure probability using structural reliability models. The main interest was to investigate whether it is justifiable to use a simplified POD curve e.g. in risk-informed in-service inspection (RI-ISI) studies. The results of the study indicate that the use of a simplified POD curve could be justifiable in RI-ISI applications. Another aim was to compare various structural reliability calculation approaches for a set of cases. Through benchmarking one can identify differences and similarities between modelling approaches, and provide added confidence on models and identify development needs. Comparing the leakage probabilities calculated by different approaches at the end of plant lifetime (60 years) shows that the results are very similar when inspections are not accounted for. However, when inspections are taken into account the predicted order of magnitude differs. Further studies would be needed to investigate the reasons for the differences. Development needs and plans for the benchmarked structural reliability models are discussed. (author)
Temporal Distinctiveness in Task Switching: Assessing the Mixture-Distribution Assumption
Directory of Open Access Journals (Sweden)
James A Grange
2016-02-01
Full Text Available In task switching, increasing the response-cue interval (RCI) has been shown to reduce the switch cost. This has been attributed to a time-based decay process influencing the activation of memory representations of tasks (task-sets). Recently, an alternative account based on interference rather than decay has been successfully applied to these data (Horoufchin et al., 2011). In this account, variation of the RCI is thought to influence the temporal distinctiveness (TD) of episodic traces in memory, thus affecting their retrieval probability. This can affect performance because retrieval probability influences response time: if retrieval succeeds, responding is fast due to positive priming; if retrieval fails, responding is slow, due to having to perform the task via a slow algorithmic process. This account, and a recent formal model (Grange & Cross, 2015), makes the strong prediction that all RTs are a mixture of one of two processes: a fast process when retrieval succeeds, and a slow process when retrieval fails. The present paper assesses the evidence for this mixture-distribution assumption in TD data. In a first section, statistical evidence for mixture distributions is found using the fixed-point property test. In a second section, a mathematical process model with mixture distributions at its core is fitted to the response time distribution data. Both approaches provide good evidence in support of the mixture-distribution assumption, and thus support temporal distinctiveness accounts of the data.
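A toy simulation of the mixture-distribution assumption, with hypothetical parameters rather than those of the fitted process model: each response time comes either from a fast retrieval-success process or a slow algorithmic process, and the retrieval probability (driven by temporal distinctiveness) sets the mixing proportion.

```python
import numpy as np

def simulate_rts(n, p_retrieve, rng=None):
    """Mixture of a fast (retrieval success) and a slow (algorithmic) RT process."""
    rng = rng or np.random.default_rng(1)
    fast = rng.lognormal(mean=np.log(450), sigma=0.15, size=n)   # ms, illustrative
    slow = rng.lognormal(mean=np.log(900), sigma=0.20, size=n)
    retrieved = rng.random(n) < p_retrieve
    return np.where(retrieved, fast, slow)

# Longer RCI -> lower temporal distinctiveness -> lower retrieval probability.
rt_short_rci = simulate_rts(5000, p_retrieve=0.8)
rt_long_rci = simulate_rts(5000, p_retrieve=0.5)
print(rt_short_rci.mean(), rt_long_rci.mean())
```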
Dynamic Metabolic Model Building Based on the Ensemble Modeling Approach
Energy Technology Data Exchange (ETDEWEB)
Liao, James C. [Univ. of California, Los Angeles, CA (United States)
2016-10-01
Ensemble modeling of kinetic systems addresses the challenges of kinetic model construction, with respect to parameter value selection, and still allows for the rich insights possible from kinetic models. This project aimed to show that constructing, implementing, and analyzing such models is a useful tool for the metabolic engineering toolkit, and that they can result in actionable insights from models. Key concepts are developed and deliverable publications and results are presented.
Towards New Probabilistic Assumptions in Business Intelligence
Directory of Open Access Journals (Sweden)
Schumann Andrew
2015-01-01
Full Text Available One of the main assumptions of mathematical tools in science is represented by the idea of measurability and additivity of reality. For discovering the physical universe, additive measures such as mass, force, energy, temperature, etc. are used. Economics and conventional business intelligence try to continue this empiricist tradition, and in statistical and econometric tools they appeal only to the measurable aspects of reality. However, many important variables of economic systems are not observable or additive in principle. These variables can be called symbolic values or symbolic meanings and studied within symbolic interactionism, the theory developed by George Herbert Mead and Herbert Blumer. In the statistical and econometric tools of business intelligence we accept only phenomena with causal connections measured by additive measures. In this paper we show that in the social world we deal with symbolic interactions which can be studied by non-additive labels (symbolic meanings or symbolic values). For accepting the variety of such phenomena we should avoid additivity of basic labels and construct a new probabilistic method in business intelligence based on non-Archimedean probabilities.
A modular approach to numerical human body modeling
Forbes, P.A.; Griotto, G.; Rooij, L. van
2007-01-01
The choice of a human body model for a simulated automotive impact scenario must take into account both accurate model response and computational efficiency as key factors. This study presents a "modular numerical human body modeling" approach which allows the creation of a customized human body mod
A BEHAVIORAL-APPROACH TO LINEAR EXACT MODELING
ANTOULAS, AC; WILLEMS, JC
1993-01-01
The behavioral approach to system theory provides a parameter-free framework for the study of the general problem of linear exact modeling and recursive modeling. The main contribution of this paper is the solution of the (continuous-time) polynomial-exponential time series modeling problem. Both re
A market model for stochastic smile: a conditional density approach
Zilber, A.
2005-01-01
The purpose of this paper is to introduce a new approach that allows us to construct no-arbitrage market models for implied volatility surfaces (in other words, stochastic smile models). That is to say, the idea presented here allows us to model prices of liquidly traded vanilla options as separate
Economic Growth Assumptions in Climate and Energy Policy
Directory of Open Access Journals (Sweden)
Nir Y. Krakauer
2014-03-01
Full Text Available The assumption that the economic growth seen in recent decades will continue has dominated the discussion of future greenhouse gas emissions and the mitigation of and adaptation to climate change. Given that long-term economic growth is uncertain, the impacts of a wide range of growth trajectories should be considered. In particular, slower economic growth would imply that future generations will be relatively less able to invest in emissions controls or adapt to the detrimental impacts of climate change. Taking into consideration the possibility of economic slowdown therefore heightens the urgency of reducing greenhouse gas emissions now by moving to renewable energy sources, even if this incurs short-term economic cost. I quantify this counterintuitive impact of economic growth assumptions on present-day policy decisions in a simple global economy-climate model (the Dynamic Integrated model of Climate and the Economy, DICE). In DICE, slow future growth increases the economically optimal present-day carbon tax rate and the utility of taxing carbon emissions, although the magnitude of the increase is sensitive to model parameters, including the rate of social time preference and the elasticity of the marginal utility of consumption. Future scenario development should specifically include low-growth scenarios, and the possibility of low-growth economic trajectories should be taken into account in climate policy analyses.
Thermoplasmonics modeling: A Green's function approach
Baffou, Guillaume; Quidant, Romain; Girard, Christian
2010-10-01
We extend the discrete dipole approximation (DDA) and the Green’s dyadic tensor (GDT) methods, previously dedicated to all-optical simulations, to investigate the thermodynamics of illuminated plasmonic nanostructures. This extension is based on the use of the thermal Green’s function and an original algorithm that we named Laplace matrix inversion. It allows for the computation of the steady-state temperature distribution throughout plasmonic systems. This hybrid photothermal numerical method is suited to investigate arbitrarily complex structures. It can take into account the presence of a dielectric planar substrate and is simple to implement in any DDA or GDT code. Using this numerical framework, different applications are discussed, such as thermal collective effects in nanoparticle assemblies, the influence of a substrate on the temperature distribution and the heat generation in a plasmonic nanoantenna. This numerical approach appears particularly suited for new applications in physics, chemistry, and biology such as plasmon-induced nanochemistry and catalysis, nanofluidics, photothermal cancer therapy, or phase-transition control at the nanoscale.
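The superposition step can be illustrated with a short sketch (assuming a homogeneous, infinite medium and ignoring the substrate and the self-consistent coupling handled by the full method): the steady-state temperature rise at an observation point is the sum of thermal Green's-function contributions q / (4*pi*kappa*r) from each discretized heat source.

```python
import numpy as np

def temperature_rise(points, sources, powers, kappa):
    """Steady-state temperature rise (K) at `points` from point heat sources.

    points  : (M, 3) observation positions [m]
    sources : (N, 3) heat source positions [m]
    powers  : (N,) dissipated power per source [W]
    kappa   : thermal conductivity of the surrounding medium [W/(m K)]
    """
    d = np.linalg.norm(points[:, None, :] - sources[None, :, :], axis=-1)
    return (powers / (4.0 * np.pi * kappa * d)).sum(axis=1)

# Illustrative example: two 1-uW sources in water (kappa ~ 0.6 W/(m K)).
sources = np.array([[0.0, 0.0, 0.0], [50e-9, 0.0, 0.0]])
powers = np.array([1e-6, 1e-6])
obs = np.array([[25e-9, 0.0, 20e-9]])
print(temperature_rise(obs, sources, powers, kappa=0.6))
```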
Agribusiness model approach to territorial food development
Directory of Open Access Journals (Sweden)
Murcia Hector Horacio
2011-04-01
Full Text Available
Several research efforts have been coordinated by the academic program of Agricultural Business Management of the University De La Salle (Bogota D.C.) toward the design and implementation of a sustainable agribusiness model applied to food development, with territorial projection. Rural development is considered as a process that aims to improve the current capacity and potential of the inhabitants of the sector, which refers not only to production levels and the productivity of agricultural items. It takes into account the guidelines of the United Nations "Millennium Development Goals" and considers the concept of sustainable food and agriculture development, including food security and nutrition in an integrated interdisciplinary context, with a holistic and systemic dimension. The analysis is specified by a model with an emphasis on sustainable agribusiness production chains related to agricultural food items in a specific region. This model was correlated with farm (technical objectives), family (social purposes) and community (collective orientations) projects. Within this dimension, food development concepts and methodologies of Participatory Action Research (PAR) are considered. Finally, it addresses the need to link the results to low-income communities, within the concepts of the "new rurality".
Coupling approaches used in atmospheric entry models
Gritsevich, M. I.
2012-09-01
While a planet orbits the Sun, it is subject to impact by smaller objects, ranging from tiny dust particles and space debris to much larger asteroids and comets. Such collisions have taken place frequently over geological time and have played an important role in the evolution of planets and the development of life on the Earth. Though the search for near-Earth objects addresses one of the main points of the Asteroid and Comet Hazard, one should not underestimate the useful information to be gleaned from smaller atmospheric encounters, known as meteors or fireballs. Not only do these events help determine the linkages between meteorites and their parent bodies; due to their relative regularity they provide a good statistical basis for analysis. For successful cases with recovered meteorites, the detailed atmospheric path record is an excellent tool to test and improve existing entry models, assuring the robustness of their implementation. There are many more important scientific questions meteoroids help us to answer, among them: Where do these objects come from, and what are their origins, physical properties and chemical composition? What are the shapes and bulk densities of the space objects which fully ablate in an atmosphere and do not reach the planetary surface? Which values are directly measured and which are initially assumed as input to various models? How to couple both fragmentation and ablation effects in the model, taking the real size distribution of fragments into account? How to specify and speed up the recovery of recently fallen meteorites, without letting weathering affect the samples too much? How big is the pre-atmospheric projectile to terminal body ratio in terms of their mass/volume? Which exact parameters besides the initial mass define this ratio? More generally, how does an entering object affect Earth's atmosphere and (if applicable) Earth's surface? How to predict these impact consequences based on atmospheric trajectory data? How to describe atmospheric entry
Applied Regression Modeling A Business Approach
Pardoe, Iain
2012-01-01
An applied and concise treatment of statistical regression techniques for business students and professionals who have little or no background in calculus. Regression analysis is an invaluable statistical methodology in business settings and is vital to model the relationship between a response variable and one or more predictor variables, as well as the prediction of a response value given values of the predictors. In view of the inherent uncertainty of business processes, such as the volatility of consumer spending and the presence of market uncertainty, business professionals use regression a
Bayesian Approach to Neuro-Rough Models for Modelling HIV
Marwala, Tshilidzi
2007-01-01
This paper proposes a new neuro-rough model for modelling the risk of HIV from demographic data. The model is formulated using a Bayesian framework and trained using the Markov Chain Monte Carlo method and the Metropolis criterion. When the model was tested to estimate the risk of HIV infection given the demographic data, it was found to give an accuracy of 62%, as opposed to 58% obtained from a Bayesian-formulated rough set model trained using the Markov chain Monte Carlo method and 62% obtained from a Bayesian-formulated multi-layered perceptron (MLP) model trained using hybrid Monte Carlo. The proposed model is able to combine the accuracy of the Bayesian MLP model and the transparency of the Bayesian rough set model.
Development of a computationally efficient urban modeling approach
DEFF Research Database (Denmark)
Wolfs, Vincent; Murla, Damian; Ntegeka, Victor;
2016-01-01
This paper presents a parsimonious and data-driven modelling approach to simulate urban floods. Flood levels simulated by detailed 1D-2D hydrodynamic models can be emulated using the presented conceptual modelling approach with a very short calculation time. In addition, the model detail can be adjusted, allowing the modeller to focus on flood-prone locations. This results in efficiently parameterized models that can be tailored to applications. The simulated flood levels are transformed into flood extent maps using a high resolution (0.5-meter) digital terrain model in GIS. To illustrate the developed methodology, a case study for the city of Ghent in Belgium is elaborated. The configured conceptual model mimics the flood levels of a detailed 1D-2D hydrodynamic InfoWorks ICM model accurately, while the calculation time is on the order of 10^6 times shorter than the original highly
Implicit moral evaluations: A multinomial modeling approach.
Cameron, C Daryl; Payne, B Keith; Sinnott-Armstrong, Walter; Scheffer, Julian A; Inzlicht, Michael
2017-01-01
Implicit moral evaluations-i.e., immediate, unintentional assessments of the wrongness of actions or persons-play a central role in supporting moral behavior in everyday life. Yet little research has employed methods that rigorously measure individual differences in implicit moral evaluations. In five experiments, we develop a new sequential priming measure-the Moral Categorization Task-and a multinomial model that decomposes judgment on this task into multiple component processes. These include implicit moral evaluations of moral transgression primes (Unintentional Judgment), accurate moral judgments about target actions (Intentional Judgment), and a directional tendency to judge actions as morally wrong (Response Bias). Speeded response deadlines reduced Intentional Judgment but not Unintentional Judgment (Experiment 1). Unintentional Judgment was stronger toward moral transgression primes than non-moral negative primes (Experiments 2-4). Intentional Judgment was associated with increased error-related negativity, a neurophysiological indicator of behavioral control (Experiment 4). Finally, people who voted for an anti-gay marriage amendment had stronger Unintentional Judgment toward gay marriage primes (Experiment 5). Across Experiments 1-4, implicit moral evaluations converged with moral personality: Unintentional Judgment about wrong primes, but not negative primes, was negatively associated with psychopathic tendencies and positively associated with moral identity and guilt proneness. Theoretical and practical applications of formal modeling for moral psychology are discussed. Copyright © 2016 Elsevier B.V. All rights reserved.
Continuous Molecular Fields Approach Applied to Structure-Activity Modeling
Baskin, Igor I
2013-01-01
The Method of Continuous Molecular Fields is a universal approach to predict various properties of chemical compounds, in which molecules are represented by means of continuous fields (such as electrostatic, steric, electron density functions, etc). The essence of the proposed approach consists in performing statistical analysis of functional molecular data by means of joint application of kernel machine learning methods and special kernels which compare molecules by computing overlap integrals of their molecular fields. This approach is an alternative to traditional methods of building 3D structure-activity and structure-property models based on the use of fixed sets of molecular descriptors. The methodology of the approach is described in this chapter, followed by its application to building regression 3D-QSAR models and conducting virtual screening based on one-class classification models. The main directions of the further development of this approach are outlined at the end of the chapter.
Statistical Tests of the PTHA Poisson Assumption for Submarine Landslides
Geist, E. L.; Chaytor, J. D.; Parsons, T.; Ten Brink, U. S.
2012-12-01
We demonstrate that a sequence of dated mass transport deposits (MTDs) can provide information to statistically test whether or not submarine landslides associated with these deposits conform to a Poisson model of occurrence. Probabilistic tsunami hazard analysis (PTHA) most often assumes Poissonian occurrence for all sources, with an exponential distribution of return times. Using dates that define the bounds of individual MTDs, we first describe likelihood and Monte Carlo methods of parameter estimation for a suite of candidate occurrence models (Poisson, lognormal, gamma, Brownian Passage Time). In addition to age-dating uncertainty, both methods incorporate uncertainty caused by the open time intervals: i.e., before the first and after the last event to the present. Accounting for these open intervals is critical when there are a small number of observed events. The optimal occurrence model is selected according to both the Akaike Information Criteria (AIC) and Akaike's Bayesian Information Criterion (ABIC). In addition, the likelihood ratio test can be performed on occurrence models from the same family: e.g., the gamma model relative to the exponential model of return time distribution. Parameter estimation, model selection, and hypothesis testing are performed on data from two IODP holes in the northern Gulf of Mexico that penetrated a total of 14 MTDs, some of which are correlated between the two holes. Each of these events has been assigned an age based on microfossil zonations and magnetostratigraphic datums. Results from these sites indicate that the Poisson assumption is likely valid. However, parameter estimation results using the likelihood method for one of the sites suggest that the events may have occurred quasi-periodically. Methods developed in this study provide tools with which one can determine both the rate of occurrence and the statistical validity of the Poisson assumption when submarine landslides are included in PTHA.
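A simplified sketch of the model-selection step, ignoring the open time intervals and age-dating uncertainty that the study treats explicitly: fit exponential (Poisson) and gamma return-time distributions to inter-event times by maximum likelihood and compare AIC.

```python
import numpy as np
from scipy import stats

inter_event_times = np.array([12.0, 8.5, 15.2, 9.9, 11.3, 14.1, 10.7])  # kyr, illustrative

def aic(logl, k):
    return 2 * k - 2 * logl

# Exponential (Poisson process) fit: MLE rate is 1 / mean inter-event time.
lam = 1.0 / inter_event_times.mean()
logl_exp = np.sum(stats.expon.logpdf(inter_event_times, scale=1.0 / lam))
aic_exp = aic(logl_exp, k=1)

# Gamma fit (location fixed at zero); exponential is the special case shape = 1.
a, loc, scale = stats.gamma.fit(inter_event_times, floc=0)
logl_gam = np.sum(stats.gamma.logpdf(inter_event_times, a, loc=loc, scale=scale))
aic_gam = aic(logl_gam, k=2)

print("AIC exponential:", aic_exp, " AIC gamma:", aic_gam)
```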
A forward modeling approach for interpreting impeller flow logs.
Parker, Alison H; West, L Jared; Odling, Noelle E; Bown, Richard T
2010-01-01
A rigorous and practical approach for interpretation of impeller flow log data to determine vertical variations in hydraulic conductivity is presented and applied to two well logs from a Chalk aquifer in England. Impeller flow logging involves measuring vertical flow speed in a pumped well and using changes in flow with depth to infer the locations and magnitudes of inflows into the well. However, the measured flow logs are typically noisy, which leads to spurious hydraulic conductivity values where simplistic interpretation approaches are applied. In this study, a new method for interpretation is presented, which first defines a series of physical models for hydraulic conductivity variation with depth and then fits the models to the data, using a regression technique. Some of the models will be rejected as they are physically unrealistic. The best model is then selected from the remaining models using a maximum likelihood approach. This balances model complexity against fit, for example, using Akaike's Information Criterion.
An Adaptive Approach to Schema Classification for Data Warehouse Modeling
Institute of Scientific and Technical Information of China (English)
Hong-Ding Wang; Yun-Hai Tong; Shao-Hua Tan; Shi-Wei Tang; Dong-Qing Yang; Guo-Hui Sun
2007-01-01
Data warehouse (DW) modeling is a complicated task, involving both knowledge of business processes and familiarity with operational information systems structure and behavior. Existing DW modeling techniques suffer from the following major drawbacks: the data-driven approach requires high levels of expertise and neglects the requirements of end users, while the demand-driven approach lacks an enterprise-wide vision and disregards existing models of the underlying operational systems. In order to make up for those shortcomings, a method of classification of schema elements for DW modeling is proposed in this paper. We first put forward the vector space models for subjects and schema elements, then present an adaptive approach with self-tuning theory to construct context vectors of subjects, and finally classify the source schema elements into different subjects of the DW automatically. Benefiting from the result of the schema element classification, designers can model and construct a DW more easily.
Local conservation scores without a priori assumptions on neutral substitution rates
Directory of Open Access Journals (Sweden)
Hagenauer Joachim
2008-04-01
Full Text Available Abstract Background: Comparative genomics aims to detect signals of evolutionary conservation as an indicator of functional constraint. Surprisingly, results of the ENCODE project revealed that about half of the experimentally verified functional elements found in non-coding DNA were classified as unconstrained by computational predictions. Following this observation, it has been hypothesized that this may be partly explained by biased estimates of the neutral evolutionary rates used by existing sequence conservation metrics. All methods we are aware of rely on a comparison with the neutral rate, and conservation is estimated by measuring the deviation of a particular genomic region from this rate. Consequently, it is a reasonable assumption that inaccurate neutral rate estimates may lead to biased conservation and constraint estimates. Results: We propose a conservation signal that is produced by local Maximum Likelihood estimation of evolutionary parameters using an optimized sliding window, and present a Kullback-Leibler projection that allows multiple different estimated parameters to be transformed into a conservation measure. This conservation measure does not rely on assumptions about neutral evolutionary substitution rates, and few a priori assumptions on the properties of the conserved regions are imposed. We show the accuracy of our approach (KuLCons) on synthetic data and compare it to the scores generated by state-of-the-art methods (phastCons, GERP, SCONE) in an ENCODE region. We find that KuLCons is most often in agreement with the conservation/constraint signatures detected by GERP and SCONE, while qualitatively very different patterns from phastCons are observed. As opposed to standard methods, KuLCons can be extended to more complex evolutionary models, e.g. taking insertion and deletion events into account, and corresponding results show that scores obtained under this model can diverge significantly from scores using the simpler model
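As a drastically simplified stand-in for the method (a two-sequence identity track and a Bernoulli substitution model rather than the full phylogenetic likelihood used by KuLCons), a sliding-window Kullback-Leibler score against a background mismatch rate can be sketched as follows:

```python
import numpy as np

def kl_bernoulli(p, q, eps=1e-9):
    """KL divergence between two Bernoulli distributions with parameters p and q."""
    p, q = np.clip(p, eps, 1 - eps), np.clip(q, eps, 1 - eps)
    return p * np.log(p / q) + (1 - p) * np.log((1 - p) / (1 - q))

def window_scores(matches, window=50):
    """Conservation-like score per window: KL between local and global mismatch rates.

    matches : boolean array, True where aligned columns are identical.
    """
    background = 1.0 - matches.mean()                       # global mismatch rate
    scores = []
    for start in range(0, matches.size - window + 1):
        local = 1.0 - matches[start:start + window].mean()  # local mismatch rate
        # only reward windows that are *more* conserved than the background
        scores.append(kl_bernoulli(local, background) if local < background else 0.0)
    return np.array(scores)

# toy pairwise-alignment identity track
matches = np.random.rand(2000) < 0.85
scores = window_scores(matches)
```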
A Networks Approach to Modeling Enzymatic Reactions.
Imhof, P
2016-01-01
Modeling enzymatic reactions is a demanding task due to the complexity of the system, the many degrees of freedom involved and the complex, chemical, and conformational transitions associated with the reaction. Consequently, enzymatic reactions are not determined by precisely one reaction pathway. Hence, it is beneficial to obtain a comprehensive picture of possible reaction paths and competing mechanisms. By combining individually generated intermediate states and chemical transition steps a network of such pathways can be constructed. Transition networks are a discretized representation of a potential energy landscape consisting of a multitude of reaction pathways connecting the end states of the reaction. The graph structure of the network allows an easy identification of the energetically most favorable pathways as well as a number of alternative routes.
Genetic Algorithm Approaches to Prebiotic Chemistry Modeling
Lohn, Jason; Colombano, Silvano
1997-01-01
We model an artificial chemistry comprised of interacting polymers by specifying two initial conditions: a distribution of polymers and a fixed set of reversible catalytic reactions. A genetic algorithm is used to find a set of reactions that exhibit a desired dynamical behavior. Such a technique is useful because it allows an investigator to determine whether a specific pattern of dynamics can be produced, and if it can, the reaction network found can be then analyzed. We present our results in the context of studying simplified chemical dynamics in theorized protocells - hypothesized precursors of the first living organisms. Our results show that given a small sample of plausible protocell reaction dynamics, catalytic reaction sets can be found. We present cases where this is not possible and also analyze the evolved reaction sets.
Modeling Approaches for Describing Microbial Population Heterogeneity
DEFF Research Database (Denmark)
Lencastre Fernandes, Rita
Although microbial populations are typically described by averaged properties, individual cells present a certain degree of variability. Indeed, initially clonal microbial populations develop into heterogeneous populations, even when growing in a homogeneous environment. A heterogeneous microbial … to predict distributions of certain population properties including particle size, mass or volume, and molecular weight. Similarly, PBM allow for a mathematical description of distributed cell properties within microbial populations. Cell total protein content distributions (a measure of cell mass) have been … ethanol and biomass throughout the reactor. This work has proven that the integration of CFD and population balance models, for describing the growth of a microbial population in a spatially heterogeneous reactor, is feasible, and that valuable insight on the interplay between flow and the dynamics …
Hamiltonian approach to hybrid plasma models
Tronci, Cesare
2010-01-01
The Hamiltonian structures of several hybrid kinetic-fluid models are identified explicitly, upon considering collisionless Vlasov dynamics for the hot particles interacting with a bulk fluid. After presenting different pressure-coupling schemes for an ordinary fluid interacting with a hot gas, the paper extends the treatment to account for a fluid plasma interacting with an energetic ion species. Both current-coupling and pressure-coupling MHD schemes are treated extensively. In particular, pressure-coupling schemes are shown to require a transport-like term in the Vlasov kinetic equation, in order for the Hamiltonian structure to be preserved. The last part of the paper is devoted to studying the more general case of an energetic ion species interacting with a neutralizing electron background (hybrid Hall-MHD). Circulation laws and Casimir functionals are presented explicitly in each case.
Modeling of phase equilibria with CPA using the homomorph approach
DEFF Research Database (Denmark)
Breil, Martin Peter; Tsivintzelis, Ioannis; Kontogeorgis, Georgios
2011-01-01
For association models, like CPA and SAFT, a classical approach is often used for estimating pure-compound and mixture parameters. According to this approach, the pure-compound parameters are estimated from vapor pressure and liquid density data. Then, the binary interaction parameters, kij, are ...
A Constructive Neural-Network Approach to Modeling Psychological Development
Shultz, Thomas R.
2012-01-01
This article reviews a particular computational modeling approach to the study of psychological development--that of constructive neural networks. This approach is applied to a variety of developmental domains and issues, including Piagetian tasks, shift learning, language acquisition, number comparison, habituation of visual attention, concept…
Modular Modelling and Simulation Approach - Applied to Refrigeration Systems
DEFF Research Database (Denmark)
Sørensen, Kresten Kjær; Stoustrup, Jakob
2008-01-01
This paper presents an approach to modelling and simulation of the thermal dynamics of a refrigeration system, specifically a reefer container. A modular approach is used and the objective is to increase the speed and flexibility of the developed simulation environment. The refrigeration system...
Pattern-based approach for logical traffic isolation forensic modelling
CSIR Research Space (South Africa)
Dlamini, I
2009-08-01
Full Text Available The use of design patterns usually changes the approach to software design and makes software development relatively easy. This paper extends work on a forensic model for Logical Traffic Isolation (LTI) based on Differentiated Services (Diff...
A semantic-web approach for modeling computing infrastructures
M. Ghijsen; J. van der Ham; P. Grosso; C. Dumitru; H. Zhu; Z. Zhao; C. de Laat
2013-01-01
This paper describes our approach to modeling computing infrastructures. Our main contribution is the Infrastructure and Network Description Language (INDL) ontology. The aim of INDL is to provide technology independent descriptions of computing infrastructures, including the physical resources as w
Bayesian approach to decompression sickness model parameter estimation.
Howle, L E; Weber, P W; Nichols, J M
2017-03-01
We examine both maximum likelihood and Bayesian approaches for estimating probabilistic decompression sickness model parameters. Maximum likelihood estimation treats parameters as fixed values and determines the best estimate through repeated trials, whereas the Bayesian approach treats parameters as random variables and determines the parameter probability distributions. We would ultimately like to know the probability that a parameter lies in a certain range rather than simply make statements about the repeatability of our estimator. Although both represent powerful methods of inference, for models with complex or multi-peaked likelihoods, maximum likelihood parameter estimates can prove more difficult to interpret than the estimates of the parameter distributions provided by the Bayesian approach. For models of decompression sickness, we show that while these two estimation methods are complementary, the credible intervals generated by the Bayesian approach are more naturally suited to quantifying uncertainty in the model parameters.
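A toy contrast between the two estimation philosophies, using a simple binomial incidence example rather than an actual decompression sickness model (the counts below are illustrative): the MLE is a single point estimate, while the Bayesian posterior yields a full distribution and hence a credible interval.

```python
import numpy as np
from scipy import stats

n_dives, n_dcs = 200, 7            # illustrative trial counts, not real data

# Maximum likelihood: point estimate of the per-dive DCS probability.
p_mle = n_dcs / n_dives

# Bayesian: Beta(1, 1) prior gives a Beta(1 + n_dcs, 1 + n_dives - n_dcs) posterior.
posterior = stats.beta(1 + n_dcs, 1 + n_dives - n_dcs)
ci_low, ci_high = posterior.ppf([0.025, 0.975])   # 95% credible interval

print(f"MLE = {p_mle:.3f}, 95% credible interval = ({ci_low:.3f}, {ci_high:.3f})")
```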
Linear irreversible heat engines based on local equilibrium assumptions
Izumida, Yuki; Okuda, Koji
2015-08-01
We formulate an endoreversible finite-time Carnot cycle model based on the assumptions of local equilibrium and constant energy flux, where the efficiency and the power are expressed in terms of the thermodynamic variables of the working substance. By analyzing the entropy production rate caused by the heat transfer in each isothermal process during the cycle, and using the endoreversible condition applied to the linear response regime, we identify the thermodynamic flux and force of the present system and obtain a linear relation that connects them. We calculate the efficiency at maximum power in the linear response regime by using the linear relation, which agrees with the Curzon-Ahlborn (CA) efficiency known as the upper bound in this regime. The reason for this is also elucidated by rewriting our model into the form of the Onsager relations, where our model turns out to satisfy the tight-coupling condition leading to the CA efficiency.
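For reference, the Curzon-Ahlborn efficiency recovered at maximum power, and its expansion for small Carnot efficiency (the linear response regime), is

$$\eta_{\mathrm{CA}} = 1 - \sqrt{\frac{T_c}{T_h}} = \frac{\eta_C}{2} + \frac{\eta_C^2}{8} + O(\eta_C^3), \qquad \eta_C = 1 - \frac{T_c}{T_h},$$

where $T_c$ and $T_h$ are the cold- and hot-reservoir temperatures; the leading term $\eta_C/2$ is the linear response value the abstract refers to.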
Modelling road accidents: An approach using structural time series
Junus, Noor Wahida Md; Ismail, Mohd Tahir
2014-09-01
In this paper, the trend of road accidents in Malaysia from 2001 to 2012 was modelled using a structural time series approach. The structural time series model was identified using a stepwise method, and the residuals for each model were tested. The best-fitted model was chosen based on the smallest Akaike Information Criterion (AIC) and prediction error variance. In order to check the quality of the model, a data validation procedure was performed by predicting the monthly number of road accidents for the year 2012. Results indicate that the best specification of the structural time series model to represent road accidents is a local level model with a seasonal component.
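A hedged sketch of fitting that specification (local level plus a monthly seasonal component) with the statsmodels UnobservedComponents class, using synthetic stand-in data; the paper's stepwise identification and validation procedure is not reproduced here.

```python
import numpy as np
import statsmodels.api as sm

# Monthly accident counts, Jan 2001 - Dec 2011 (synthetic stand-in data).
t = np.arange(132)
y = (30000 + 50 * t + 500 * np.sin(2 * np.pi * t / 12)
     + np.random.normal(0, 300, 132))

model = sm.tsa.UnobservedComponents(y, level="local level", seasonal=12)
result = model.fit(disp=False)
print(result.aic)                            # used to compare candidate specifications
forecast_2012 = result.forecast(steps=12)    # out-of-sample check against observed 2012
```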
Tests of the frozen-flux and tangentially geostrophic assumptions using magnetic satellite data
DEFF Research Database (Denmark)
Chulliat, A.; Olsen, Nils; Sabaka, T.
the very large number of flows explaining the observed secular variation under the frozen-flux assumption alone. More recently, it has been shown that the combined frozen-flux and tangentially geostrophic assumptions translate into constraints on the secular variation whose mathematics are now well understood. Using these constraints, we test the combined frozen-flux and tangentially geostrophic assumptions against recent, high-precision magnetic data provided by the Ørsted and CHAMP satellites. The methodology involves building constrained field models using least-squares methods. Two types of models...
Functional state modelling approach validation for yeast and bacteria cultivations
Roeva, Olympia; Pencheva, Tania
2014-01-01
In this paper, the functional state modelling approach is validated for modelling of the cultivation of two different microorganisms: yeast (Saccharomyces cerevisiae) and bacteria (Escherichia coli). Based on the available experimental data for these fed-batch cultivation processes, three different functional states are distinguished, namely primary product synthesis state, mixed oxidative state and secondary product synthesis state. Parameter identification procedures for different local models are performed using genetic algorithms. The simulation results show a high degree of adequacy of the models describing these functional states for both S. cerevisiae and E. coli cultivations. Thus, the local models are validated for the cultivation of both microorganisms. This constitutes a strong structural verification of the functional state modelling theory not only for a set of yeast cultivations, but also for bacteria cultivation. As such, the obtained results demonstrate the efficiency and efficacy of the functional state modelling approach. PMID:26740778
An optimization approach to kinetic model reduction for combustion chemistry
Lebiedz, Dirk
2013-01-01
Model reduction methods are relevant when the computation time of a full convection-diffusion-reaction simulation based on detailed chemical reaction mechanisms is too large. In this article, we review a model reduction approach based on optimization of trajectories and show its applicability to realistic combustion models. As most model reduction methods, it identifies points on a slow invariant manifold based on time scale separation in the dynamics of the reaction system. The numerical approximation of points on the manifold is achieved by solving a semi-infinite optimization problem, where the dynamics enter the problem as constraints. The proof of existence of a solution for an arbitrarily chosen dimension of the reduced model (slow manifold) is extended to the case of realistic combustion models including thermochemistry by considering the properties of proper maps. The model reduction approach is finally applied to three models based on realistic reaction mechanisms: 1. ozone decomposition as a small t...
A Bayesian Approach for Summarizing and Modeling Time-Series Exposure Data with Left Censoring.
Houseman, E Andres; Virji, M Abbas
2017-08-01
Direct reading instruments are valuable tools for measuring exposure as they provide real-time measurements for rapid decision making. However, their use is limited to general survey applications in part due to issues related to their performance. Moreover, statistical analysis of real-time data is complicated by autocorrelation among successive measurements, non-stationary time series, and the presence of left-censoring due to limit-of-detection (LOD). A Bayesian framework is proposed that accounts for non-stationary autocorrelation and LOD issues in exposure time-series data in order to model workplace factors that affect exposure and estimate summary statistics for tasks or other covariates of interest. A spline-based approach is used to model non-stationary autocorrelation with relatively few assumptions about the autocorrelation structure. Left-censoring is addressed by integrating over the left tail of the distribution. The model is fit using Markov chain Monte Carlo within a Bayesian paradigm. The method can flexibly account for hierarchical relationships, random effects and fixed effects of covariates. The method is implemented using the rjags package in R, and is illustrated by applying it to real-time exposure data. Estimates for task means and covariates from the Bayesian model are compared to those from conventional frequentist models including linear regression, mixed-effects, and time-series models with different autocorrelation structures. Simulation studies are also conducted to evaluate method performance. Simulation studies with the percent of measurements below the LOD ranging from 0 to 50% showed the lowest root mean squared errors for task means and the least biased standard deviations from the Bayesian model compared to the frequentist models across all levels of LOD. In the application, task means from the Bayesian model were similar to means from the frequentist models, while the standard deviations were different. Parameter estimates for covariates
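The left-censoring treatment described above, integrating over the left tail below the LOD, can be illustrated with a deliberately stripped-down example. The sketch below is not the authors' spline/rjags model: it is a toy Metropolis sampler for the mean and standard deviation of a log-normal exposure, in which each below-LOD observation contributes a normal CDF term to the likelihood.

```python
# Minimal sketch of the left-censoring idea only: observations below the LOD
# contribute the normal CDF (the integral over the left tail) to the
# likelihood. Toy random-walk Metropolis for the mean and SD of log-exposure;
# it ignores the autocorrelation/spline structure of the full model.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
true_mu, true_sd, lod = 0.0, 1.0, 0.5             # log-scale parameters, LOD
x = rng.normal(true_mu, true_sd, 200)
observed = np.where(x >= np.log(lod), x, np.nan)  # NaN marks "< LOD"
censored = np.isnan(observed)

def log_post(mu, sd):
    if sd <= 0:
        return -np.inf
    ll = stats.norm.logpdf(observed[~censored], mu, sd).sum()
    ll += censored.sum() * stats.norm.logcdf(np.log(lod), mu, sd)  # left tail
    return ll + stats.norm.logpdf(mu, 0, 10) + stats.halfnorm.logpdf(sd, scale=10)

samples, cur = [], np.array([0.0, 1.0])
cur_lp = log_post(*cur)
for _ in range(20000):
    prop = cur + rng.normal(0, 0.05, 2)
    prop_lp = log_post(*prop)
    if np.log(rng.uniform()) < prop_lp - cur_lp:
        cur, cur_lp = prop, prop_lp
    samples.append(cur.copy())

post = np.array(samples[5000:])
print("posterior mean of (mu, sd):", post.mean(axis=0))
```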
Molecular Modeling Approach to Cardiovascular Disease Targetting
Directory of Open Access Journals (Sweden)
Chandra Sekhar Akula
2010-05-01
Full Text Available Cardiovascular disease, including stroke, is the leading cause of illness and death in India. A number of studies have shown that inflammation of blood vessels is one of the major factors that increase the incidence of heart diseases, including arteriosclerosis (clogging of the arteries), stroke and myocardial infarction (heart attack). Studies have associated obesity and other components of the metabolic syndrome, which are cardiovascular risk factors, with low-grade inflammation. Furthermore, some findings suggest that drugs commonly prescribed to lower cholesterol also reduce this inflammation, suggesting an additional beneficial effect of the statins. The recent development of angiotensin II (AngII) receptor antagonists has made it possible to significantly improve the tolerability profile of this group of drugs while maintaining a high clinical efficacy. ACE2 is expressed predominantly in the endothelium and in renal tubular epithelium, and it thus may be an important new cardiovascular target. In the present study we modeled the structure of ACE and designed an inhibitor using ArgusLab, and the drug molecule was validated on the basis of QSAR properties, computed with CAChe, for this protein within a CADD approach.
Virtuous organization: A structural equation modeling approach
Directory of Open Access Journals (Sweden)
Majid Zamahani
2013-02-01
Full Text Available For years, the idea of virtue was unfavorable among researchers and virtues were traditionally considered as culture-specific, relativistic and they were supposed to be associated with social conservatism, religious or moral dogmatism, and scientific irrelevance. Virtue and virtuousness have been recently considered seriously among organizational researchers. The proposed study of this paper examines the relationships between leadership, organizational culture, human resource, structure and processes, care for community and virtuous organization. Structural equation modeling is employed to investigate the effects of each variable on other components. The data used in this study consists of questionnaire responses from employees in Payam e Noor University in Yazd province. A total of 250 questionnaires were sent out and a total of 211 valid responses were received. Our results have revealed that all the five variables have positive and significant impacts on virtuous organization. Among the five variables, organizational culture has the most direct impact (0.80) and human resource has the most total impact (0.844) on virtuous organization.
Kriegler, E.; Edmonds, J.; Hallegatte, S.; Ebi, K.L.; Kram, T.; Riahi, K.; Winkler, J.; van Vuuren, Detlef|info:eu-repo/dai/nl/11522016X
2014-01-01
The new scenario framework facilitates the coupling of multiple socioeconomic reference pathways with climate model products using the representative concentration pathways. This will allow for improved assessment of climate impacts, adaptation and mitigation. Assumptions about climate policy play a
Lyn, Jennifer A; Ramsey, Michael H; Damant, Andrew P; Wood, Roger
2007-12-01
Measurement uncertainty is a vital issue within analytical science. There are strong arguments that primary sampling should be considered the first and perhaps the most influential step in the measurement process. Increasingly, analytical laboratories are required to report measurement results to clients together with estimates of the uncertainty. Furthermore, these estimates can be used when pursuing regulation enforcement to decide whether a measured analyte concentration is above a threshold value. With its recognised importance in analytical measurement, the question arises of 'what is the most appropriate method to estimate the measurement uncertainty?'. Two broad methods for uncertainty estimation are identified, the modelling method and the empirical method. In modelling, the estimation of uncertainty involves the identification, quantification and summation (as variances) of each potential source of uncertainty. This approach has been applied to purely analytical systems, but becomes increasingly problematic in identifying all of such sources when it is applied to primary sampling. Applications of this methodology to sampling often utilise long-established theoretical models of sampling and adopt the assumption that a 'correct' sampling protocol will ensure a representative sample. The empirical approach to uncertainty estimation involves replicated measurements from either inter-organisational trials and/or internal method validation and quality control. A more simple method involves duplicating sampling and analysis, by one organisation, for a small proportion of the total number of samples. This has proven to be a suitable alternative to these often expensive and time-consuming trials, in routine surveillance and one-off surveys, especially where heterogeneity is the main source of uncertainty. A case study of aflatoxins in pistachio nuts is used to broadly demonstrate the strengths and weakness of the two methods of uncertainty estimation. The estimate
DDH-like Assumptions Based on Extension Rings
DEFF Research Database (Denmark)
Cramer, Ronald; Damgård, Ivan Bjerre; Kiltz, Eike;
2011-01-01
generalized to use instead d-DDH, and we show in the generic group model that d-DDH is harder than DDH. This means that virtually any application of DDH can now be realized with the same (amortized) efficiency, but under a potentially weaker assumption. On the negative side, we also show that d-DDH, just like DDH, is easy in bilinear groups. This motivates our suggestion of a different type of assumption, the d-vector DDH problems (VDDH), which are based on f(X)= X^d, but with a twist to avoid the problems with reducible polynomials. We show in the generic group model that VDDH is hard in bilinear groups and that in fact the problems become harder with increasing d and hence form an infinite hierarchy. We show that hardness of VDDH implies CCA-secure encryption, efficient Naor-Reingold style pseudorandom functions, and auxiliary input secure encryption, a strong form of leakage resilience. This can be seen...
Ernst, Anja F.
2017-01-01
Misconceptions about the assumptions behind the standard linear regression model are widespread and dangerous. They lead to using linear regression when it is inappropriate, and to employing alternative procedures with less statistical power when it is unnecessary. Our systematic literature review investigated the employment and reporting of assumption checks in twelve clinical psychology journals. Findings indicate that normality of the variables themselves, rather than of the errors, was wrongly held to be a necessary assumption in 4% of papers that use regression. Furthermore, 92% of all papers using linear regression were unclear about their assumption checks, violating APA recommendations. This paper appeals for heightened awareness of, and increased transparency in, the reporting of statistical assumption checking. PMID:28533971
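The distinction the review draws, that the normality assumption concerns the errors rather than the raw variables, is easy to demonstrate; the snippet below uses simulated data and assumes statsmodels and scipy are available.

```python
# Sketch of the point made above: check normality on the residuals of the
# fitted model, not on the raw outcome variable. Data are simulated; a
# skewed predictor makes the raw outcome non-normal even though the errors
# are perfectly normal.
import numpy as np
import statsmodels.api as sm
from scipy import stats

rng = np.random.default_rng(2)
x = rng.exponential(scale=2.0, size=500)        # skewed predictor
y = 1.0 + 0.8 * x + rng.normal(0, 1, 500)       # normal errors

fit = sm.OLS(y, sm.add_constant(x)).fit()

# Typically small p: y inherits the predictor's skew.
print("Shapiro-Wilk on raw outcome y:  p =", stats.shapiro(y).pvalue)
# Typically large p: the residuals are compatible with normality.
print("Shapiro-Wilk on residuals:      p =", stats.shapiro(fit.resid).pvalue)
```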
Data Analysis A Model Comparison Approach, Second Edition
Judd, Charles M; Ryan, Carey S
2008-01-01
This completely rewritten classic text features many new examples, insights and topics including mediational, categorical, and multilevel models. Substantially reorganized, this edition provides a briefer, more streamlined examination of data analysis. Noted for its model-comparison approach and unified framework based on the general linear model, the book provides readers with a greater understanding of a variety of statistical procedures. This consistent framework, including consistent vocabulary and notation, is used throughout to develop fewer but more powerful model building techniques. T
Zimmer, Christoph; Sahle, Sven
2016-04-01
Parameter estimation for models with intrinsic stochasticity poses specific challenges that do not exist for deterministic models. Therefore, specialized numerical methods for parameter estimation in stochastic models have been developed. Here, we study whether dedicated algorithms for stochastic models are indeed superior to the naive approach of applying the readily available least squares algorithm designed for deterministic models. We compare the performance of the recently developed multiple shooting for stochastic systems (MSS) method designed for parameter estimation in stochastic models, a Bayesian approach based on stochastic differential equations, and chemical master equation based techniques with the least squares approach for parameter estimation in models of ordinary differential equations (ODE). As test data, 1000 realizations of the stochastic models are simulated. For each realization an estimation is performed with each method, resulting in 1000 estimates for each approach. These are compared with respect to their deviation from the true parameters and, for the genetic toggle switch, also their ability to reproduce the symmetry of the switching behavior. Results are shown for different sets of parameter values of a genetic toggle switch leading to symmetric and asymmetric switching behavior, as well as an immigration-death and a susceptible-infected-recovered model. This comparison shows that it is important to choose a parameter estimation technique that can treat intrinsic stochasticity and that the specific choice of this algorithm shows only minor performance differences.
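As a sketch of the "naive" deterministic baseline in this comparison, the least squares fit of an ODE model (here the immigration-death system mentioned above, with invented rates and simulated data) can be set up as follows.

```python
# Sketch of the deterministic least-squares baseline discussed above: fit
# the ODE version of an immigration-death model, dx/dt = k_in - k_out * x,
# to (here simulated) noisy trajectory data. Parameter names and data are
# illustrative, not the study's settings.
import numpy as np
from scipy.integrate import solve_ivp
from scipy.optimize import least_squares

rng = np.random.default_rng(3)
t_obs = np.linspace(0, 10, 50)
k_true = (5.0, 0.5)                      # (k_in, k_out)
x0 = 0.0

def trajectory(params):
    k_in, k_out = params
    sol = solve_ivp(lambda t, x: k_in - k_out * x, (0, t_obs[-1]), [x0],
                    t_eval=t_obs, rtol=1e-8)
    return sol.y[0]

data = trajectory(k_true) + rng.normal(0, 0.3, t_obs.size)

fit = least_squares(lambda p: trajectory(p) - data, x0=[1.0, 1.0],
                    bounds=([0, 0], [np.inf, np.inf]))
print("estimated (k_in, k_out):", fit.x)
```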
Modelling and Generating Ajax Applications: A Model-Driven Approach
Gharavi, V.; Mesbah, A.; Van Deursen, A.
2008-01-01
Preprint of paper published in: IWWOST 2008 - 7th International Workshop on Web-Oriented Software Technologies, 14-15 July 2008 AJAX is a promising and rapidly evolving approach for building highly interactive web applications. In AJAX, user interface components and the event-based interaction betw
Semi-Supervised Transductive Hot Spot Predictor Working on Multiple Assumptions
Wang, Jim Jing-Yan
2014-05-23
Protein-protein interactions are critically dependent on just a few residues (“hot spots”) at the interfaces. Hot spots make a dominant contribution to the binding free energy and if mutated they can disrupt the interaction. As mutagenesis studies require significant experimental efforts, there exists a need for accurate and reliable computational hot spot prediction methods. Compared to the supervised hot spot prediction algorithms, the semi-supervised prediction methods can take into consideration both the labeled and unlabeled residues in the dataset during the prediction procedure. The transductive support vector machine has been utilized for this task and demonstrated a better prediction performance. To the best of our knowledge, however, none of the transductive semi-supervised algorithms takes all the three semi-supervised assumptions, i.e., smoothness, cluster and manifold assumptions, together into account during learning. In this paper, we propose a novel semi-supervised method for hot spot residue prediction, by considering all the three semi-supervised assumptions using nonlinear models. Our algorithm, IterPropMCS, works in an iterative manner. In each iteration, the algorithm first propagates the labels of the labeled residues to the unlabeled ones, along the shortest path between them on a graph, assuming that they lie on a nonlinear manifold. Then it selects the most confident residues as the labeled ones for the next iteration, according to the cluster and smoothness criteria, which is implemented by a nonlinear density estimator. Experiments on a benchmark dataset, using protein structure-based features, demonstrate that our approach is effective in predicting hot spots and compares favorably to other available methods. The results also show that our method outperforms the state-of-the-art transductive learning methods.
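The sketch below is not the IterPropMCS algorithm itself; it only illustrates the underlying transductive idea, propagating the few known hot-spot labels to unlabeled residues over a nearest-neighbour graph, using scikit-learn and random placeholder features.

```python
# Not IterPropMCS, but a minimal illustration of the transductive idea it
# builds on: propagate labels from the few labeled residues to the unlabeled
# ones over a similarity graph. Features are random placeholders for
# structure-based descriptors; -1 marks unlabeled residues.
import numpy as np
from sklearn.semi_supervised import LabelSpreading

rng = np.random.default_rng(4)
X = rng.normal(size=(200, 10))                     # residue feature vectors
true = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(int)   # 1 = hot spot (toy rule)

y = np.full(200, -1)                               # start fully unlabeled
labeled_idx = rng.choice(200, size=20, replace=False)
y[labeled_idx] = true[labeled_idx]                 # reveal a few labels

model = LabelSpreading(kernel="knn", n_neighbors=7)
model.fit(X, y)

unlabeled = y == -1
acc = (model.transduction_[unlabeled] == true[unlabeled]).mean()
print("transductive accuracy on unlabeled residues:", round(acc, 3))
```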
A novel approach to modeling and diagnosing the cardiovascular system
Energy Technology Data Exchange (ETDEWEB)
Keller, P.E.; Kangas, L.J.; Hashem, S.; Kouzes, R.T. [Pacific Northwest Lab., Richland, WA (United States); Allen, P.A. [Life Link, Richland, WA (United States)
1995-07-01
A novel approach to modeling and diagnosing the cardiovascular system is introduced. A model exhibits a subset of the dynamics of the cardiovascular behavior of an individual by using a recurrent artificial neural network. Potentially, a model will be incorporated into a cardiovascular diagnostic system. This approach is unique in that each cardiovascular model is developed from physiological measurements of an individual. Any differences between the modeled variables and the variables of an individual at a given time are used for diagnosis. This approach also exploits sensor fusion to optimize the utilization of biomedical sensors. The advantage of sensor fusion has been demonstrated in applications including control and diagnostics of mechanical and chemical processes.
Moreno-Betancur, M; Chavance, M
2016-08-01
Statistical analyses of longitudinal data with drop-outs based on direct likelihood, and using all the available data, provide unbiased and fully efficient estimates under some assumptions about the drop-out mechanism. Unfortunately, these assumptions can never be tested from the data. Thus, sensitivity analyses should be routinely performed to assess the robustness of inferences to departures from these assumptions. However, each specific scientific context requires different considerations when setting up such an analysis, no standard method exists and this is still an active area of research. We propose a flexible procedure to perform sensitivity analyses when dealing with continuous outcomes, which are described by a linear mixed model in an initial likelihood analysis. The methodology relies on the pattern-mixture model factorisation of the full data likelihood and was validated in a simulation study. The approach was prompted by a randomised clinical trial for sleep-maintenance insomnia treatment. This case study illustrated the practical value of our approach and underlined the need for sensitivity analyses when analysing data with drop-outs: some of the conclusions from the initial analysis were shown to be reliable, while others were found to be fragile and strongly dependent on modelling assumptions. R code for implementation is provided.
The zero-sum assumption in neutral biodiversity theory
Etienne, Rampal S.; Alonso, David; McKane, Alan J.
2007-01-01
The neutral theory of biodiversity as put forward by Hubbell in his 2001 monograph has received much criticism for its unrealistic simplifying assumptions. These are the assumptions of functional equivalence among different species (neutrality), the assumption of point mutation speciation, and the a
Philosophy of Technology Assumptions in Educational Technology Leadership
Webster, Mark David
2017-01-01
A qualitative study using grounded theory methods was conducted to (a) examine what philosophy of technology assumptions are present in the thinking of K-12 technology leaders, (b) investigate how the assumptions may influence technology decision making, and (c) explore whether technological determinist assumptions are present. Subjects involved…
Implicit Assumptions in High Potentials Recruitment
Posthumus, Jan; Bozer, Gil; Santora, Joseph C.
2016-01-01
Purpose: Professionals of human resources (HR) use different criteria in practice than they verbalize. Thus, the aim of this research was to identify the implicit criteria used for the selection of high-potential employees in recruitment and development settings in the pharmaceutical industry. Design/methodology/approach: A semi-structured…
What lies beneath: underlying assumptions in bioimage analysis.
Pridmore, Tony P; French, Andrew P; Pound, Michael P
2012-12-01
The need for plant image analysis tools is established and has led to a steadily expanding literature and set of software tools. This is encouraging, but raises a question: how does a plant scientist with no detailed knowledge or experience of image analysis methods choose the right tool(s) for the task at hand, or satisfy themselves that a suggested approach is appropriate? We believe that too great an emphasis is currently being placed on low-level mechanisms and software environments. In this opinion article we propose that a renewed focus on the core theories and algorithms used, and in particular the assumptions upon which they rely, will better equip plant scientists to evaluate the available resources. Copyright © 2012 Elsevier Ltd. All rights reserved.
New media in strategy – mapping assumptions in the field
DEFF Research Database (Denmark)
Gulbrandsen, Ib Tunby; Plesner, Ursula; Raviola, Elena
2017-01-01
There is plenty of empirical evidence for claiming that new media make a difference for how strategy is conceived and executed. Furthermore, there is a rapidly growing body of literature that engages with this theme, and offers recommendations regarding the appropriate strategic actions in relation to new media. By contrast, there is relatively little attention to the assumptions behind strategic thinking in relation to new media. This article reviews the most influential strategy journals, asking how new media are conceptualized. It is shown that strategy scholars have a tendency to place new media in relation to the outside or the inside of the organization. After discussing the literature according to these dimensions (deterministic/voluntaristic) and (internal/external), the article argues for a sociomaterial approach to strategy and strategy making and for using the concept of affordances...
Mathematical models for therapeutic approaches to control HIV disease transmission
Roy, Priti Kumar
2015-01-01
The book discusses different therapeutic approaches, based on different mathematical models, to control HIV/AIDS disease transmission. It uses clinical data, collected from different cited sources, to formulate deterministic as well as stochastic mathematical models of HIV/AIDS. It provides complementary approaches, from deterministic and stochastic points of view, to optimal control strategies with perfect drug adherence, and also examines the same issue from different angles, moving from various mathematical models to computer simulations. The book presents essential methods and techniques for students who are interested in designing epidemiological models of HIV/AIDS. It also guides research scientists working in the periphery of mathematical modeling, and helps them to explore a hypothetical method by examining its consequences in the form of a mathematical model and making some scientific predictions. The model equations, mathematical analysis and several numerical simulations that are...
Asteroid modeling for testing spacecraft approach and landing.
Martin, Iain; Parkes, Steve; Dunstan, Martin; Rowell, Nick
2014-01-01
Spacecraft exploration of asteroids presents autonomous-navigation challenges that can be aided by virtual models to test and develop guidance and hazard-avoidance systems. Researchers have extended and applied graphics techniques to create high-resolution asteroid models to simulate cameras and other spacecraft sensors approaching and descending toward asteroids. A scalable model structure with evenly spaced vertices simplifies terrain modeling, avoids distortion at the poles, and enables triangle-strip definition for efficient rendering. To create the base asteroid models, this approach uses two-phase Poisson faulting and Perlin noise. It creates realistic asteroid surfaces by adding both crater models adapted from lunar terrain simulation and multiresolution boulders. The researchers evaluated the virtual asteroids by comparing them with real asteroid images, examining the slope distributions, and applying a surface-relative feature-tracking algorithm to the models.
A model-driven approach to information security compliance
Correia, Anacleto; Gonçalves, António; Teodoro, M. Filomena
2017-06-01
The availability, integrity and confidentiality of information are fundamental to the long-term survival of any organization. Information security is a complex issue that must be holistically approached, combining assets that support corporate systems, in an extended network of business partners, vendors, customers and other stakeholders. This paper addresses the conception and implementation of information security systems, conforming to the ISO/IEC 27000 set of standards, using the model-driven approach. The process begins with the conception of a domain level model (computation independent model) based on information security vocabulary present in the ISO/IEC 27001 standard. Based on this model, after embedding in the model mandatory rules for attaining ISO/IEC 27001 conformance, a platform independent model is derived. Finally, a platform specific model serves as the basis for testing the compliance of information security systems with the ISO/IEC 27000 set of standards.
Heuristic approaches to models and modeling in systems biology
MacLeod, Miles
2016-01-01
Prediction and control sufficient for reliable medical and other interventions are prominent aims of modeling in systems biology. The short-term attainment of these goals has played a strong role in projecting the importance and value of the field. In this paper I identify the standard models must m
A Model Management Approach for Co-Simulation Model Evaluation
Zhang, X.C.; Broenink, Johannes F.; Filipe, Joaquim; Kacprzyk, Janusz; Pina, Nuno
2011-01-01
Simulating formal models is a common means for validating the correctness of the system design and reduce the time-to-market. In most of the embedded control system design, multiple engineering disciplines and various domain-specific models are often involved, such as mechanical, control, software
A New Detection Approach Based on the Maximum Entropy Model
Institute of Scientific and Technical Information of China (English)
DONG Xiaomei; XIANG Guang; YU Ge; LI Xiaohua
2006-01-01
The maximum entropy model was introduced and a new intrusion detection approach based on the maximum entropy model was proposed. The vector space model was adopted for data representation. The minimal entropy partitioning method was utilized for attribute discretization. Experiments on the KDD CUP 1999 standard data set were designed and the experimental results were shown. The receiver operating characteristic (ROC) curve analysis approach was utilized to analyze the experimental results. The analysis results show that the proposed approach is comparable to those based on support vector machine (SVM) and outperforms those based on C4.5 and Naive Bayes classifiers. According to the overall evaluation result, the proposed approach is a little better than those based on SVM.
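By way of illustration (not the authors' implementation), logistic regression is the standard maximum-entropy classifier for this kind of task; the snippet below trains one on synthetic stand-ins for the discretized KDD CUP attributes and reports the ROC AUC used in the evaluation above.

```python
# Sketch only: logistic regression as a maximum-entropy classifier with an
# ROC-based evaluation, as in the abstract. The features are synthetic
# stand-ins for the discretized KDD CUP 1999 attributes, not the data set.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(5)
X = rng.normal(size=(5000, 20))
y = (X[:, :3].sum(axis=1) + rng.normal(0, 1, 5000) > 0).astype(int)  # 1 = intrusion (toy)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
clf = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)

scores = clf.predict_proba(X_te)[:, 1]
print("ROC AUC:", round(roc_auc_score(y_te, scores), 3))
```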
LEXICAL APPROACH IN TEACHING TURKISH: A COLLOCATIONAL STUDY MODEL
Directory of Open Access Journals (Sweden)
Eser ÖRDEM
2013-06-01
Full Text Available Abstract This study intends to propose the Lexical Approach (Lewis, 1998, 2002; Harwood, 2002) and a model for teaching Turkish as a foreign language so that this model can be used in classroom settings. The model was created by the researcher as a result of studies carried out in applied linguistics (Hill, 2000) and memory (Murphy, 2004). Since one of the main problems of foreign language learners is retrieving what they have learnt, Lewis (1998) and Wray (2008) assume that the lexical approach is an alternative explanation to solve this problem. Unlike the grammar translation method, this approach supports the idea that language is not composed of general grammar but of strings of words and word combinations. In addition, the lexical approach posits that each word has its own grammatical properties, and therefore each dictionary is a potential grammar book. Foreign language learners can learn to use collocations, a basic principle of the Lexical Approach, and can thus increase their level of retention. The concept of the retrieval clue (Murphy, 2004) is considered the main element in this collocational study model because the main purpose of the model is to boost fluency and help learners gain native-like accuracy while producing the target language. Keywords: Foreign language teaching, lexical approach, collocations, retrieval clue
A Model-Driven Approach for Telecommunications Network Services Definition
Chiprianov, Vanea; Kermarrec, Yvon; Alff, Patrick D.
The present-day telecommunications market imposes a short concept-to-market time on service providers. To reduce it, we propose a computer-aided, model-driven, service-specific tool, with support for collaborative work and for checking properties on models. We started by defining a prototype of the Meta-model (MM) of the service domain. Using this prototype, we defined a simple graphical modeling language specific to service designers. We are currently enlarging the MM of the domain using model transformations from Network Abstraction Layers (NALs). In the future, we will investigate approaches to ensure the support for collaborative work and for checking properties on models.
Guillaume, Joseph H. A.; El Sawah, Sondoss
2014-06-01
Sustainable groundwater resource management can only be achieved if planning processes address the basic dynamics of the groundwater system. Conceptual and distributed groundwater models do not necessarily translate into an understanding of how a plan might operate in reality. Prompted by Australian experiences, `iterative closed-question modelling' has been used to develop a process of iterative dialogue about management options, objectives and knowledge. Simple hypothetical models of basic system dynamics that satisfy agreed assumptions are used to stress-test the ability of a proposed management plan to achieve desired future conditions. Participants learn from models in which a plan succeeds and fails, updating their assumptions, expectations or plan. Their new understanding is tested against further hypothetical models. The models act as intellectual devices that confront users with new scenarios to discuss. This theoretical approach is illustrated using simple one and two-cell groundwater models that convey basic notions of capture and spatial impacts of pumping. Simple extensions can address uncertain climate, managed-aquifer recharge and alternate water sources. Having learnt to address the dynamics captured by these models, participants may be better placed to address local conditions and develop more effective arrangements to achieve management outcomes.
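A hypothetical one-cell model of the kind mentioned above can be written in a few lines; all parameter values are invented, and the point is only to convey the notion of capture, i.e. that sustained pumping is eventually balanced by reduced natural discharge as the head falls.

```python
# Hypothetical one-cell groundwater model of the kind described above,
# illustrating "capture": a pumping stress lowers the head until reduced
# natural discharge balances the pumping. All values are invented.
import numpy as np

area = 1.0e8        # cell area, m^2
sy = 0.1            # specific yield
recharge = 0.3e6    # m^3/day
conductance = 1.0e4 # natural discharge = conductance * (h - h_outlet), m^2/day
h_outlet = 10.0     # m
pumping = 0.1e6     # m^3/day

h = h_outlet + recharge / conductance   # start at the pre-pumping steady state
dt = 10.0                               # days
for step in range(2000):
    discharge = conductance * max(h - h_outlet, 0.0)
    dh = (recharge - discharge - pumping) * dt / (sy * area)
    h += dh

print("new steady-state head (m):", round(h, 2))
print("natural discharge reduced by (m^3/day):",
      round(recharge - conductance * (h - h_outlet), 0))   # equals the pumping rate
```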
Child human model development: a hybrid validation approach
Forbes, P.A.; Rooij, L. van; Rodarius, C.; Crandall, J.
2008-01-01
The current study presents a development and validation approach of a child human body model that will help understand child impact injuries and improve the biofidelity of child anthropometric test devices. Due to the lack of fundamental child biomechanical data needed to fully develop such models a
Modeling Alaska boreal forests with a controlled trend surface approach
Mo Zhou; Jingjing Liang
2012-01-01
An approach of Controlled Trend Surface was proposed to simultaneously take into consideration large-scale spatial trends and nonspatial effects. A geospatial model of the Alaska boreal forest was developed from 446 permanent sample plots, which addressed large-scale spatial trends in recruitment, diameter growth, and mortality. The model was tested on two sets of...
Teaching Service Modelling to a Mixed Class: An Integrated Approach
Deng, Jeremiah D.; Purvis, Martin K.
2015-01-01
Service modelling has become an increasingly important area in today's telecommunications and information systems practice. We have adapted a Network Design course in order to teach service modelling to a mixed class of both the telecommunication engineering and information systems backgrounds. An integrated approach engaging mathematics teaching…
Gray-box modelling approach for description of storage tunnel
DEFF Research Database (Denmark)
Harremoës, Poul; Carstensen, Jacob
1999-01-01
The dynamics of a storage tunnel is examined using a model based on on-line measured data and a combination of simple deterministic and black-box stochastic elements. This approach, called gray-box modeling, is a new promising methodology for giving an on-line state description of sewer systems...
Refining the Committee Approach and Uncertainty Prediction in Hydrological Modelling
Kayastha, N.
2014-01-01
Due to the complexity of hydrological systems, a single model may be unable to capture the full range of a catchment response and accurately predict the streamflows. The multi-modelling approach opens up possibilities for handling such difficulties and allows improving the predictive capability of mode
Modelling diversity in building occupant behaviour: a novel statistical approach
DEFF Research Database (Denmark)
Haldi, Frédéric; Calì, Davide; Andersen, Rune Korsholm
2016-01-01
We propose an advanced modelling framework to predict the scope and effects of behavioural diversity regarding building occupant actions on window openings, shading devices and lighting. We develop a statistical approach based on generalised linear mixed models to account for the longitudinal nat...
Asteroid fragmentation approaches for modeling atmospheric energy deposition
Register, Paul J.; Mathias, Donovan L.; Wheeler, Lorien F.
2017-03-01
During asteroid entry, energy is deposited in the atmosphere through thermal ablation and momentum-loss due to aerodynamic drag. Analytic models of asteroid entry and breakup physics are used to compute the energy deposition, which can then be compared against measured light curves and used to estimate ground damage due to airburst events. This work assesses and compares energy deposition results from four existing approaches to asteroid breakup modeling, and presents a new model that combines key elements of those approaches. The existing approaches considered include a liquid drop or "pancake" model where the object is treated as a single deforming body, and a set of discrete fragment models where the object breaks progressively into individual fragments. The new model incorporates both independent fragments and aggregate debris clouds to represent a broader range of fragmentation behaviors and reproduce more detailed light curve features. All five models are used to estimate the energy deposition rate versus altitude for the Chelyabinsk meteor impact, and results are compared with an observationally derived energy deposition curve. Comparisons show that four of the five approaches are able to match the overall observed energy deposition profile, but the features of the combined model are needed to better replicate both the primary and secondary peaks of the Chelyabinsk curve.
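The single-body ("pancake"-family) breakup models compared above generally build on the standard meteor-physics drag and ablation equations. Under common simplifying assumptions (constant drag coefficient $C_D$ and ablation coefficient $\sigma_{ab}$, flight-path angle $\gamma$), these read

$$ m\frac{dv}{dt} = -\tfrac{1}{2}\,C_D\,\rho_a(h)\,A\,v^{2} + m g \sin\gamma, \qquad \frac{dm}{dt} = -\tfrac{1}{2}\,\sigma_{ab}\,C_D\,\rho_a(h)\,A\,v^{3}, $$

$$ \frac{dE}{dh} = -\frac{d}{dh}\!\left(\tfrac{1}{2} m v^{2}\right), $$

where $\rho_a(h)$ is the atmospheric density and $A$ the cross-sectional area, which the pancake model allows to grow after breakup; the energy deposited per unit altitude, $dE/dh$, is the quantity compared against the observed light curve. This is a generic statement of the governing equations, not the specific formulation of any of the five models compared in the paper.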
A Bayesian Approach for Analyzing Longitudinal Structural Equation Models
Song, Xin-Yuan; Lu, Zhao-Hua; Hser, Yih-Ing; Lee, Sik-Yum
2011-01-01
This article considers a Bayesian approach for analyzing a longitudinal 2-level nonlinear structural equation model with covariates, and mixed continuous and ordered categorical variables. The first-level model is formulated for measures taken at each time point nested within individuals for investigating their characteristics that are dynamically…
An Empirical-Mathematical Modelling Approach to Upper Secondary Physics
Angell, Carl; Kind, Per Morten; Henriksen, Ellen K.; Guttersrud, Oystein
2008-01-01
In this paper we describe a teaching approach focusing on modelling in physics, emphasizing scientific reasoning based on empirical data and using the notion of multiple representations of physical phenomena as a framework. We describe modelling activities from a project (PHYS 21) and relate some experiences from implementation of the modelling…
An Alternative Approach for Nonlinear Latent Variable Models
Mooijaart, Ab; Bentler, Peter M.
2010-01-01
In the last decades there has been an increasing interest in nonlinear latent variable models. Since the seminal paper of Kenny and Judd, several methods have been proposed for dealing with these kinds of models. This article introduces an alternative approach. The methodology involves fitting some third-order moments in addition to the means and…
A new approach for modeling generalization gradients: a case for hierarchical models.
Vanbrabant, Koen; Boddez, Yannick; Verduyn, Philippe; Mestdagh, Merijn; Hermans, Dirk; Raes, Filip
2015-01-01
A case is made for the use of hierarchical models in the analysis of generalization gradients. Hierarchical models overcome several restrictions that are imposed by repeated measures analysis-of-variance (rANOVA), the default statistical method in current generalization research. More specifically, hierarchical models allow the inclusion of continuous independent variables and overcome problematic assumptions such as sphericity. We focus on how generalization research can benefit from this added flexibility. In a simulation study we demonstrate the dominance of hierarchical models over rANOVA. In addition, we show the lack of efficiency of Mauchly's sphericity test in sample sizes typical for generalization research, and confirm how violations of sphericity increase the probability of type I errors. A worked example of a hierarchical model is provided, with a specific emphasis on the interpretation of parameters relevant for generalization research.
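A minimal sketch of the hierarchical (mixed) model the authors advocate, with a continuous stimulus-distance predictor and participant-level random intercepts and slopes, could look as follows in Python with statsmodels; the variable names and data are illustrative, not the worked example from the paper.

```python
# Minimal sketch of a hierarchical model for a generalization gradient: a
# continuous stimulus-distance predictor with a random intercept and slope
# per participant. Variable names and data are illustrative only.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(6)
subjects, distances = 30, np.linspace(0, 1, 7)   # distance from CS+
rows = []
for s in range(subjects):
    slope = -4.0 + rng.normal(0, 1.0)            # participant-specific decay
    for d in distances:
        rows.append({"subject": s, "distance": d,
                     "response": 8 + slope * d + rng.normal(0, 0.8)})
data = pd.DataFrame(rows)

model = smf.mixedlm("response ~ distance", data,
                    groups=data["subject"], re_formula="~distance")
fit = model.fit()
print(fit.summary())
```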
Decision-Theoretic Planning: Structural Assumptions and Computational Leverage
Boutilier, C; Hanks, S; 10.1613/jair.575
2011-01-01
Planning under uncertainty is a central problem in the study of automated sequential decision making, and has been addressed by researchers in many different fields, including AI planning, decision analysis, operations research, control theory and economics. While the assumptions and perspectives adopted in these areas often differ in substantial ways, many planning problems of interest to researchers in these fields can be modeled as Markov decision processes (MDPs) and analyzed using the techniques of decision theory. This paper presents an overview and synthesis of MDP-related methods, showing how they provide a unifying framework for modeling many classes of planning problems studied in AI. It also describes structural properties of MDPs that, when exhibited by particular classes of problems, can be exploited in the construction of optimal or approximately optimal policies or plans. Planning problems commonly possess structure in the reward and value functions used to describe performance criteria, in the...
Feldman, Betsy J.; Masyn, Katherine E.; Conger, Rand D.
2009-01-01
Analyzing problem-behavior trajectories can be difficult. The data are generally categorical and often quite skewed, violating distributional assumptions of standard normal-theory statistical models. In this article, the authors present several currently available modeling options, all of which make appropriate distributional assumptions for the…
A multilevel approach to modeling of porous bioceramics
Mikushina, Valentina A.; Sidorenko, Yury N.
2015-10-01
The paper is devoted to a discussion of multiscale models of heterogeneous materials using multilevel principles. The specificity of the approach considered is the use of a geometrical model of the composite's representative volume, which must be generated taking the material's reinforcement structure into account. Within the framework of such a model, different physical processes that influence the effective mechanical properties of the composite may be considered, in particular the process of damage accumulation. It is shown that this approach can be used to predict the value of the composite's macroscopic ultimate strength. As an example, the particular problem of studying the mechanical properties of a biocomposite, representing a porous ceramic matrix filled with cortical bone tissue, is discussed.
Gray-box modelling approach for description of storage tunnel
DEFF Research Database (Denmark)
Harremoës, Poul; Carstensen, Jacob
1999-01-01
The model in the present paper provides on-line information on overflow volumes, pumping capacities, and remaining storage capacities. A linear overflow relation is found, differing significantly from the traditional deterministic modeling approach. The linearity of the formulas is explained by the inertia of the water in the overflow structures. The capacity of a pump draining the storage tunnel is estimated for two different rain events, revealing that the pump was malfunctioning during the first rain event. The proposed modeling approach can be used in automated online surveillance and control and implemented...
A study of multidimensional modeling approaches for data warehouse
Yusof, Sharmila Mat; Sidi, Fatimah; Ibrahim, Hamidah; Affendey, Lilly Suriani
2016-08-01
A data warehouse system is used to support the process of organizational decision making. Hence, the system must extract and integrate information from heterogeneous data sources in order to uncover relevant knowledge suitable for the decision making process. However, the development of a data warehouse is a difficult and complex process, especially in its conceptual design (multidimensional modeling). Thus, various approaches have been proposed to overcome the difficulty. This study surveys and compares the approaches to multidimensional modeling and highlights the issues, trends and solutions proposed to date. The contribution is a state-of-the-art overview of multidimensional modeling design.
Meta-analysis a structural equation modeling approach
Cheung, Mike W-L
2015-01-01
Presents a novel approach to conducting meta-analysis using structural equation modeling. Structural equation modeling (SEM) and meta-analysis are two powerful statistical methods in the educational, social, behavioral, and medical sciences. They are often treated as two unrelated topics in the literature. This book presents a unified framework on analyzing meta-analytic data within the SEM framework, and illustrates how to conduct meta-analysis using the metaSEM package in the R statistical environment. Meta-Analysis: A Structural Equation Modeling Approach begins by introducing the impo
Modeling and Algorithmic Approaches to Constitutively-Complex, Microstructured Fluids
Energy Technology Data Exchange (ETDEWEB)
Miller, Gregory H. [Univ. of California, Davis, CA (United States); Forest, Gregory [Univ. of California, Davis, CA (United States)
2014-05-01
We present a new multiscale model for complex fluids based on three scales: microscopic, kinetic, and continuum. We choose the microscopic level as Kramers' bead-rod model for polymers, which we describe as a system of stochastic differential equations with an implicit constraint formulation. The associated Fokker-Planck equation is then derived, and adiabatic elimination removes the fast momentum coordinates. Approached in this way, the kinetic level reduces to a dispersive drift equation. The continuum level is modeled with a finite volume Godunov-projection algorithm. We demonstrate computation of viscoelastic stress divergence using this multiscale approach.
Raichoor, A; Nakata, F; Stanford, S A; Holden, B P; Rettura, A; Huertas-Company, M; Postman, M; Rosati, P; Blakeslee, J P; Demarco, R; Eisenhardt, P; Illingworth, G; Jee, M J; Kodama, T; Tanaka, M; White, R L
2011-01-01
We have derived masses and ages for 79 early-type galaxies (ETGs) in different environments at z~1.3 in the Lynx supercluster and in the GOODS/CDF-S field using multiwavelength (0.6-4.5 $\\mu$m; KPNO, Palomar, Keck, HST, Spitzer) datasets. At this redshift the contribution of the TP-AGB phase is important for ETGs, and the mass and age estimates depend on the choice of the stellar population model used in the spectral energy distribution fits. We describe in detail the differences among model predictions for a large range of galaxy ages, showing the dependence of these differences on age. Current models still yield large uncertainties. While recent models from Maraston and Charlot & Bruzual offer better modeling of the TP-AGB phase with respect to less recent Bruzual & Charlot models, their predictions do not often match. The modeling of this TP-AGB phase has a significant impact on the derived parameters for galaxies observed at high-redshift. Some of our results do not depend on the choice of the mod...
Metamodelling Approach and Software Tools for Physical Modelling and Simulation
Directory of Open Access Journals (Sweden)
Vitaliy Mezhuyev
2015-02-01
Full Text Available In computer science, the metamodelling approach is becoming more and more popular for the purpose of software systems development. In this paper, we discuss the applicability of the metamodelling approach to the development of software tools for physical modelling and simulation. To define a metamodel for physical modelling, an analysis of physical models is carried out. The result of this analysis reveals the invariant physical structures, which we propose to use as the basic abstractions of the physical metamodel. It is a system of geometrical objects, allowing one to build the spatial structure of physical models and to set a distribution of physical properties. To such a geometry of distributed physical properties, different mathematical methods can be applied. To validate the proposed metamodelling approach, we consider the developed prototypes of software tools.
Social learning in Models and Cases - an Interdisciplinary Approach
Buhl, Johannes; De Cian, Enrica; Carrara, Samuel; Monetti, Silvia; Berg, Holger
2016-04-01
Our paper follows an interdisciplinary understanding of social learning. We contribute to the literature on social learning in transition research by bridging case-oriented research and modelling-oriented transition research. We start by describing selected theories of social learning in innovation, diffusion and transition research. We present theoretical understandings of social learning in techno-economic and agent-based modelling. Then we elaborate on empirical research on social learning in transition case studies, and identify and synthesize key dimensions of social learning in those studies. In the following, we bridge between more formal, generalising modelling approaches to social learning processes and more descriptive, individualising case-study approaches by translating the case study analysis into a visual guide to the functional forms of social learning typically identified in the cases. We then vary, by way of example, these functional forms of social learning in integrated assessment models. We conclude by drawing the lessons learned from the interdisciplinary approach - methodologically and empirically.
Learning the Task Management Space of an Aircraft Approach Model
Krall, Joseph; Menzies, Tim; Davies, Misty
2014-01-01
Validating models of airspace operations is a particular challenge. These models are often aimed at finding and exploring safety violations, and aim to be accurate representations of real-world behavior. However, the rules governing the behavior are quite complex: nonlinear physics, operational modes, human behavior, and stochastic environmental concerns all determine the responses of the system. In this paper, we present a study on aircraft runway approaches as modeled in Georgia Tech's Work Models that Compute (WMC) simulation. We use a new learner, Genetic-Active Learning for Search-Based Software Engineering (GALE) to discover the Pareto frontiers defined by cognitive structures. These cognitive structures organize the prioritization and assignment of tasks of each pilot during approaches. We discuss the benefits of our approach, and also discuss future work necessary to enable uncertainty quantification.
Jäntschi, Lorentz; Bálint, Donatella; Bolboacă, Sorana D
2016-01-01
Multiple linear regression analysis is widely used to link an outcome with predictors for a better understanding of the behaviour of the outcome of interest. Usually, under the assumption that the errors follow a normal distribution, the coefficients of the model are estimated by minimizing the sum of squared deviations. A new approach based on maximum likelihood estimation is proposed for finding the coefficients of linear models with two predictors without any restrictive assumptions on the distribution of the errors. The algorithm was developed, implemented, and tested as proof-of-concept using fourteen sets of compounds by investigating the link between activity/property (as outcome) and structural feature information incorporated by molecular descriptors (as predictors). The results on real data demonstrated that in all investigated cases the power of the error is significantly different from the conventional value of two when the Gauss-Laplace distribution is used to relax the restrictive assumption of a normal distribution of the error. Therefore, the Gauss-Laplace distribution of the error could not be rejected, while the hypothesis that the power of the error obtained from the Gauss-Laplace distribution is itself normally distributed also failed to be rejected.
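The estimation idea can be sketched as follows, under the assumption that the "Gauss-Laplace" family corresponds to the generalized normal (exponential power) distribution available as scipy.stats.gennorm; the two-predictor data are simulated, not the cited compound sets.

```python
# Sketch of the idea described above: estimate a two-predictor linear model
# by maximum likelihood under a generalized normal ("Gauss-Laplace") error
# distribution, so that the power of the error is estimated rather than
# fixed at 2. Data are simulated, not the cited compound sets.
import numpy as np
from scipy import stats, optimize

rng = np.random.default_rng(7)
n = 300
X = np.column_stack([np.ones(n), rng.normal(size=n), rng.normal(size=n)])
coef_true = np.array([1.0, 2.0, -1.5])
errors = stats.gennorm.rvs(beta=1.3, scale=1.0, size=n, random_state=8)  # power ~ 1.3
y = X @ coef_true + errors

def neg_loglik(theta):
    b0, b1, b2, log_scale, log_power = theta
    resid = y - X @ np.array([b0, b1, b2])
    return -stats.gennorm.logpdf(resid, beta=np.exp(log_power),
                                 scale=np.exp(log_scale)).sum()

start = np.array([0.0, 0.0, 0.0, 0.0, np.log(2.0)])   # start at the normal case
res = optimize.minimize(neg_loglik, start, method="Nelder-Mead",
                        options={"maxiter": 20000, "xatol": 1e-8, "fatol": 1e-8})
b0, b1, b2, log_scale, log_power = res.x
print("coefficients:", np.round([b0, b1, b2], 3))
print("estimated error power:", round(np.exp(log_power), 3))
```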
Building enterprise reuse program--A model-based approach
Institute of Scientific and Technical Information of China (English)
梅宏; 杨芙清
2002-01-01
Reuse is viewed as a realistically effective approach to solving the software crisis. For an organization that wants to build a reuse program, technical and non-technical issues must be considered in parallel. In this paper, a model-based approach to building a systematic reuse program is presented. Component-based reuse is currently the dominant approach to software reuse. In this approach, building the right reusable component model is the first important step. In order to achieve systematic reuse, a set of component models should be built from different perspectives. Each of these models will give a specific view of the components so as to satisfy the different needs of the different persons involved in the enterprise reuse program. There already exist some component models for reuse from technical perspectives, but less attention has been paid to reusable components from a non-technical view, especially from the view of process and management. In our approach, a reusable component model--the FLP model for reusable components--is introduced. This model describes components along three dimensions (Form, Level, and Presentation) and views components and their relationships from the perspective of process and management. It determines the sphere of reusable components, the time points for reusing components in the development process, and the means needed to present components in terms of abstraction level, logic granularity and presentation media. Being the basis on which management and technical decisions are made, our model will be used as the kernel model to initialize and normalize a systematic enterprise reuse program.
Current approaches to model extracellular electrical neural microstimulation
Directory of Open Access Journals (Sweden)
Sébastien eJoucla
2014-02-01
Full Text Available Nowadays, high-density microelectrode arrays provide unprecedented possibilities to precisely activate spatially well-controlled central nervous system (CNS) areas. However, this requires optimizing stimulating devices, which in turn requires a good understanding of the effects of microstimulation on cells and tissues. In this context, modeling approaches provide flexible ways to predict the outcome of electrical stimulation in terms of CNS activation. In this paper, we present state-of-the-art modeling methods with sufficient details to allow the reader to rapidly build numerical models of neuronal extracellular microstimulation. These include (1) the computation of the electrical potential field created by the stimulation in the tissue, and (2) the response of a target neuron to this field. Two main approaches are described: first, we describe the classical hybrid approach that combines the finite element modeling of the potential field with the calculation of the neuron’s response in a cable equation framework (compartmentalized neuron models). Then, we present a whole finite element approach that allows the simultaneous calculation of the extracellular and intracellular potentials, by representing the neuronal membrane with a thin-film approximation. This approach was previously introduced in the context of neural recording, but has never been implemented to determine the effect of extracellular stimulation on the neural response at a sub-compartment level. Here, we show with an example that the latter modeling scheme can reveal important sub-compartment behavior of the neural membrane that cannot be resolved using the hybrid approach. The goal of this paper is also to describe in detail the practical implementation of these methods to allow the reader to easily build new models using standard software packages. These modeling paradigms, depending on the situation, should help build more efficient high-density neural prostheses for CNS rehabilitation.
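As a minimal illustration of the ingredients of the hybrid approach, the sketch below replaces the finite element solution with the analytical potential of a point source in a homogeneous medium and evaluates the activating function (the second spatial difference of the extracellular potential) along a straight fiber, which is the quantity that drives the cable-equation response. The conductivity, current, geometry and discretisation are assumed values for illustration only.

```python
import numpy as np

# Extracellular potential of a monopolar point source in a homogeneous,
# isotropic medium: V_e = I / (4*pi*sigma*r). All values below are illustrative.
sigma = 0.3                      # tissue conductivity (S/m), assumed
I = -10e-6                       # cathodic stimulation current (A), assumed
elec = np.array([0.0, 50e-6])    # electrode position (x, z) in metres

# Straight fiber running along x at depth z = 0, discretised into compartments.
dx = 10e-6
x = np.arange(-500e-6, 500e-6 + dx, dx)
nodes = np.column_stack([x, np.zeros_like(x)])

r = np.linalg.norm(nodes - elec, axis=1)
V_e = I / (4 * np.pi * sigma * r)

# Activating function: second spatial difference of V_e along the fiber;
# positive values mark regions pushed towards depolarisation.
f_act = (V_e[:-2] - 2 * V_e[1:-1] + V_e[2:]) / dx**2
print("max activating function:", f_act.max(), "at x =", x[1 + f_act.argmax()])
```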
Directory of Open Access Journals (Sweden)
Stéphane Colard
2014-12-01
Full Text Available There has been rapid growth in the use of electronic cigarettes (“vaping” in Europe, North America and elsewhere. With such increased prevalence, there is currently a debate on whether the aerosol exhaled following the use of e-cigarettes has implications for the quality of air breathed by bystanders. Conducting chemical analysis of the indoor environment can be costly and resource intensive, limiting the number of studies which can be conducted. However, this can be modelled reasonably accurately based on empirical emissions data and using some basic assumptions. Here, we present a simplified model, based on physical principles, which considers aerosol propagation, dilution and extraction to determine the potential contribution of a single puff from an e-cigarette to indoor air. From this, it was then possible to simulate the cumulative effect of vaping over time. The model was applied to a virtual, but plausible, scenario considering an e-cigarette user and a non-user working in the same office space. The model was also used to reproduce published experimental studies and showed good agreement with the published values of indoor air nicotine concentration. With some additional refinements, such an approach may be a cost-effective and rapid way of assessing the potential exposure of bystanders to exhaled e-cigarette aerosol constituents.
Colard, Stéphane; O'Connell, Grant; Verron, Thomas; Cahours, Xavier; Pritchard, John D
2014-12-24
There has been rapid growth in the use of electronic cigarettes ("vaping") in Europe, North America and elsewhere. With such increased prevalence, there is currently a debate on whether the aerosol exhaled following the use of e-cigarettes has implications for the quality of air breathed by bystanders. Conducting chemical analysis of the indoor environment can be costly and resource intensive, limiting the number of studies which can be conducted. However, this can be modelled reasonably accurately based on empirical emissions data and using some basic assumptions. Here, we present a simplified model, based on physical principles, which considers aerosol propagation, dilution and extraction to determine the potential contribution of a single puff from an e-cigarette to indoor air. From this, it was then possible to simulate the cumulative effect of vaping over time. The model was applied to a virtual, but plausible, scenario considering an e-cigarette user and a non-user working in the same office space. The model was also used to reproduce published experimental studies and showed good agreement with the published values of indoor air nicotine concentration. With some additional refinements, such an approach may be a cost-effective and rapid way of assessing the potential exposure of bystanders to exhaled e-cigarette aerosol constituents.
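A minimal sketch of such a simplified model, assuming a single well-mixed room in which each exhaled puff adds a fixed mass of a constituent and the concentration then decays with the air-exchange (dilution and extraction) rate; the cumulative effect of vaping over a working day is obtained by superposing puffs. The room volume, air-change rate, puff mass and puff schedule are illustrative assumptions, not values from the paper.

```python
import numpy as np

# Single-compartment, well-mixed room: each exhaled puff adds mass m_puff to the
# room air, and the concentration then decays with the air-exchange rate.
V = 50.0          # office volume, m^3 (assumed)
ach = 2.0         # air changes per hour (assumed ventilation/extraction)
m_puff = 0.05     # constituent mass exhaled per puff, mg (illustrative only)

puff_times = np.arange(0.0, 8.0, 0.5)   # one puff every 30 min over 8 h (assumed)
t = np.linspace(0.0, 9.0, 1000)         # hours

# Superposition of exponentially decaying contributions from each puff.
C = np.zeros_like(t)
for tp in puff_times:
    C += np.where(t >= tp, (m_puff / V) * np.exp(-ach * (t - tp)), 0.0)

print(f"peak concentration: {C.max():.4f} mg/m^3, end of day: {C[-1]:.4f} mg/m^3")
```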
Benchmarking novel approaches for modelling species range dynamics.
Zurell, Damaris; Thuiller, Wilfried; Pagel, Jörn; Cabral, Juliano S; Münkemüller, Tamara; Gravel, Dominique; Dullinger, Stefan; Normand, Signe; Schiffers, Katja H; Moore, Kara A; Zimmermann, Niklaus E
2016-08-01
Increasing biodiversity loss due to climate change is one of the most vital challenges of the 21st century. To anticipate and mitigate biodiversity loss, models are needed that reliably project species' range dynamics and extinction risks. Recently, several new approaches to model range dynamics have been developed to supplement correlative species distribution models (SDMs), but applications clearly lag behind model development. Indeed, no comparative analysis has been performed to evaluate their performance. Here, we build on process-based, simulated data for benchmarking five range (dynamic) models of varying complexity including classical SDMs, SDMs coupled with simple dispersal or more complex population dynamic models (SDM hybrids), and a hierarchical Bayesian process-based dynamic range model (DRM). We specifically test the effects of demographic and community processes on model predictive performance. Under current climate, DRMs performed best, although only marginally. Under climate change, predictive performance varied considerably, with no clear winners. Yet, all range dynamic models improved predictions under climate change substantially compared to purely correlative SDMs, and the population dynamic models also predicted reasonable extinction risks for most scenarios. When benchmarking data were simulated with more complex demographic and community processes, simple SDM hybrids including only dispersal often proved most reliable. Finally, we found that structural decisions during model building can have a great impact on model accuracy, but prior system knowledge on important processes can reduce these uncertainties considerably. Our results confirm the clear merit of using dynamic approaches for modelling species' responses to climate change, but also emphasize several needs for further model and data improvement. We propose and discuss perspectives for improving range projections through combination of multiple models and for making these approaches
Application of the Interface Approach in Quantum Ising Models
Sen, Parongama
1997-01-01
We investigate phase transitions in the Ising model and the ANNNI model in transverse field using the interface approach. The exact result of the Ising chain in a transverse field is reproduced. We find that apart from the interfacial energy, there are two other response functions which show simple scaling behaviour. For the ANNNI model in a transverse field, the phase diagram can be fully studied in the region where a ferromagnetic to paramagnetic phase transition occurs. The other region ca...
A Variable Flow Modelling Approach To Military End Strength Planning
2016-12-01
A System Dynamics (SD) model is well suited to strategic analysis as it encompasses all the behaviours of a system and how those behaviours are influenced by ... Wang describes Markov chain theory as a mathematical tool used to investigate the dynamic behaviours of a system in discrete time ... (Thesis by Benjamin K. Grossi, December 2016; thesis advisor: Kenneth Doerr.)
New Approaches in Reusable Booster System Life Cycle Cost Modeling
2012-01-01
Lean NPD practices, lean production and operations practices, and the Supply Chain Operations Reference (SCOR) model are among the best practices considered ... Edgar Zapata, National Aeronautics and Space Administration, Kennedy Space Center ... Kennedy Space Center (KSC) and the Air Force Research Laboratory (AFRL). The work included the creation of a new cost estimating model and an LCC
Energy Technology Data Exchange (ETDEWEB)
Staub, Isabelle; Fredriksson, Anders; Outters, Nils [Golder Associates AB, Uppsala (Sweden)
2002-05-01
For the purpose of studying the possibilities of a Deep Repository for spent fuel, the Swedish Nuclear Fuel and Waste Management Company (SKB) is currently planning for Site Investigations. Data collected from these Site Investigations are interpreted and analysed to achieve the full Site Description, which is built up of models from all the disciplines that are considered of importance for the Site Description. One of these models is the Rock Mechanical Descriptive Model, which would be developed for any site in hard crystalline rock, and is a combination and evaluation of the characterisation of the rock mass by means of empirical relationships and a theoretical approach based on numerical modelling. The present report describes the theoretical approach. The characterisation of the mechanical properties of the rock mass, viewed as a unit consisting of intact rock and fractures, is achieved by numerical simulations with the following input parameters: initial stresses, fracture geometry, distribution of rock mechanical properties, such as deformation and strength parameters, for the intact rock and for the fractures. The numerical modelling was performed with the two-dimensional code UDEC, and the rock block models were generated from 2D trace sections extracted from the 3D Discrete Fracture Network (DFN) model. Assumptions and uncertainties related to the set-up of the model are considered. The numerical model was set up to simulate a plane-strain loading test. Different boundary conditions were applied to the model for simulating stress conditions (I) in the undisturbed rock mass, and (II) in the proximity of a tunnel. In order to assess the reliability of the model, sensitivity analyses have been conducted on some rock block models for defining the dependency of mechanical properties on in situ stresses, the influence of boundary conditions, rock material and joint constitutive models used to simulate the behaviour of intact rock and fractures, domain size and anisotropy. To
Mesa-Frias, Marco; Chalabi, Zaid; Foss, Anna M
2013-09-01
Health impact assessment (HIA) is often used to determine ex ante the health impact of an environmental policy or an environmental intervention. Underpinning any HIA is the framing assumption, which defines the causal pathways mapping environmental exposures to health outcomes. The sensitivity of the HIA to the framing assumptions is often ignored. A novel method based on fuzzy cognitive map (FCM) is developed to quantify the framing assumptions in the assessment stage of a HIA, and is then applied to a housing intervention (tightening insulation) as a case-study. Framing assumptions of the case-study were identified through a literature search of Ovid Medline (1948-2011). The FCM approach was used to identify the key variables that have the most influence in a HIA. Changes in air-tightness, ventilation, indoor air quality and mould/humidity have been identified as having the most influence on health. The FCM approach is widely applicable and can be used to inform the formulation of the framing assumptions in any quantitative HIA of environmental interventions. We argue that it is necessary to explore and quantify framing assumptions prior to conducting a detailed quantitative HIA during the assessment stage.
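A minimal sketch of an FCM of the kind described, assuming a sigmoid squashing function and an illustrative, hand-assigned weight matrix over the concepts named in the abstract (air-tightness, ventilation, indoor air quality, mould/humidity, health); the weights and activation levels are assumptions, not values from the study.

```python
import numpy as np

def run_fcm(W, state, n_iter=30, lam=1.0):
    """Iterate a fuzzy cognitive map: A(t+1) = f(A(t) + A(t) @ W),
    with a sigmoid squashing function. W[i, j] is the influence of concept i on j."""
    f = lambda x: 1.0 / (1.0 + np.exp(-lam * x))
    for _ in range(n_iter):
        state = f(state + state @ W)
    return state

# Hypothetical concepts for the insulation case-study:
# 0 air-tightness, 1 ventilation, 2 indoor air quality, 3 mould/humidity, 4 health
W = np.array([
    [0.0, -0.6, -0.4,  0.5,  0.0],   # tighter envelope reduces ventilation, raises humidity
    [0.0,  0.0,  0.7, -0.5,  0.0],   # ventilation improves air quality, lowers humidity
    [0.0,  0.0,  0.0,  0.0,  0.6],   # better air quality improves health
    [0.0,  0.0, -0.3,  0.0, -0.5],   # mould/humidity harms air quality and health
    [0.0,  0.0,  0.0,  0.0,  0.0],
])
initial = np.array([1.0, 0.5, 0.5, 0.5, 0.5])   # scenario: insulation tightened
print(run_fcm(W, initial))
```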
Iterative approach to modeling subsurface stormflow based on nonlinear, hillslope-scale physics
Directory of Open Access Journals (Sweden)
J. H. Spaaks
2009-08-01
Full Text Available Soil water transport in small, humid, upland catchments is often dominated by subsurface stormflow. Recent studies of this process suggest that at the plot scale, generation of transient saturation may be governed by threshold behavior, and that transient saturation is a prerequisite for lateral flow. The interaction between these plot scale processes yields complex behavior at the hillslope scale. We argue that this complexity should be incorporated into our models. We take an iterative approach to developing our model, starting with a very simple representation of hillslope rainfall-runoff. Next, we design new virtual experiments with which we test our model, while adding more structural complexity. In this study, we present results from three such development cycles, corresponding to three different hillslope-scale, lumped models. Model 1 is a linear tank model, which assumes transient saturation to be homogeneously distributed over the hillslope. Model 2 assumes transient saturation to be heterogeneously distributed over the hillslope, and that the spatial distribution of the saturated zone does not vary with time. Model 3 assumes that transient saturation is heterogeneous both in space and in time. We found that the homogeneity assumption underlying Model 1 resulted in hillslope discharge being too steep during the first part of the rising limb, but not steep enough on the second part. Also, peak height was underestimated. The additional complexity in Model 2 improved the simulations in terms of the fit, but not in terms of the dynamics. The threshold-based Model 3 captured most of the hydrograph dynamics (Nash-Sutcliffe efficiency of 0.98). After having assessed our models in a lumped setup, we then compared Model 1 to Model 3 in a spatially explicit setup, and evaluated what patterns of subsurface flow were possible with model elements of each type. We found
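The two sketches below illustrate the style of lumped model described: a single linear reservoir in the spirit of Model 1, and a threshold variant in the spirit of Model 3 in which lateral outflow only begins once storage exceeds a threshold. Parameter values and the synthetic storm are assumptions for illustration.

```python
import numpy as np

def linear_tank(precip, dt=1.0, k=20.0):
    """Model 1 style lumped hillslope: a single linear reservoir whose
    storage drains as Q = S / k (homogeneous transient saturation)."""
    S, Q = 0.0, []
    for p in precip:
        S += p * dt                 # recharge from rainfall
        q = S / k                   # linear outflow
        S -= q * dt
        Q.append(q)
    return np.array(Q)

def threshold_tank(precip, dt=1.0, k=20.0, s_min=5.0):
    """Model 3 style variant: lateral outflow starts only once storage exceeds
    a threshold representing the onset of transient saturation."""
    S, Q = 0.0, []
    for p in precip:
        S += p * dt
        q = max(S - s_min, 0.0) / k
        S -= q * dt
        Q.append(q)
    return np.array(Q)

rain = np.concatenate([np.full(10, 2.0), np.zeros(50)])   # hypothetical storm (mm/h)
print(linear_tank(rain).max(), threshold_tank(rain).max())
```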
An approach for modeling sediment budgets in supply-limited rivers
Wright, Scott A.; Topping, David J.; Rubin, David M.; Melis, Theodore S.
2010-01-01
Reliable predictions of sediment transport and river morphology in response to variations in natural and human-induced drivers are necessary for river engineering and management. Because engineering and management applications may span a wide range of space and time scales, a broad spectrum of modeling approaches has been developed, ranging from suspended-sediment "rating curves" to complex three-dimensional morphodynamic models. Suspended sediment rating curves are an attractive approach for evaluating changes in multi-year sediment budgets resulting from changes in flow regimes because they are simple to implement, computationally efficient, and the empirical parameters can be estimated from quantities that are commonly measured in the field (i.e., suspended sediment concentration and water discharge). However, the standard rating curve approach assumes a unique suspended sediment concentration for a given water discharge. This assumption is not valid in rivers where sediment supply varies enough to cause changes in particle size or changes in areal coverage of sediment on the bed; both of these changes cause variations in suspended sediment concentration for a given water discharge. More complex numerical models of hydraulics and morphodynamics have been developed to address such physical changes of the bed. This additional complexity comes at a cost in terms of computations as well as the type and amount of data required for model setup, calibration, and testing. Moreover, application of the resulting sediment-transport models may require observations of bed-sediment boundary conditions that require extensive (and expensive) observations or, alternatively, require the use of an additional model (subject to its own errors) merely to predict the bed-sediment boundary conditions for use by the transport model. In this paper we present a hybrid approach that combines aspects of the rating curve method and the more complex morphodynamic models. Our primary objective
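As a minimal sketch of the standard rating-curve approach that the hybrid method relaxes, the code below fits Cs = a * Q^b by least squares on log-transformed data, so that each water discharge maps to a single suspended-sediment concentration. The synthetic observations and parameter values are assumptions for illustration.

```python
import numpy as np

# Standard rating-curve approach: fit Cs = a * Q**b on log-transformed data,
# so a given water discharge always maps to one suspended-sediment concentration.
def fit_rating_curve(Q, Cs):
    b, log_a = np.polyfit(np.log(Q), np.log(Cs), 1)
    return np.exp(log_a), b

# Illustrative paired observations of discharge (m^3/s) and concentration (mg/L).
rng = np.random.default_rng(2)
Q = rng.uniform(50, 500, 100)
Cs = 0.02 * Q ** 1.6 * np.exp(rng.normal(0, 0.2, 100))

a, b = fit_rating_curve(Q, Cs)
print(f"Cs ~ {a:.3f} * Q^{b:.2f}")   # predicted concentration for any discharge
```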
A Flexible Web-Based Approach to Modeling Tandem Photocatalytic Devices
DEFF Research Database (Denmark)
Seger, Brian; Hansen, Ole; Vesborg, Peter Christian Kjærgaard
2017-01-01
There have been several works modeling the optimal band gaps for tandem photocatalytic water splitting devices under different assumptions. Due to the many parameters involved, it is impossible for the authors to consider every conceivable situation. In this work, we have developed a web-based model ... previous experimental photoelectrodes, and quantitatively relates their performance to what would typically be expected via modeling programs.
Leakage-Resilient Circuits without Computational Assumptions
DEFF Research Database (Denmark)
Dziembowski, Stefan; Faust, Sebastian
2012-01-01
Physical cryptographic devices inadvertently leak information through numerous side-channels. Such leakage is exploited by so-called side-channel attacks, which often allow for a complete security breach. A recent trend in cryptography is to propose formal models that incorporate leakage into the model and to construct schemes that are provably secure within them. We design a general compiler that transforms any cryptographic scheme, e.g., a block-cipher, into a functionally equivalent scheme which is resilient to any continual leakage, provided that the following three requirements are satisfied...
THE FAIRSHARES MODEL: AN ETHICAL APPROACH TO SOCIAL ENTERPRISE DEVELOPMENT?
Ridley-Duff, R.
2015-01-01
This paper is based on the keynote address to the 14th International Association of Public and Non-Profit Marketing (IAPNM) conference. It explores the question "What impact do ethical values in the FairShares Model have on social entrepreneurial behaviour?" In the first part, three broad approaches to social enterprise are set out: co-operative and mutual enterprises (CMEs), social and responsible businesses (SRBs) and charitable trading activities (CTAs). The ethics that guide each approach ...
Non-racemic mixture model: a computational approach.
Polanco, Carlos; Buhse, Thomas
2017-01-01
The behavior of a slight chiral bias in favor of l-amino acids over d-amino acids was studied in an evolutionary mathematical model generating mixed chiral peptide hexamers. The simulations aimed to reproduce a very generalized prebiotic scenario involving a specified pair of amino acid enantiomers and a possible asymmetric amplification through autocatalytic peptide self-replication while forming small multimers of a defined length. Our simplified model allowed the observation of a small but inconclusive upward tendency in the l-amino acid over the d-amino acid profile for the resulting mixed chiral hexamers in computer simulations of 100 peptide generations. This simulation was carried out by changing the chiral bias from 1% to 3%, in three stages of 15, 50 and 100 generations to observe any alteration that could mean a drastic change in behavior. So far, our simulations suggest that, under very slight non-racemic conditions, a significant bias between l- and d-amino acids, as present in our biosphere, was unlikely to have been generated under prebiotic conditions if autocatalytic peptide self-replication was the main or the only driving force of chiral auto-amplification.
A computational language approach to modeling prose recall in schizophrenia.
Rosenstein, Mark; Diaz-Asper, Catherine; Foltz, Peter W; Elvevåg, Brita
2014-06-01
Many cortical disorders are associated with memory problems. In schizophrenia, verbal memory deficits are a hallmark feature. However, the exact nature of this deficit remains elusive. Modeling aspects of language features used in memory recall have the potential to provide means for measuring these verbal processes. We employ computational language approaches to assess time-varying semantic and sequential properties of prose recall at various retrieval intervals (immediate, 30 min and 24 h later) in patients with schizophrenia, unaffected siblings and healthy unrelated control participants. First, we model the recall data to quantify the degradation of performance with increasing retrieval interval and the effect of diagnosis (i.e., group membership) on performance. Next we model the human scoring of recall performance using an n-gram language sequence technique, and then with a semantic feature based on Latent Semantic Analysis. These models show that automated analyses of the recalls can produce scores that accurately mimic human scoring. The final analysis addresses the validity of this approach by ascertaining the ability to predict group membership from models built on the two classes of language features. Taken individually, the semantic feature is most predictive, while a model combining the features improves accuracy of group membership prediction slightly above the semantic feature alone as well as over the human rating approach. We discuss the implications for cognitive neuroscience of such a computational approach in exploring the mechanisms of prose recall.
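A minimal sketch of the two classes of language features, assuming simple surrogates: an n-gram overlap score for the sequential feature and a bag-of-words cosine similarity standing in for the Latent Semantic Analysis feature (true LSA would first project word counts into a reduced semantic space learned from a large corpus). The toy story and recall are invented for illustration.

```python
from collections import Counter
from math import sqrt

def ngrams(tokens, n=2):
    return Counter(tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1))

def ngram_overlap(source, recall, n=2):
    """Fraction of the source's n-grams reproduced in the recall (sequence feature)."""
    s, r = ngrams(source, n), ngrams(recall, n)
    return sum((s & r).values()) / max(sum(s.values()), 1)

def cosine_similarity(source, recall):
    """Bag-of-words cosine; real LSA would compare vectors in a reduced
    semantic space rather than raw word counts."""
    s, r = Counter(source), Counter(recall)
    dot = sum(s[w] * r[w] for w in s)
    norm = sqrt(sum(v * v for v in s.values())) * sqrt(sum(v * v for v in r.values()))
    return dot / norm if norm else 0.0

story = "anna lived in a small house near the river with her brother".split()
recall = "anna lived near a river in a small house".split()
print(ngram_overlap(story, recall), cosine_similarity(story, recall))
```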
Intelligent Transportation and Evacuation Planning: A Modeling-Based Approach
Naser, Arab
2012-01-01
Intelligent Transportation and Evacuation Planning: A Modeling-Based Approach provides a new paradigm for evacuation planning strategies and techniques. Recently, evacuation planning and modeling have increasingly attracted interest among researchers as well as government officials. This interest stems from the recent catastrophic hurricanes and weather-related events that occurred in the southeastern United States (Hurricane Katrina and Rita). The evacuation methods that were in place before and during the hurricanes did not work well and resulted in thousands of deaths. This book offers insights into the methods and techniques that allow for implementing mathematical-based, simulation-based, and integrated optimization and simulation-based engineering approaches for evacuation planning. This book also: Comprehensively discusses the application of mathematical models for evacuation and intelligent transportation modeling Covers advanced methodologies in evacuation modeling and planning Discusses principles a...
Measuring sound absorption using local field assumptions
Kuipers, E.R.
2013-01-01
To more effectively apply acoustically absorbing materials, it is desirable to measure angle-dependent sound absorption coefficients, preferably in situ. Existing measurement methods are based on an overall model of the acoustic field in front of the absorber, and are therefore sensitive to
Public key cryptography from weaker assumptions
DEFF Research Database (Denmark)
Zottarel, Angela
This dissertation is focused on the construction of public key cryptographic primitives and on the relative security analysis in a meaningful theoretic model. This work takes two orthogonal directions. In the first part, we study cryptographic constructions preserving their security properties also...
A model selection approach to analysis of variance and covariance.
Alber, Susan A; Weiss, Robert E
2009-06-15
An alternative to analysis of variance is a model selection approach where every partition of the treatment means into clusters with equal value is treated as a separate model. The null hypothesis that all treatments are equal corresponds to the partition with all means in a single cluster. The alternative hypothesis corresponds to the set of all other partitions of treatment means. A model selection approach can also be used for a treatment by covariate interaction, where the null hypothesis and each alternative correspond to a partition of treatments into clusters with equal covariate effects. We extend the partition-as-model approach to simultaneous inference for both treatment main effect and treatment interaction with a continuous covariate with separate partitions for the intercepts and treatment-specific slopes. The model space is the Cartesian product of the intercept partition and the slope partition, and we develop five joint priors for this model space. In four of these priors the intercept and slope partitions are dependent. We advise on setting priors over models, and we use the model to analyze an orthodontic data set that compares the frictional resistance created by orthodontic fixtures. Copyright (c) 2009 John Wiley & Sons, Ltd.
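A minimal sketch of the partition-as-model idea, using an information-criterion (BIC) stand-in for the full Bayesian treatment with priors over partitions: every partition of the treatment labels defines a model with one mean per cluster, and the best-scoring partition is selected. The data and the Gaussian common-variance assumption are illustrative.

```python
import numpy as np

def partitions(items):
    """Generate all partitions of a list of labels (Bell-number many)."""
    if len(items) == 1:
        yield [items]
        return
    first, rest = items[0], items[1:]
    for smaller in partitions(rest):
        for i, block in enumerate(smaller):
            yield smaller[:i] + [[first] + block] + smaller[i + 1:]
        yield [[first]] + smaller

def bic_for_partition(part, data):
    """Gaussian model: one mean per cluster of treatments, common variance."""
    n = sum(len(v) for v in data.values())
    rss = 0.0
    for block in part:
        ys = np.concatenate([data[t] for t in block])
        rss += np.sum((ys - ys.mean()) ** 2)
    k = len(part) + 1                       # cluster means + variance
    return n * np.log(rss / n) + k * np.log(n)

# Hypothetical one-way layout with four treatments.
rng = np.random.default_rng(3)
data = {"A": rng.normal(0.0, 1, 20), "B": rng.normal(0.1, 1, 20),
        "C": rng.normal(1.5, 1, 20), "D": rng.normal(1.6, 1, 20)}

best = min(partitions(list(data)), key=lambda p: bic_for_partition(p, data))
print("selected clustering of treatment means:", best)
```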
Towards a whole-cell modeling approach for synthetic biology
Purcell, Oliver; Jain, Bonny; Karr, Jonathan R.; Covert, Markus W.; Lu, Timothy K.
2013-06-01
Despite rapid advances over the last decade, synthetic biology lacks the predictive tools needed to enable rational design. Unlike established engineering disciplines, the engineering of synthetic gene circuits still relies heavily on experimental trial-and-error, a time-consuming and inefficient process that slows down the biological design cycle. This reliance on experimental tuning is because current modeling approaches are unable to make reliable predictions about the in vivo behavior of synthetic circuits. A major reason for this lack of predictability is that current models view circuits in isolation, ignoring the vast number of complex cellular processes that impinge on the dynamics of the synthetic circuit and vice versa. To address this problem, we present a modeling approach for the design of synthetic circuits in the context of cellular networks. Using the recently published whole-cell model of Mycoplasma genitalium, we examined the effect of adding genes into the host genome. We also investigated how codon usage correlates with gene expression and find agreement with existing experimental results. Finally, we successfully implemented a synthetic Goodwin oscillator in the whole-cell model. We provide an updated software framework for the whole-cell model that lays the foundation for the integration of whole-cell models with synthetic gene circuit models. This software framework is made freely available to the community to enable future extensions. We envision that this approach will be critical to transforming the field of synthetic biology into a rational and predictive engineering discipline.
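As a small, self-contained illustration of the kind of synthetic circuit mentioned, the sketch below integrates a classic three-variable Goodwin oscillator on its own, outside the whole-cell model; the parameter values (including the high Hill coefficient needed for sustained oscillations) are assumed for illustration and are not taken from the whole-cell implementation.

```python
from scipy.integrate import solve_ivp

def goodwin(t, y, k1=1.0, k2=1.0, k3=1.0, d1=0.1, d2=0.1, d3=0.1, K=1.0, n=10):
    """Classic three-variable Goodwin oscillator: mRNA (x), enzyme (e), and
    end-product (z) that represses transcription through a Hill function."""
    x, e, z = y
    dx = k1 * K**n / (K**n + z**n) - d1 * x
    de = k2 * x - d2 * e
    dz = k3 * e - d3 * z
    return [dx, de, dz]

sol = solve_ivp(goodwin, (0, 400), [0.1, 0.2, 0.3], max_step=0.5)
print("final state:", sol.y[:, -1])   # sustained oscillations for sufficiently high n
```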
A transformation approach for collaboration based requirement models
Harbouche, Ahmed; Mokhtari, Aicha
2012-01-01
Distributed software engineering is widely recognized as a complex task. Among the inherent complexities is the process of obtaining a system design from its global requirement specification. This paper deals with such a transformation process and suggests an approach to derive the behavior of a given system's components, in the form of distributed Finite State Machines, from the global system requirements, in the form of an augmented UML Activity Diagrams notation. The process of the suggested approach is summarized in three steps: the definition of the appropriate source Meta-Model (requirements Meta-Model), the definition of the target Design Meta-Model and the definition of the rules to govern the transformation during the derivation process. The derivation process transforms the global system requirements described as UML activity diagrams (extended with collaborations) into system roles' behaviors represented as UML finite state machines. The approach is implemented using the Atlas Transformation Language (ATL).
A TRANSFORMATION APPROACH FOR COLLABORATION BASED REQUIREMENT MODELS
Directory of Open Access Journals (Sweden)
Ahmed Harbouche
2012-02-01
Full Text Available Distributed software engineering is widely recognized as a complex task. Among the inherent complexities is the process of obtaining a system design from its global requirement specification. This paper deals with such a transformation process and suggests an approach to derive the behavior of a given system's components, in the form of distributed Finite State Machines, from the global system requirements, in the form of an augmented UML Activity Diagrams notation. The process of the suggested approach is summarized in three steps: the definition of the appropriate source Meta-Model (requirements Meta-Model), the definition of the target Design Meta-Model and the definition of the rules to govern the transformation during the derivation process. The derivation process transforms the global system requirements described as UML activity diagrams (extended with collaborations) into system roles' behaviors represented as UML finite state machines. The approach is implemented using the Atlas Transformation Language (ATL).
Loss Modeling with a Data-Driven Approach in Event-Based Rainfall-Runoff Analysis
Chua, L. H. C.
2012-04-01
Mathematical models require the estimation of rainfall abstractions for accurate predictions of runoff. Although loss models such as the constant loss and exponential loss models are commonly used, these methods are based on simplified assumptions of the physical process. A new approach based on the data driven paradigm to estimate rainfall abstractions is proposed in this paper. The proposed data driven model, based on the artificial neural network (ANN) does not make any assumptions on the loss behavior. The estimated discharge from a physically-based model, obtained from the kinematic wave (KW) model assuming zero losses, was used as the only input to the ANN. The output is the measured discharge. Thus, the ANN functions as a black-box loss model. Two sets of data were analyzed for this study. The first dataset consists of rainfall and runoff data, measured from an artificial catchment (area = 25 m2) comprising two overland planes (slope = 11%), 25m long, transversely inclined towards a rectangular channel (slope = 2%) which conveyed the flow, recorded using calibrated weigh tanks, to the outlet. Two rain gauges, each placed 6.25 m from either ends of the channel, were used to record rainfall. Data for six storm events over the period between October 2002 and December 2002 were analyzed. The second dataset was obtained from the Upper Bukit Timah catchment (area = 6.4 km2) instrumented with two rain gauges and a flow measuring station. A total of six events recorded between November 1987 and July 1988 were selected for this study. The runoff predicted by the ANN was compared with the measured runoff. In addition, results from KW models developed for both the catchments were used as a benchmark. The KW models were calibrated assuming the loss rate for an average event for each of the datasets. The results from both the ANN and KW models agreed well with the runoff measured from the artificial catchment. The KW model is expected to perform well since the catchment
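A minimal sketch of the black-box loss model, assuming synthetic data in place of the weigh-tank and gauge records: a small multilayer perceptron is trained to map the zero-loss kinematic-wave discharge to the observed discharge. The network size, the assumed loss behaviour and the noise level are illustrative choices, not those of the study.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

# The kinematic-wave model run with zero losses over-predicts discharge; the ANN
# maps that zero-loss discharge to the observed discharge, acting as a black-box
# loss model. The synthetic data below stand in for the measured records.
rng = np.random.default_rng(4)
q_kw = np.sort(rng.uniform(0.0, 5.0, 300))                # zero-loss KW discharge (L/s)
q_obs = 0.8 * q_kw - 0.3 * (1 - np.exp(-q_kw))            # assumed loss behaviour
q_obs += rng.normal(0, 0.02, q_kw.size)

ann = MLPRegressor(hidden_layer_sizes=(10,), max_iter=5000, random_state=0)
ann.fit(q_kw.reshape(-1, 1), q_obs)

q_test = np.array([[0.5], [2.0], [4.0]])
print("predicted runoff:", ann.predict(q_test))
```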
Challenging the assumptions for thermal sensation scales
DEFF Research Database (Denmark)
Schweiker, Marcel; Fuchs, Xaver; Becker, Susanne
2016-01-01
Scales are widely used to assess the personal experience of thermal conditions in built environments. Most commonly, thermal sensation is assessed, mainly to determine whether a particular thermal condition is comfortable for individuals. A seven-point thermal sensation scale has been used extensively, which is suitable for describing a one-dimensional relationship between physical parameters of indoor environments and subjective thermal sensation. However, human thermal comfort is not merely a physiological but also a psychological phenomenon. Thus, it should be investigated how scales for its ... the complex relationships between physical parameters, subjective perceptions and measurement-related issues need to be addressed by the field of thermal comfort and new approaches developed.
An algebraic approach to modeling in software engineering
Energy Technology Data Exchange (ETDEWEB)
Loegel, G.J. [Superconducting Super Collider Lab., Dallas, TX (United States)]|[Michigan Univ., Ann Arbor, MI (United States); Ravishankar, C.V. [Michigan Univ., Ann Arbor, MI (United States)
1993-09-01
Our work couples the formalism of universal algebras with the engineering techniques of mathematical modeling to develop a new approach to the software engineering process. Our purpose in using this combination is twofold. First, abstract data types and their specification using universal algebras can be considered a common point between the practical requirements of software engineering and the formal specification of software systems. Second, mathematical modeling principles provide us with a means for effectively analyzing real-world systems. We first use modeling techniques to analyze a system and then represent the analysis using universal algebras. The rest of the software engineering process exploits properties of universal algebras that preserve the structure of our original model. This paper describes our software engineering process and our experience using it on both research and commercial systems. We need a new approach because current software engineering practices often deliver software that is difficult to develop and maintain. Formal software engineering approaches use universal algebras to describe "computer science" objects like abstract data types, but in practice software errors are often caused because "real-world" objects are improperly modeled. There is a large semantic gap between the customer's objects and abstract data types. In contrast, mathematical modeling uses engineering techniques to construct valid models for real-world systems, but these models are often implemented in an ad hoc manner. A combination of the best features of both approaches would enable software engineering to formally specify and develop software systems that better model real systems. Software engineering, like mathematical modeling, should concern itself first and foremost with understanding a real system and its behavior under given circumstances, and then with expressing this knowledge in an executable form.
DISTRIBUTED APPROACH to WEB PAGE CATEGORIZATION USING MAPREDUCE PROGRAMMING MODEL
Directory of Open Access Journals (Sweden)
P.Malarvizhi
2011-12-01
Full Text Available The web is a large repository of information and to facilitate the search and retrieval of pages from it, categorization of web documents is essential. An effective means to handle the complexity of information retrieval from the internet is through automatic classification of web pages. Although lots of automatic classification algorithms and systems have been presented, most of the existing approaches are computationally challenging. In order to overcome this challenge, we have proposed a parallel algorithm, based on the MapReduce programming model, to automatically categorize web pages. This approach incorporates three concepts: a web crawler, the MapReduce programming model, and the proposed web page categorization approach. Initially, we utilize a web crawler to mine the World Wide Web, and the crawled web pages are then directly given as input to the MapReduce programming model. Here, the MapReduce programming model, adapted to our proposed web page categorization approach, finds the appropriate category of a web page according to its content. The experimental results show that our proposed parallel web page categorization approach achieves satisfactory results in finding the right category for any given web page.
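A minimal, single-machine sketch of the map and reduce steps, assuming a hypothetical keyword lexicon as the categorization rule; in the approach described, these phases would run in parallel on a MapReduce cluster over crawled pages, and the real categorizer would use richer page-content features rather than a hand-written lexicon.

```python
from collections import defaultdict

# Hypothetical keyword lexicon used by the categorizer (assumption for illustration).
LEXICON = {
    "sport": {"match", "league", "score"},
    "science": {"experiment", "theory", "research"},
}

def map_phase(url, text):
    """Mapper: emit (category, url) pairs based on keyword hits in the page text."""
    words = set(text.lower().split())
    for category, keywords in LEXICON.items():
        if words & keywords:
            yield category, url

def reduce_phase(mapped):
    """Reducer: group URLs by category."""
    grouped = defaultdict(list)
    for category, url in mapped:
        grouped[category].append(url)
    return dict(grouped)

crawled = {
    "http://example.org/a": "the league match ended with a record score",
    "http://example.org/b": "a new theory was tested in a controlled experiment",
}
pairs = [kv for url, text in crawled.items() for kv in map_phase(url, text)]
print(reduce_phase(pairs))
```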
Energy Technology Data Exchange (ETDEWEB)
Bhaduri, Budhendra L [ORNL; Liu, Cheng [ORNL; Nutaro, James J [ORNL; Patterson, Lauren A [ORNL
2008-01-01
Numerous socio-environmental studies, including those in public health, utilize population data as one of the essential elements of modeling and analysis. Typically population data are reported by administrative or accounting units. For example, in the US the Census Bureau reports population counts by census blocks, block groups, and tracts. At any resolution, a uniform population distribution is assumed and the population figures and demographic characteristics are typically associated with block (polygon) centroids. In geographic analyses these points are considered representative of the population for census polygons. Traditional spatial modeling approaches commonly include intersection of census data with buffers of influence to quantify target population, using either inclusion-exclusion (of the centroids) or the area-weighted population estimation methods. However, it is well understood that uniform population distribution is the weakest assumption, and that treating census polygon centroids as representative of the population makes analytical approaches likely to overestimate or underestimate the results. Given that population is spatially restricted by Census accounting units (such as blocks), there often is great uncertainty about the spatial distribution of residents within those accounting units. This is particularly pronounced in suburban and rural areas, where the population is dispersed to a greater degree than in urban areas. Because of this uncertainty, there is significant potential to misclassify people with respect to their location relative to pollution sources, and consequently it becomes challenging to determine if certain sub-populations are actually more likely than others to receive differential environmental exposure. In this paper, we describe the development and utilization of a high-resolution, demographic-data-driven approach for modeling and simulation at Oak Ridge National Laboratory.
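The sketch below contrasts the centroid inclusion-exclusion estimate with the area-weighted estimate for a buffer of influence, using two hypothetical census blocks; both still rest on the uniform-distribution assumption that the high-resolution approach is designed to relax. Geometries and populations are invented for illustration.

```python
from shapely.geometry import Point, box

# Two hypothetical census blocks (unit squares) with their reported populations.
blocks = [
    {"geom": box(0, 0, 1, 1), "pop": 400},
    {"geom": box(1, 0, 2, 1), "pop": 100},
]
buffer_zone = Point(1.0, 0.5).buffer(0.4)   # zone of influence around a source

# Centroid method: a block counts entirely in or out of the buffer.
centroid_estimate = sum(b["pop"] for b in blocks
                        if buffer_zone.contains(b["geom"].centroid))

# Area-weighted method: population scaled by the overlapping fraction of each block.
area_weighted = sum(b["pop"] * b["geom"].intersection(buffer_zone).area / b["geom"].area
                    for b in blocks)

print(centroid_estimate, round(area_weighted, 1))
```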
Kirkwood, Adrian; Price, Linda
2013-01-01
This paper examines assumptions and beliefs underpinning research into educational technology. It critically reviews some approaches used to investigate the impact of technologies for teaching and learning. It focuses on comparative studies, performance comparisons and attitudinal studies to illustrate how under-examined assumptions lead to…
Ehren, Melanie C. M.; Leeuw, Frans L.; Scheerens, Jaap
2001-01-01
This article uses a policy-scientific approach to reconstruct assumptions underlying the Dutch Educational Supervision Act. We show an example of how to reconstruct and evaluate a program theory that is based on legislation of inspection. The assumptions explain how inspection leads to school improvement.
Teaching Service Modelling to a Mixed Class: An Integrated Approach
Directory of Open Access Journals (Sweden)
Jeremiah D. DENG
2015-04-01
Full Text Available Service modelling has become an increasingly important area in today's telecommunications and information systems practice. We have adapted a Network Design course in order to teach service modelling to a mixed class of students from both telecommunication engineering and information systems backgrounds. An integrated approach has been developed that combines mathematics teaching with strategies such as problem-solving, visualization, and the use of examples and simulations. Assessment of student learning outcomes indicates that the proposed course delivery approach succeeded in bringing out comparable and satisfactory performance from students of different educational backgrounds.
Assumptions of Customer Knowledge Enablement in the Open Innovation Process
Directory of Open Access Journals (Sweden)
Jokubauskienė Raminta
2017-08-01
Full Text Available In the scientific literature, open innovation is one of the most effective means to innovate and gain a competitive advantage. In practice, there is a variety of open innovation activities, but, nevertheless, customers stand as the cornerstone in this area, since the customers’ knowledge is one of the most important sources of new knowledge and ideas. When evaluating the context in which open innovation and customer knowledge enablement interact, it is necessary to take into account the importance of customer knowledge management. It is increasingly highlighted that customer knowledge management facilitates the creation of innovations. However, other factors that influence open innovation and, at the same time, customer knowledge management should also be examined. This article presents a theoretical model, which reveals the assumptions of the open innovation process and their impact on the firm’s performance.
Experimental assessment of unvalidated assumptions in classical plasticity theory.
Energy Technology Data Exchange (ETDEWEB)
Brannon, Rebecca Moss (University of Utah, Salt Lake City, UT); Burghardt, Jeffrey A. (University of Utah, Salt Lake City, UT); Bauer, Stephen J.; Bronowski, David R.
2009-01-01
This report investigates the validity of several key assumptions in classical plasticity theory regarding material response to changes in the loading direction. Three metals, two rock types, and one ceramic were subjected to non-standard loading directions, and the resulting strain response increments were displayed in Gudehus diagrams to illustrate the approximation error of classical plasticity theories. A rigorous mathematical framework for fitting classical theories to the data, thus quantifying the error, is provided. Further data analysis techniques are presented that allow testing for the effect of changes in loading direction without having to use a new sample and for inferring the yield normal and flow directions without having to measure the yield surface. Though the data are inconclusive, there is indication that classical, incrementally linear, plasticity theory may be inadequate over a certain range of loading directions. This range of loading directions also coincides with loading directions that are known to produce a physically inadmissible instability for any nonassociative plasticity model.