WorldWideScience

Sample records for models variables defined

  1. Defining a Family of Cognitive Diagnosis Models Using Log-Linear Models with Latent Variables

    Science.gov (United States)

    Henson, Robert A.; Templin, Jonathan L.; Willse, John T.

    2009-01-01

    This paper uses log-linear models with latent variables (Hagenaars, in "Loglinear Models with Latent Variables," 1993) to define a family of cognitive diagnosis models. In doing so, the relationship between many common models is explicitly defined and discussed. In addition, because the log-linear model with latent variables is a general model for…
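
    As a worked equation (illustrative notation assumed here, not quoted from the paper): for item j and binary attribute profile \alpha, a log-linear cognitive diagnosis model specifies the item response probability

        P(X_j = 1 \mid \alpha) = \frac{\exp\left(\lambda_{j,0} + \lambda_j^{\top} h(\alpha)\right)}{1 + \exp\left(\lambda_{j,0} + \lambda_j^{\top} h(\alpha)\right)}

    where h(\alpha) collects main effects and interactions among the attributes; constraining subsets of the \lambda terms to zero is what recovers the common special-case models the abstract refers to.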

  2. A Core Language for Separate Variability Modeling

    DEFF Research Database (Denmark)

    Iosif-Lazăr, Alexandru Florin; Wasowski, Andrzej; Schaefer, Ina

    2014-01-01

    Separate variability modeling adds variability to a modeling language without requiring modifications of the language or the supporting tools. We define a core language for separate variability modeling using a single kind of variation point to define transformations of software artifacts in object… hierarchical dependencies between variation points via copying and flattening. Thus, we reduce a model with intricate dependencies to a flat executable model transformation consisting of simple unconditional local variation points. The core semantics is extremely concise: it boils down to two operational rules…

  3. Strong influence of variable treatment on the performance of numerically defined ecological regions.

    Science.gov (United States)

    Snelder, Ton; Lehmann, Anthony; Lamouroux, Nicolas; Leathwick, John; Allenbach, Karin

    2009-10-01

    Numerical clustering has frequently been used to define hierarchically organized ecological regionalizations, but there has been little robust evaluation of their performance (i.e., the degree to which regions discriminate areas with similar ecological character). In this study we investigated the effect of the weighting and treatment of input variables on the performance of regionalizations defined by agglomerative clustering across a range of hierarchical levels. For this purpose, we developed three ecological regionalizations of Switzerland of increasing complexity using agglomerative clustering. Environmental data for our analysis were drawn from a 400 m grid and consisted of estimates of 11 environmental variables for each grid cell describing climate, topography and lithology. Regionalization 1 was defined from the environmental variables which were given equal weights. We used the same variables in Regionalization 2 but weighted and transformed them on the basis of a dissimilarity model that was fitted to land cover composition data derived for a random sample of cells from interpretation of aerial photographs. Regionalization 3 was a further two-stage development of Regionalization 2 where specific classifications, also weighted and transformed using dissimilarity models, were applied to 25 small scale "sub-domains" defined by Regionalization 2. Performance was assessed in terms of the discrimination of land cover composition for an independent set of sites using classification strength (CS), which measured the similarity of land cover composition within classes and the dissimilarity between classes. Regionalization 2 performed significantly better than Regionalization 1, but the largest gains in performance, compared to Regionalization 1, occurred at coarse hierarchical levels (i.e., CS did not increase significantly beyond the 25-region level). Regionalization 3 performed better than Regionalization 2 beyond the 25-region level and CS values continued to

  4. Variable Bandwidth Analog Channel Filters for Software Defined Radio

    NARCIS (Netherlands)

    Arkesteijn, V.J.; Klumperink, Eric A.M.; Nauta, Bram

    2001-01-01

    An important aspect of Software Defined Radio is the ability to define the bandwidth of the filter that selects the desired channel. This paper first explains the importance of channel filtering. Then the advantage of analog channel filtering with a variable bandwidth in a Software Defined Radio is

  5. Bayesian modeling of measurement error in predictor variables

    NARCIS (Netherlands)

    Fox, Gerardus J.A.; Glas, Cornelis A.W.

    2003-01-01

    It is shown that measurement error in predictor variables can be modeled using item response theory (IRT). The predictor variables, which may be defined at any level of a hierarchical regression model, are treated as latent variables. The normal ogive model is used to describe the relation between
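
    A one-line sketch of the measurement part (standard IRT notation, assumed rather than copied from the paper): the normal ogive model links item response k of subject i to the latent predictor \theta_i via

        P(Y_{ik} = 1 \mid \theta_i) = \Phi(a_k \theta_i - b_k)

    where \Phi is the standard normal distribution function and a_k, b_k are item discrimination and difficulty; the latent \theta_i then enters the hierarchical regression in place of the error-prone observed score.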

  6. High-Q Variable Bandwidth Passive Filters for Software Defined Radio

    NARCIS (Netherlands)

    Arkesteijn, V.J.; Klumperink, Eric A.M.; Nauta, Bram

    2001-01-01

    An important aspect of Software Defined Radio is the ability to define the bandwidth of the filter that selects the desired channel. This paper describes a technique for channel filtering, in which two passive filters are combined to obtain a variable bandwidth. Passive filters have the advantage of

  7. On the Use of Variability Operations in the V-Modell XT Software Process Line

    DEFF Research Database (Denmark)

    Kuhrmann, Marco; Méndez Fernández, Daniel; Ternité, Thomas

    2016-01-01

    Software process lines provide a systematic approach to develop and manage software processes. It defines a reference process containing general process assets, whereas a well-defined customization approach allows process engineers to create new process variants, e.g., by extending or modifying… In this article, we present a study on the feasibility of variability operations to support the development of software process lines in the context of the V-Modell XT. We analyze which variability operations are defined and practically used. We provide an initial catalog of variability operations… as an improvement proposal for other process models. Our findings show that 69 variability operation types are defined across several metamodel versions of which, however, 25 remain unused. The found variability operations allow for systematically modifying the content of process model elements and the process…

  8. Can Geostatistical Models Represent Nature's Variability? An Analysis Using Flume Experiments

    Science.gov (United States)

    Scheidt, C.; Fernandes, A. M.; Paola, C.; Caers, J.

    2015-12-01

    The lack of understanding of the Earth's geological and physical processes governing sediment deposition renders subsurface modeling subject to large uncertainty. Geostatistics is often used to model uncertainty because of its capability to stochastically generate spatially varying realizations of the subsurface. These methods can generate a range of realizations of a given pattern - but how representative are these of the full natural variability? And how can we identify the minimum set of images that represent this natural variability? Here we use this minimum set to define the geostatistical prior model: a set of training images that represent the range of patterns generated by autogenic variability in the sedimentary environment under study. The proper definition of the prior model is essential in capturing the variability of the depositional patterns. This work starts with a set of overhead images from an experimental basin that showed ongoing autogenic variability. We use the images to analyze the essential characteristics of this suite of patterns. In particular, our goal is to define a prior model (a minimal set of selected training images) such that geostatistical algorithms, when applied to this set, can reproduce the full measured variability. A necessary prerequisite is to define a measure of variability. In this study, we measure variability using a dissimilarity distance between the images. The distance indicates whether two snapshots contain similar depositional patterns. To reproduce the variability in the images, we apply an MPS algorithm to the set of selected snapshots of the sedimentary basin that serve as training images. The training images are chosen from among the initial set by using the distance measure to ensure that only dissimilar images are chosen. Preliminary investigations show that MPS can reproduce fairly accurately the natural variability of the experimental depositional system. Furthermore, the selected training images provide
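
    The selection step can be sketched in a few lines. The farthest-point heuristic below is my own minimal stand-in, assuming only that some image dissimilarity function is available; the abstract does not specify the selection algorithm or the distance actually used.

        import numpy as np

        def select_training_images(images, distance, k, first=0):
            """Greedily pick k mutually dissimilar snapshots as training images."""
            chosen = [first]
            while len(chosen) < k:
                # distance from each candidate to its nearest already-chosen image
                d_min = [min(distance(images[i], images[j]) for j in chosen)
                         for i in range(len(images))]
                chosen.append(int(np.argmax(d_min)))  # farthest-point pick
            return chosen

        # usage sketch: overhead snapshots as arrays, mean absolute difference
        imgs = [np.random.rand(64, 64) for _ in range(100)]
        dist = lambda a, b: float(np.abs(a - b).mean())
        print(select_training_images(imgs, dist, k=5))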

  9. Higher-dimensional cosmological model with variable gravitational ...

    Indian Academy of Sciences (India)

    variable G and bulk viscosity in Lyra geometry. Exact solutions for … a comparative study of Robertson–Walker models with a constant deceleration … where H is defined as H = (Ȧ/A) + (1/3)(Ḃ/B), and β₀, H₀ represent the present values of β …

  10. Latent variable modeling (建立隐性变量模型)

    Institute of Scientific and Technical Information of China (English)

    蔡力

    2012-01-01

    A latent variable model, as the name suggests, is a statistical model that contains latent, that is, unobserved, variables. Their roots go back to Spearman's 1904 seminal work [1] on factor analysis, which is arguably the first well-articulated latent variable model to be widely used in psychology, mental health research, and allied disciplines. Because of the association of factor analysis with early studies of human intelligence, the fact that key variables in a statistical model are, on occasion, unobserved has been a point of lingering contention and controversy. The reader is assured, however, that a latent variable, defined in the broadest manner, is no more mysterious than an error term in a normal theory linear regression model or a random effect in a mixed model.

  11. Two-Part Models for Fractional Responses Defined as Ratios of Integers

    Directory of Open Access Journals (Sweden)

    Harald Oberhofer

    2014-09-01

    This paper discusses two alternative two-part models for fractional response variables that are defined as ratios of integers. The first two-part model assumes a Binomial distribution and known group size. It nests the one-part fractional response model proposed by Papke and Wooldridge (1996) and, thus, allows one to apply Wald, LM and/or LR tests in order to discriminate between the two models. The second model extends the first one by allowing for overdispersion in the data. We demonstrate the usefulness of the proposed two-part models for data on the 401(k) pension plan participation rates used in Papke and Wooldridge (1996).
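
    A hedged sketch of the two-part structure in Python with statsmodels (toy data; this illustrates the model form, not the authors' implementation or their test statistics):

        import numpy as np
        import statsmodels.api as sm

        rng = np.random.default_rng(1)
        n = rng.integers(5, 50, size=500)                      # known group sizes
        x = rng.normal(size=500)
        y = rng.binomial(n, 1 / (1 + np.exp(0.5 - 0.8 * x)))   # participation counts
        X = sm.add_constant(x)

        # part 1: binary model for zero vs. positive participation ratios
        part1 = sm.Logit((y > 0).astype(float), X).fit(disp=0)

        # part 2: Binomial GLM for the positive ratios, using the known group size
        pos = y > 0
        part2 = sm.GLM(np.column_stack([y[pos], n[pos] - y[pos]]), X[pos],
                       family=sm.families.Binomial()).fit()
        print(part1.params, part2.params)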

  12. Spatial variability and parametric uncertainty in performance assessment models

    International Nuclear Information System (INIS)

    Pensado, Osvaldo; Mancillas, James; Painter, Scott; Tomishima, Yasuo

    2011-01-01

    The problem of defining an appropriate treatment of distribution functions (which could represent spatial variability or parametric uncertainty) is examined based on a generic performance assessment model for a high-level waste repository. The generic model incorporated source term models available in GoldSim®, the TDRW code for contaminant transport in sparse fracture networks with a complex fracture-matrix interaction process, and a biosphere dose model known as BDOSE™. Using the GoldSim framework, several Monte Carlo sampling approaches and transport conceptualizations were evaluated to explore the effect of various treatments of spatial variability and parametric uncertainty on dose estimates. Results from a model employing a representative source and ensemble-averaged pathway properties were compared to results from a model allowing for stochastic variation of transport properties along streamline segments (i.e., explicit representation of spatial variability within a Monte Carlo realization). We concluded that the sampling approach and the definition of an ensemble representative do influence consequence estimates. In the examples analyzed in this paper, approaches considering limited variability of a transport resistance parameter along a streamline increased the frequency of fast pathways resulting in relatively high dose estimates, while those allowing for broad variability along streamlines increased the frequency of 'bottlenecks' reducing dose estimates. On this basis, simplified approaches with limited consideration of variability may suffice for intended uses of the performance assessment model, such as evaluation of site safety. (author)
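
    The contrast between the two treatments can be mimicked with a toy Monte Carlo (entirely illustrative; the lognormal resistance and the dose surrogate are assumptions of mine, not the GoldSim/TDRW setup):

        import numpy as np

        rng = np.random.default_rng(42)
        n_real, n_seg = 10_000, 20

        # (a) one ensemble-representative resistance per realization
        r_a = np.repeat(np.exp(rng.normal(size=(n_real, 1))), n_seg, axis=1)
        # (b) independent resistance on every streamline segment
        r_b = np.exp(rng.normal(size=(n_real, n_seg)))

        dose_a = 1.0 / r_a.sum(axis=1)   # toy consequence measure
        dose_b = 1.0 / r_b.sum(axis=1)

        # per-segment sampling lets extremes average out along a streamline
        # ("bottlenecks"), thinning the high-dose tail relative to case (a)
        print(np.percentile(dose_a, 95), np.percentile(dose_b, 95))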

  13. A New Bi-Directional Projection Model Based on Pythagorean Uncertain Linguistic Variable

    OpenAIRE

    Huidong Wang; Shifan He; Xiaohong Pan

    2018-01-01

    To solve multi-attribute decision making (MADM) problems with Pythagorean uncertain linguistic variables, an extended bi-directional projection method is proposed. First, we utilize the linguistic scale function to convert the uncertain linguistic variables and subsequently provide a new projection model. Then, to depict the bi-directional projection method, the formative vectors of alternatives and ideal alternatives are defined. Furthermore, a comparative analysis with the projection model is co...

  14. Variable selection for modelling effects of eutrophication on stream and river ecosystems

    NARCIS (Netherlands)

    Nijboer, R.C.; Verdonschot, P.F.M.

    2004-01-01

    Models are needed for forecasting the effects of eutrophication on stream and river ecosystems. Most of the current models do not include differences in local stream characteristics and effects on the biota. To define the most important variables that should be used in a stream eutrophication model,

  15. Variable importance in latent variable regression models

    NARCIS (Netherlands)

    Kvalheim, O.M.; Arneberg, R.; Bleie, O.; Rajalahti, T.; Smilde, A.K.; Westerhuis, J.A.

    2014-01-01

    The quality and practical usefulness of a regression model are a function of both interpretability and prediction performance. This work presents some new graphical tools for improved interpretation of latent variable regression models that can also assist in improved algorithms for variable

  16. Inter-operator Variability in Defining Uterine Position Using Three-dimensional Ultrasound Imaging

    DEFF Research Database (Denmark)

    Baker, Mariwan; Jensen, Jørgen Arendt; Behrens, Claus F.

    2013-01-01

    In radiotherapy the treatment outcome of gynecological (GYN) cancer patients is crucially related to reproducibility of the actual uterine position. The purpose of this study is to evaluate the inter-operator variability in addressing uterine position using a novel 3-D ultrasound (US) system. The study is initiated by US-scanning of a uterine phantom (CIRS 404, Universal Medical, Norwood, USA) by seven experienced US operators. The phantom represents a female pelvic region, containing a uterus, bladder and rectal landmarks readily definable in the acquired US-scans. The organs are subjected… significantly larger inter-fractional uterine positional displacement, in some cases up to 20 mm, which outweighs the magnitude of current inter-operator variations. Thus, the current US-phantom-study suggests that the inter-operator variability in addressing uterine position is clinically irrelevant.

  17. High Variability Is a Defining Component of Mediterranean-Climate Rivers and Their Biota

    Directory of Open Access Journals (Sweden)

    Núria Cid

    2017-01-01

    Variability in flow as a result of seasonal precipitation patterns is a defining element of streams and rivers in Mediterranean-climate regions of the world and strongly influences the biota of these unique systems. Mediterranean-climate areas include the Mediterranean Basin and parts of Australia, California, Chile, and South Africa. Mediterranean streams and rivers can range from wet winters and consequent floods to severe droughts, when intermittency in otherwise perennial systems can occur. Inter-annual variation in precipitation can include multi-year droughts or consecutive wet years. Spatial variation in patterns of precipitation (rain vs. snow) combined with topographic variability leads to spatial variability in hydrologic patterns that influence populations and communities. Mediterranean streams and rivers are global biodiversity hotspots and are particularly vulnerable to human impacts. Biomonitoring, conservation efforts, and management responses to climate change require approaches that account for spatial and temporal variability (including both intra- and inter-annual). The importance of long-term data sets for understanding and managing these systems highlights the need for sustained and coordinated research efforts in Mediterranean-climate streams and rivers.

  18. Defining generic architecture for Cloud IaaS provisioning model

    NARCIS (Netherlands)

    Demchenko, Y.; de Laat, C.; Mavrin, A.; Leymann, F.; Ivanov, I.; van Sinderen, M.; Shishkov, B.

    2011-01-01

    Infrastructure as a Service (IaaS) is one of the provisioning models for Clouds as defined in the NIST Clouds definition. Although widely used, current IaaS implementations and solutions do not have a common and well-defined architecture model. The paper attempts to define a generic architecture for

  19. On the "early-time" evolution of variables relevant to turbulence models for the Rayleigh-Taylor instability

    Energy Technology Data Exchange (ETDEWEB)

    Rollin, Bertrand [Los Alamos National Laboratory]; Andrews, Malcolm J. [Los Alamos National Laboratory]

    2010-01-01

    We present our progress toward setting initial conditions in variable density turbulence models. In particular, we concentrate our efforts on the BHR turbulence model for the turbulent Rayleigh-Taylor instability. Our approach is to predict profiles of relevant variables before the fully turbulent regime and use them as initial conditions for the turbulence model. We use an idealized model of mixing between two interpenetrating fluids to define the initial profiles for the turbulence model variables. Velocities and volume fractions used in the idealized mixing model are obtained, respectively, from a set of ordinary differential equations modeling the growth of the Rayleigh-Taylor instability and from an idealization of the density profile in the mixing layer. A comparison between predicted profiles for the turbulence model variables and profiles of the variables obtained from low Atwood number three-dimensional simulations shows reasonable agreement.
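
    The flavor of the ODE-based idealization can be conveyed with the classical self-similar mix-width law (a simple stand-in of mine; the paper's ODE set for the instability growth is more detailed):

        import numpy as np

        alpha, A, g = 0.06, 0.5, 9.81    # growth constant, Atwood number, gravity
        h, dt, t = 1e-4, 1e-4, 0.0
        while t < 0.5:                   # integrate dh/dt = 2*sqrt(alpha*A*g*h)
            h += 2.0 * np.sqrt(alpha * A * g * h) * dt
            t += dt

        # idealized linear heavy-fluid volume fraction across the layer [-h, h],
        # the kind of profile used to initialize the turbulence-model variables
        z = np.linspace(-h, h, 11)
        f_heavy = 0.5 * (1.0 - z / h)
        print(round(h, 4), f_heavy.round(2))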

  1. The Functional Segregation and Integration Model: Mixture Model Representations of Consistent and Variable Group-Level Connectivity in fMRI

    DEFF Research Database (Denmark)

    Churchill, Nathan William; Madsen, Kristoffer Hougaard; Mørup, Morten

    2016-01-01

    The brain consists of specialized cortical regions that exchange information between each other, reflecting a combination of segregated (local) and integrated (distributed) processes that define brain function. Functional magnetic resonance imaging (fMRI) is widely used to characterize… flexibility: they only estimate segregated structure and do not model interregional functional connectivity, nor do they account for network variability across voxels or between subjects. To address these issues, this letter develops the functional segregation and integration model (FSIM). This extension… brain regions where network expression predicts subject age in the experimental data. Thus, the FSIM is effective at summarizing functional connectivity structure in group-level fMRI, with applications in modeling the relationships between network variability and behavioral/demographic variables.

  2. 47 CFR 76.1904 - Encoding rules for defined business models.

    Science.gov (United States)

    2010-10-01

    47 CFR 76.1904 (revised as of 2010-10-01): Encoding rules for defined business models. (a) Commercial audiovisual content delivered as unencrypted broadcast television… the Commission pursuant to a petition with respect to a defined business model other than unencrypted…

  3. A New Bi-Directional Projection Model Based on Pythagorean Uncertain Linguistic Variable

    Directory of Open Access Journals (Sweden)

    Huidong Wang

    2018-04-01

    To solve multi-attribute decision making (MADM) problems with Pythagorean uncertain linguistic variables, an extended bi-directional projection method is proposed. First, we utilize the linguistic scale function to convert the uncertain linguistic variables and subsequently provide a new projection model. Then, to depict the bi-directional projection method, the formative vectors of alternatives and ideal alternatives are defined. Furthermore, a comparative analysis with the projection model is conducted to show the superiority of the bi-directional projection method. Finally, an example of a graduate's job option is given to demonstrate the effectiveness and feasibility of the proposed method.

  4. IN THE MAZE OF E-COMMERCE. ONLINE TRADE DEFINING VARIABLES IN ROMANIA

    Directory of Open Access Journals (Sweden)

    Erika KULCSÁR

    2017-05-01

    The number of articles dealing with the issue of online trade is significant at both the international and the national level. The main themes addressed in this article are the following: (a) the characteristics that define the segment of those who purchase via the Internet, (b) the influencing factors which play a crucial role in purchases made online, (c) the identification of those variables through which online consumer behavior can be studied, and (d) the advantages offered by the Internet, and therefore by online trade. The purpose of this article is to understand and know the buying habits of online customers. The main variables included in the analysis are the following: (1) type of customer, (2) customers' residency, (3) the day of the online order, (4) the time interval/time frame when the order was placed, (5) ordered brands, and (6) the average value of orders.

  5. Incorporating additional tree and environmental variables in a lodgepole pine stem profile model

    Science.gov (United States)

    John C. Byrne

    1993-01-01

    A new variable-form segmented stem profile model is developed for lodgepole pine (Pinus contorta) trees from the northern Rocky Mountains of the United States. I improved estimates of stem diameter by predicting two of the model coefficients with linear equations using a measure of tree form, defined as a ratio of dbh and total height. Additional improvements were...

  6. Hanford defined waste model limitations and improvements

    International Nuclear Information System (INIS)

    HARMSEN, R.W.

    1999-01-01

    Recommendation 93-5 Implementation Plan, Milestone 5.6.3.1.i, requires issuance of this report, which addresses "updates to the tank contents model". This report summarizes the review of the limitations of the Hanford Defined Waste model, Revision 4, and provides conclusions and recommendations for potential updates to the model.

  7. Defining Generic Architecture for Cloud Infrastructure as a Service model

    NARCIS (Netherlands)

    Demchenko, Y.; de Laat, C.

    2011-01-01

    Infrastructure as a Service (IaaS) is one of the provisioning models for Clouds as defined in the NIST Clouds definition. Although widely used, current IaaS implementations and solutions do not have a common and well-defined architecture model. The paper attempts to define a generic architecture for

  8. Using structural equation modeling to investigate relationships among ecological variables

    Science.gov (United States)

    Malaeb, Z.A.; Summers, J. Kevin; Pugesek, B.H.

    2000-01-01

    Structural equation modeling is an advanced multivariate statistical process with which a researcher can construct theoretical concepts, test their measurement reliability, hypothesize and test a theory about their relationships, take into account measurement errors, and consider both direct and indirect effects of variables on one another. Latent variables are theoretical concepts that unite phenomena under a single term, e.g., ecosystem health, environmental condition, and pollution (Bollen, 1989). Latent variables are not measured directly but can be expressed in terms of one or more directly measurable variables called indicators. For some researchers, defining, constructing, and examining the validity of latent variables may be an end in itself. For others, testing hypothesized relationships of latent variables may be of interest. We analyzed the correlation matrix of eleven environmental variables from the U.S. Environmental Protection Agency's (USEPA) Environmental Monitoring and Assessment Program for Estuaries (EMAP-E) using methods of structural equation modeling. We hypothesized and tested a conceptual model to characterize the interdependencies between four latent variables-sediment contamination, natural variability, biodiversity, and growth potential. In particular, we were interested in measuring the direct, indirect, and total effects of sediment contamination and natural variability on biodiversity and growth potential. The model fit the data well and accounted for 81% of the variability in biodiversity and 69% of the variability in growth potential. It revealed a positive total effect of natural variability on growth potential that otherwise would have been judged negative had we not considered indirect effects. That is, natural variability had a negative direct effect on growth potential of magnitude -0.3251 and a positive indirect effect mediated through biodiversity of magnitude 0.4509, yielding a net positive total effect of 0…
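
    The decomposition reported above is standard path arithmetic; written out with the values given in the abstract, total effect = direct effect + indirect effect = (-0.3251) + 0.4509 = 0.1258, which is the positive total effect that the truncated final sentence begins to quote.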

  9. Possibility/Necessity-Based Probabilistic Expectation Models for Linear Programming Problems with Discrete Fuzzy Random Variables

    Directory of Open Access Journals (Sweden)

    Hideki Katagiri

    2017-10-01

    This paper considers linear programming problems (LPPs) where the objective functions involve discrete fuzzy random variables (fuzzy set-valued discrete random variables). New decision making models, which are useful in fuzzy stochastic environments, are proposed based on both possibility theory and probability theory. In multi-objective cases, Pareto optimal solutions of the proposed models are newly defined. Computational algorithms for obtaining the Pareto optimal solutions of the proposed models are provided. It is shown that problems involving discrete fuzzy random variables can be transformed into deterministic nonlinear mathematical programming problems which can be solved through a conventional mathematical programming solver under practically reasonable assumptions. A numerical example of agriculture production problems is given to demonstrate the applicability of the proposed models to real-world problems in fuzzy stochastic environments.

  10. DEFINE: A Service-Oriented Dynamically Enabling Function Model

    Directory of Open Access Journals (Sweden)

    Tan Wei-Yi

    2017-01-01

    In this paper, we introduce an innovative Dynamically Enable Function In Network Equipment (DEFINE) model to allow tenants to get network services quickly. First, DEFINE decouples an application into different functional components, and connects these components in a reconfigurable way. Second, DEFINE provides a programmable interface to third parties, who can develop their own processing modules according to their own needs. To verify the effectiveness of this model, we set up an evaluation network with an FPGA-based OpenFlow switch prototype, and deployed several applications on it. Our results show that DEFINE has excellent flexibility and performance.

  11. Analyzing and leveraging self-similarity for variable resolution atmospheric models

    Science.gov (United States)

    O'Brien, Travis; Collins, William

    2015-04-01

    Variable resolution modeling techniques are rapidly becoming a popular strategy for achieving high resolution in a global atmospheric model without the computational cost of global high resolution. However, recent studies have demonstrated a variety of resolution-dependent, and seemingly artificial, features. We argue that the scaling properties of the atmosphere are key to understanding how the statistics of an atmospheric model should change with resolution. We provide two such examples. In the first example we show that the scaling properties of the cloud number distribution define how the ratio of resolved to unresolved clouds should increase with resolution. We show that the loss of resolved clouds in the high-resolution region of variable resolution simulations with the Community Atmosphere Model version 4 (CAM4) is an artifact of the model's treatment of condensed water (this artifact is significantly reduced in CAM5). In the second example we show that the scaling properties of the horizontal velocity field, combined with the incompressibility assumption, necessarily result in an intensification of vertical mass flux as resolution increases. We show that such an increase is present in a wide variety of models, including CAM and the regional climate models of the ENSEMBLES intercomparison. We present theoretical arguments linking this increase to the intensification of precipitation with increasing resolution.

  12. Pre-quantum mechanics. Introduction to models with hidden variables

    International Nuclear Information System (INIS)

    Grea, J.

    1976-01-01

    Within the context of formalisms of the hidden variable type, the author considers the models used to describe mechanical systems before the introduction of the quantum model. An account is given of the characteristics of the theoretical models and their relationships with experimental methodology. The models of analytical, pre-ergodic, stochastic and thermodynamic mechanics are studied in succession. At each stage the physical hypothesis is enunciated by a postulate corresponding to the type of description of the reality of the model. Starting from this postulate, the physical propositions which are meaningful for the model under consideration are defined and their logical structure is indicated. It is then found that on passing from one level of description to another, one can obtain successively Boolean lattices embedded in lattices of continuous geometric type, which are themselves embedded in Boolean lattices. It is therefore possible to envisage a more detailed description than that given by the quantum lattice and to construct it by analogy. (Auth.)

  13. Modeling Psychological Attributes in Psychology – An Epistemological Discussion: Network Analysis vs. Latent Variables

    Science.gov (United States)

    Guyon, Hervé; Falissard, Bruno; Kop, Jean-Luc

    2017-01-01

    Network Analysis is considered as a new method that challenges Latent Variable models in inferring psychological attributes. With Network Analysis, psychological attributes are derived from a complex system of components without the need to call on any latent variables. But the ontological status of psychological attributes is not adequately defined with Network Analysis, because a psychological attribute is both a complex system and a property emerging from this complex system. The aim of this article is to reappraise the legitimacy of latent variable models by engaging in an ontological and epistemological discussion on psychological attributes. Psychological attributes relate to the mental equilibrium of individuals embedded in their social interactions, as robust attractors within complex dynamic processes with emergent properties, distinct from physical entities located in precise areas of the brain. Latent variables thus possess legitimacy, because the emergent properties can be conceptualized and analyzed on the sole basis of their manifestations, without exploring the upstream complex system. However, in opposition with the usual Latent Variable models, this article is in favor of the integration of a dynamic system of manifestations. Latent Variables models and Network Analysis thus appear as complementary approaches. New approaches combining Latent Network Models and Network Residuals are certainly a promising new way to infer psychological attributes, placing psychological attributes in an inter-subjective dynamic approach. Pragmatism-realism appears as the epistemological framework required if we are to use latent variables as representations of psychological attributes. PMID:28572780

  14. Confounding of three binary-variables counterfactual model

    OpenAIRE

    Liu, Jingwei; Hu, Shuang

    2011-01-01

    Confounding in the three-binary-variables counterfactual model is discussed in this paper. According to the effect between the control variable and the covariate variable, we investigate three counterfactual models: the control variable is independent of the covariate variable, the control variable has an effect on the covariate variable, and the covariate variable affects the control variable. Using the ancillary information based on conditional independence hypotheses, the sufficient conditions...

  15. Development of a plug-in for Variability Modeling in Software Product Lines

    Directory of Open Access Journals (Sweden)

    María Lucía López-Araujo

    2012-03-01

    Software Product Lines (SPL) take economic advantage of the commonality and variability among a set of software systems that exist within a specific domain. Software Product Line Engineering therefore defines a series of processes for the development of an SPL that consider commonality and variability during the software life cycle. Variability modeling is thus an essential activity in a Software Product Line Engineering approach. There are several techniques for variability modeling. COVAMOF stands out among them, since it allows the modeling of variation points, variants and dependencies as first-class elements, providing a uniform manner of representing such concepts at the various levels of abstraction of an SPL. In order to take advantage of the benefits of COVAMOF, it is necessary to have a computer-aided tool; otherwise variability modeling and management can be an arduous task for the software engineer. This work presents the development of a COVAMOF plug-in for Eclipse.

  16. Straight line fitting and predictions: On a marginal likelihood approach to linear regression and errors-in-variables models

    Science.gov (United States)

    Christiansen, Bo

    2015-04-01

    Linear regression methods are without doubt the most used approaches to describe and predict data in the physical sciences. They are often good first order approximations and they are in general easier to apply and interpret than more advanced methods. However, even the properties of univariate regression can lead to debate over the appropriateness of various models as witnessed by the recent discussion about climate reconstruction methods. Before linear regression is applied important choices have to be made regarding the origins of the noise terms and regarding which of the two variables under consideration that should be treated as the independent variable. These decisions are often not easy to make but they may have a considerable impact on the results. We seek to give a unified probabilistic - Bayesian with flat priors - treatment of univariate linear regression and prediction by taking, as starting point, the general errors-in-variables model (Christiansen, J. Clim., 27, 2014-2031, 2014). Other versions of linear regression can be obtained as limits of this model. We derive the likelihood of the model parameters and predictands of the general errors-in-variables model by marginalizing over the nuisance parameters. The resulting likelihood is relatively simple and easy to analyze and calculate. The well known unidentifiability of the errors-in-variables model is manifested as the absence of a well-defined maximum in the likelihood. However, this does not mean that probabilistic inference can not be made; the marginal likelihoods of model parameters and the predictands have, in general, well-defined maxima. We also include a probabilistic version of classical calibration and show how it is related to the errors-in-variables model. The results are illustrated by an example from the coupling between the lower stratosphere and the troposphere in the Northern Hemisphere winter.
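
    A compact statement of the Gaussian special case (standard errors-in-variables notation, assumed from the abstract rather than copied from the paper): with latent true values x_i \sim N(\mu, \tau^2),

        Y_i = \alpha + \beta x_i + \varepsilon_i, \qquad X_i = x_i + \eta_i,

    and marginalizing over the nuisance x_i yields a bivariate normal likelihood for (X_i, Y_i) with mean (\mu, \alpha + \beta\mu) and covariance matrix

        \begin{pmatrix} \tau^2 + \sigma_\eta^2 & \beta\tau^2 \\ \beta\tau^2 & \beta^2\tau^2 + \sigma_\varepsilon^2 \end{pmatrix},

    a likelihood whose maxima over individual parameters can be well defined even though the full parameter set is not identifiable.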

  17. REDUCING PROCESS VARIABILITY BY USING DMAIC MODEL: A CASE STUDY IN BANGLADESH

    Directory of Open Access Journals (Sweden)

    Ripon Kumar Chakrabortty

    2013-03-01

    Nowadays many leading manufacturing industries have started to practice Six Sigma and Lean manufacturing concepts to boost their productivity as well as the quality of their products. In this paper, the Six Sigma approach has been used to reduce the process variability of a food processing industry in Bangladesh. The DMAIC (Define, Measure, Analyze, Improve, and Control) model has been used to implement the Six Sigma philosophy, and the five phases of the model have been structured step by step. Different tools from Total Quality Management, Statistical Quality Control and Lean Manufacturing, such as Quality Function Deployment, the p control chart, the fishbone diagram, the Analytical Hierarchy Process and Pareto analysis, have been used in different phases of the DMAIC model. The aim is to reduce process variability by identifying and removing the root causes of defects. The ultimate goal of this study is to make the process lean and increase its sigma level.
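
    Among the tools listed, the p control chart is the most mechanical; a minimal sketch of its standard 3-sigma limits (generic formulas and made-up counts, not the case-study data):

        import numpy as np

        def p_chart_limits(defectives, sample_sizes):
            d, n = np.asarray(defectives, float), np.asarray(sample_sizes, float)
            p_bar = d.sum() / n.sum()                  # overall defective rate
            se = np.sqrt(p_bar * (1.0 - p_bar) / n)    # per-sample standard error
            return p_bar, np.clip(p_bar - 3 * se, 0, 1), np.clip(p_bar + 3 * se, 0, 1)

        print(p_chart_limits([12, 9, 15, 7], [200, 180, 220, 190]))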

  18. Eutrophication Modeling Using Variable Chlorophyll Approach

    International Nuclear Information System (INIS)

    Abdolabadi, H.; Sarang, A.; Ardestani, M.; Mahjoobi, E.

    2016-01-01

    In this study, eutrophication was investigated in Lake Ontario to identify the interactions among effective drivers. The complexity of this phenomenon was modeled using a system dynamics approach based on a consideration of constant and variable stoichiometric ratios. The system dynamics approach is a powerful tool for developing object-oriented models to simulate complex phenomena that involve feedback effects. Utilizing stoichiometric ratios is a method for converting the concentrations of state variables. During the physical segmentation of the model, Lake Ontario was divided into two layers, i.e., the epilimnion and hypolimnion, and differential equations were developed for each layer. The model structure included 16 state variables related to phytoplankton, herbivorous zooplankton, carnivorous zooplankton, ammonium, nitrate, dissolved phosphorus, and particulate and dissolved carbon in the epilimnion and hypolimnion during a time horizon of one year. The results of several tests to verify the model, a Nash-Sutcliffe coefficient close to 1 (0.98), a data correlation coefficient of 0.98, and low standard errors (0.96), indicated the model's efficiency. The results revealed that there were significant differences in the concentrations of the state variables in constant and variable stoichiometry simulations. Consequently, the consideration of variable stoichiometric ratios in algae and nutrient concentration simulations may be applied in future modeling studies to enhance the accuracy of the results and reduce the likelihood of inefficient control policies.

  19. Variables and equations in hybrid systems with structural changes

    NARCIS (Netherlands)

    Beek, van D.A.

    2001-01-01

    In many models of physical systems, structural changes are common. Such structural changes may cause a variable to change from a differential variable to an algebraic variable, or to a variable that is not defined by an equation at all. Most hybrid modelling languages either restrict the kind of

  20. Variable selection and model choice in geoadditive regression models.

    Science.gov (United States)

    Kneib, Thomas; Hothorn, Torsten; Tutz, Gerhard

    2009-06-01

    Model choice and variable selection are issues of major concern in practical regression analyses, arising in many biometric applications such as habitat suitability analyses, where the aim is to identify the influence of potentially many environmental conditions on certain species. We describe regression models for breeding bird communities that facilitate both model choice and variable selection, by a boosting algorithm that works within a class of geoadditive regression models comprising spatial effects, nonparametric effects of continuous covariates, interaction surfaces, and varying coefficients. The major modeling components are penalized splines and their bivariate tensor product extensions. All smooth model terms are represented as the sum of a parametric component and a smooth component with one degree of freedom to obtain a fair comparison between the model terms. A generic representation of the geoadditive model allows us to devise a general boosting algorithm that automatically performs model choice and variable selection.
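
    The way boosting performs model choice can be seen in a toy componentwise version (plain linear base-learners stand in for the paper's penalized splines and tensor products; a sketch, not the authors' algorithm):

        import numpy as np

        def componentwise_l2_boost(X, y, n_iter=200, nu=0.1):
            """Fit every candidate component to the residuals at each step and
            update only the best one; covariates never selected keep a zero
            coefficient, so boosting doubles as variable selection."""
            coef = np.zeros(X.shape[1])
            resid = y - y.mean()
            for _ in range(n_iter):
                fits = [(X[:, j] @ resid) / (X[:, j] @ X[:, j])
                        for j in range(X.shape[1])]
                sse = [np.sum((resid - b * X[:, j]) ** 2)
                       for j, b in enumerate(fits)]
                j = int(np.argmin(sse))
                coef[j] += nu * fits[j]
                resid -= nu * fits[j] * X[:, j]
            return coef

        rng = np.random.default_rng(0)
        X = rng.normal(size=(200, 10))
        y = 2 * X[:, 0] - X[:, 3] + rng.normal(size=200)
        print(componentwise_l2_boost(X, y).round(2))   # nonzero mostly at 0 and 3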

  1. Hierarchical Bayesian nonparametric mixture models for clustering with variable relevance determination.

    Science.gov (United States)

    Yau, Christopher; Holmes, Chris

    2011-07-01

    We propose a hierarchical Bayesian nonparametric mixture model for clustering when some of the covariates are assumed to be of varying relevance to the clustering problem. This can be thought of as an issue in variable selection for unsupervised learning. We demonstrate that by defining a hierarchical population based nonparametric prior on the cluster locations scaled by the inverse covariance matrices of the likelihood we arrive at a 'sparsity prior' representation which admits a conditionally conjugate prior. This allows us to perform full Gibbs sampling to obtain posterior distributions over parameters of interest including an explicit measure of each covariate's relevance and a distribution over the number of potential clusters present in the data. This also allows for individual cluster specific variable selection. We demonstrate improved inference on a number of canonical problems.

  2. Testing concordance of instrumental variable effects in generalized linear models with application to Mendelian randomization

    Science.gov (United States)

    Dai, James Y.; Chan, Kwun Chuen Gary; Hsu, Li

    2014-01-01

    Instrumental variable regression is one way to overcome unmeasured confounding and estimate causal effects in observational studies. Built on structural mean models, considerable work has recently been developed for consistent estimation of causal relative risk and causal odds ratio. Such models can sometimes suffer from identification issues for weak instruments. This has hampered the applicability of Mendelian randomization analysis in genetic epidemiology. When there are multiple genetic variants available as instrumental variables, and the causal effect is defined in a generalized linear model in the presence of unmeasured confounders, we propose to test concordance between instrumental variable effects on the intermediate exposure and instrumental variable effects on the disease outcome, as a means to test the causal effect. We show that a class of generalized least squares estimators provide valid and consistent tests of causality. For the causal effect of a continuous exposure on a dichotomous outcome in logistic models, the proposed estimators are shown to be asymptotically conservative. When the disease outcome is rare, such estimators are consistent due to the log-linear approximation of the logistic function. Optimality of such estimators relative to the well-known two-stage least squares estimator and the double-logistic structural mean model is further discussed. PMID:24863158
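
    For intuition, the linear baseline that these estimators generalize is two-stage least squares with several variants as instruments; a self-contained toy example (synthetic data, illustrative only):

        import numpy as np

        rng = np.random.default_rng(7)
        u = rng.normal(size=5000)                    # unmeasured confounder
        Z = rng.binomial(2, 0.3, size=(5000, 3))     # three genetic instruments
        x = Z @ np.array([0.3, 0.2, 0.4]) + u + rng.normal(size=5000)
        y = 0.5 * x + u + rng.normal(size=5000)      # true causal effect 0.5

        Z1 = np.column_stack([np.ones(5000), Z])     # instruments + intercept
        X1 = np.column_stack([np.ones(5000), x])     # exposure + intercept
        X_hat = Z1 @ np.linalg.lstsq(Z1, X1, rcond=None)[0]   # stage 1: project
        beta = np.linalg.lstsq(X_hat, y, rcond=None)[0]       # stage 2: regress
        print(beta[1])   # close to 0.5 despite confounding by u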

  3. Forward and backward dynamics in implicitly defined overlapping generations models

    NARCIS (Netherlands)

    Gardini, L.; Hommes, C.; Tramontana, F.; de Vilder, R.

    2009-01-01

    In dynamic economic models derived from optimization principles, the forward equilibrium dynamics may not be uniquely defined, while the backward dynamics is well defined. We derive properties of the global forward equilibrium paths based on properties of the backward dynamics. We propose the

  4. Defining the end-point of mastication: A conceptual model.

    Science.gov (United States)

    Gray-Stuart, Eli M; Jones, Jim R; Bronlund, John E

    2017-10-01

    The great risks of swallowing are choking and aspiration of food into the lungs. Both are rare in normal functioning humans, which is remarkable given the diversity of foods and the estimated 10 million swallows performed in a lifetime. Nevertheless, it remains a major challenge to define the food properties that are necessary to ensure a safe swallow. Here, the mouth is viewed as a well-controlled processor where mechanical sensory assessment occurs throughout the occlusion-circulation cycle of mastication. Swallowing is a subsequent action. It is proposed here that, during mastication, temporal maps of interfacial property data are generated, which the central nervous system compares against a series of criteria in order to be sure that the bolus is safe to swallow. To determine these criteria, an engineering hazard analysis tool, alongside an understanding of fluid and particle mechanics, is used to deduce the mechanisms by which food may deposit or become stranded during swallowing. These mechanisms define the food properties that must be avoided. By inverting the thinking, from hazards to ensuring safety, six criteria arise which are necessary for a safe-to-swallow bolus. A new conceptual model is proposed to define when food is safe to swallow during mastication. This significantly advances earlier mouth models. The conceptual model proposed in this work provides a framework of decision-making to define when food is safe to swallow. This will be of interest to designers of dietary foods, foods for dysphagia sufferers and will aid the further development of mastication robots for preparation of artificial boluses for digestion research. It enables food designers to influence the swallow-point properties of their products. For example, a product may be designed to satisfy five of the criteria for a safe-to-swallow bolus, which means the sixth criterion and its attendant food properties define the swallow-point. Alongside other organoleptic factors, these

  5. Model for defining the level of implementation of the management functions in small enterprises

    Directory of Open Access Journals (Sweden)

    Dragan Mišetić

    2001-01-01

    Small enterprises, based on private ownership and entrepreneurial capability, represent, for the majority of the scientific and professional public, the prime movers of economic growth, both in developed market economies and in the economies of countries in transition. At the same time, various studies show that the main reason for the bankruptcy of many small enterprises (more than 90%) can be found in weak management, i.e., unfamiliarity with the management functions (planning, organization, human resources management, leading and control) and with the need to implement those functions in practice. Although it is not easy to define the ingredients of the recipe for success or to define precisely the importance of different elements, and regardless of the fact that many authors think that management theory for large enterprises is inapplicable to small ones, we all agree that the owner/manager and his implementation of management theory has a decisive influence on small enterprises in modern economic circumstances. Therefore, the author of this work presents a model that defines the level of implementation of the management functions in small enterprises, as well as three systems/levels (danger, risk, progress) in which small enterprises may find themselves. After the level of implementation of the management functions is identified, it is possible to undertake corrective actions to remove the identified failures. In choosing the variables of the model, the author took into consideration the specific features of a small enterprise, as well as the specific features of its owner/manager.

  6. Modeling of a 3DTV service in the software-defined networking architecture

    Science.gov (United States)

    Wilczewski, Grzegorz

    2014-11-01

    In this article a newly developed concept for modeling a multimedia service offering stereoscopic motion imagery is presented. The proposed model is based on utilization of the Software-Defined Networking (SDN) architecture. The definition of a 3D television service spanning the SDN concept is identified, exposing the basic characteristics of a 3DTV service in a modern networking organization layout. Furthermore, exemplary functionalities of the proposed 3DTV model are depicted. It is indicated that modeling a 3DTV service in the Software-Defined Networking architecture leads to a multiplicity of improvements, especially towards flexibility of a service supporting the heterogeneity of end user devices.

  7. A comparison of elastic-plastic and variable modulus-cracking constitutive models for prestressed concrete reactor vessels

    International Nuclear Information System (INIS)

    Anderson, C.A.; Smith, P.D.

    1979-01-01

    Numerical prediction of the behavior of prestressed concrete reactor vessels (PCRVs) under static, dynamic and long-term loadings is complicated by the currently ill-defined behavior of concrete under stress and by the three-dimensional nature of PCRVs. Which constitutive model most closely approximates the behavior of concrete in PCRVs under load has not yet been decided, and the many equations needed to model the three-dimensional behavior of PCRVs accurately tax the capability of even the most up-to-date computing systems. The main purpose of this paper is to compare the characteristics of two constitutive models that have been proposed for concrete, the variable modulus cracking model and the elastic-plastic model, and to compare the behavior of typical concrete structures whose materials obey these constitutive laws. The response of a PCRV structure to internal pressure, the constitutive models for concrete, test problems using a thick-walled concrete ring and a rectangular concrete plate, and the analysis of the axisymmetric concrete pressure vessel PV-26 using the variable modulus cracking model of the ADINA code are explained. The variable modulus cracking model can predict the behavior of reinforced concrete structures well into the range of nonlinear behavior. (Kako, I.)

  8. A Formal Model and Verification Problems for Software Defined Networks

    Directory of Open Access Journals (Sweden)

    V. A. Zakharov

    2013-01-01

    Software-defined networking (SDN) is an approach to building computer networks that separates and abstracts the data planes and control planes of these systems. In an SDN, a centralized controller manages a distributed set of switches. A set of open commands for packet forwarding and flow-table updating was defined in the form of a protocol known as OpenFlow. In this paper we describe an abstract formal model of SDN, introduce a tentative language for the specification of SDN forwarding policies, and formally set up model-checking problems for SDN.

  9. Handbook of latent variable and related models

    CERN Document Server

    Lee, Sik-Yum

    2011-01-01

    This Handbook covers latent variable models, which are a flexible class of models for modeling multivariate data to explore relationships among observed and latent variables. The Handbook:
    - covers a wide class of important models;
    - describes models and statistical methods that provide tools for analyzing a wide spectrum of complicated data;
    - includes illustrative examples with real data sets from business, education, medicine, public health and sociology;
    - demonstrates the use of a wide variety of statistical, computational, and mathematical techniques.

  10. Derivation and application of mathematical model for well test analysis with variable skin factor in hydrocarbon reservoirs

    Directory of Open Access Journals (Sweden)

    Pengcheng Liu

    2016-06-01

    Skin factor is often regarded as a constant in most mathematical models for well test analysis in oilfields, but this is a simplified treatment: the actual skin factor changes over time. This paper defines the average permeability of the damaged area as a function of time by using the definition of skin factor, thereby establishing a relationship between a variable skin factor and time. The variable skin factor so derived was introduced into existing traditional models in place of a constant skin factor, and the resulting mathematical model for well test analysis considering a variable skin factor was solved by Laplace transform. The dimensionless wellbore pressure and its derivative were plotted against dimensionless time on double-logarithmic coordinates, and these plots can be used for type curve fitting. The effects of all the parameters in the expression of the variable skin factor were analyzed based on the dimensionless wellbore pressure and its derivative. Finally, actual well testing data from the Sheng-2 Block, Shengli Oilfield, China, were used to fit the type curves developed, which validates the applicability of the mathematical model.
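
    The construction appears to build on the standard Hawkins relation (notation assumed, not quoted from the paper): with wellbore radius r_w, damaged-zone radius r_s and formation permeability k,

        s(t) = \left( \frac{k}{k_s(t)} - 1 \right) \ln\frac{r_s}{r_w},

    so defining the average damaged-zone permeability k_s as a function of time turns the usual constant skin factor into the variable skin factor s(t) that enters the model.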

  11. The application of an internal state variable model to the viscoplastic behavior of irradiated ASTM 304L stainless steel

    Energy Technology Data Exchange (ETDEWEB)

    McAnulty, Michael J., E-mail: mcanulmj@id.doe.gov [Department of Energy, 1955 Fremont Avenue, Idaho Falls, ID 83402 (United States)]; Potirniche, Gabriel P. [Mechanical Engineering Department, University of Idaho, Moscow, ID 83844 (United States)]; Tokuhiro, Akira [Mechanical Engineering Department, University of Idaho, Idaho Falls, ID 83402 (United States)]

    2012-09-15

    Highlights: ► An internal state variable approach is used to predict the plastic behavior of irradiated metals. ► The model predicts uniaxial tensile test data for irradiated 304L stainless steel. ► The model is implemented as a user-defined material subroutine in the finite element code ABAQUS. ► Results are compared for the unirradiated and irradiated specimens loaded in uniaxial tension. Abstract: Neutron irradiation of metals results in decreased fracture toughness, decreased ductility, increased yield strength and an increased ductile-to-brittle transition temperature. Designers use the most limiting material properties throughout the reactor vessel lifetime to determine acceptable safety margins. To reduce analysis conservatism, a new model is proposed based on an internal state variable approach for the plastic behavior of unirradiated ductile materials, to support its use for analyzing irradiated materials. The proposed modeling addresses low temperature irradiation of 304L stainless steel, and predicts uniaxial tensile test data of irradiated experimental specimens. The model was implemented as a user-defined material subroutine (UMAT) in the finite element software ABAQUS. Results are compared between the unirradiated and irradiated specimens subjected to tension tests.

  12. Selecting candidate predictor variables for the modelling of post ...

    African Journals Online (AJOL)

    Objectives: The objective of this project was to determine the variables most likely to be associated with post- .... (as defined subjectively by the research team) in global .... ed on their lack of knowledge of wealth scoring tools. ... HIV serology.

  13. Variable Selection for Regression Models of Percentile Flows

    Science.gov (United States)

    Fouad, G.

    2017-12-01

    Percentile flows describe the flow magnitude equaled or exceeded for a given percent of time, and are widely used in water resource management. However, these statistics are normally unavailable since most basins are ungauged. Percentile flows of ungauged basins are often predicted using regression models based on readily observable basin characteristics, such as mean elevation. The number of these independent variables is too large to evaluate all possible models. A subset of models is typically evaluated using automatic procedures, like stepwise regression. This ignores a large variety of methods from the field of feature (variable) selection and physical understanding of percentile flows. A study of 918 basins in the United States was conducted to compare an automatic regression procedure to the following variable selection methods: (1) principal component analysis, (2) correlation analysis, (3) random forests, (4) genetic programming, (5) Bayesian networks, and (6) physical understanding. The automatic regression procedure only performed better than principal component analysis. Poor performance of the regression procedure was due to a commonly used filter for multicollinearity, which rejected the strongest models because they had cross-correlated independent variables. Multicollinearity did not decrease model performance in validation because of a representative set of calibration basins. Variable selection methods based strictly on predictive power (numbers 2-5 from above) performed similarly, likely indicating a limit to the predictive power of the variables. Similar performance was also reached using variables selected based on physical understanding, a finding that substantiates recent calls to emphasize physical understanding in modeling for predictions in ungauged basins. The strongest variables highlighted the importance of geology and land cover, whereas widely used topographic variables were the weakest predictors. Variables suffered from a high
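
    As one concrete example of the variable-selection methods compared above, the sketch below ranks candidate basin attributes by random-forest importance and then fits a parsimonious regression on the top-ranked variables (scikit-learn and the synthetic data are assumptions, not the study's materials):

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import cross_val_score

# Toy stand-in: 918 basins, 10 candidate attributes, percentile-flow target.
rng = np.random.default_rng(0)
X = rng.normal(size=(918, 10))
y = X[:, 0] + 0.5 * X[:, 1] + rng.normal(scale=0.5, size=918)

# Rank variables by random-forest importance, then fit a parsimonious model.
rf = RandomForestRegressor(n_estimators=300, random_state=0).fit(X, y)
top = np.argsort(rf.feature_importances_)[::-1][:3]
r2 = cross_val_score(LinearRegression(), X[:, top], y, cv=10, scoring="r2")
print("selected variables:", top, "mean CV R2:", round(r2.mean(), 3))
```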

  14. Rose bush leaf and internode expansion dynamics: analysis and development of a model capturing interplant variability

    Directory of Open Access Journals (Sweden)

    Sabine eDemotes-Mainard

    2013-10-01

    Full Text Available Bush rose architecture, among other factors such as plant health, determines plant visual quality. The commercial product is the individual plant, and interplant variability may be high within a crop. Thus, both mean plant architecture and interplant variability should be studied. Expansion is an important feature of architecture, but it has been little studied at the level of individual organs in bush roses. We investigated the expansion kinetics of primary shoot organs to develop a model reproducing the organ expansion of real crops from non-destructive input variables, taking into account interplant variability in expansion kinetics and the model's ability to simulate this variability. Changes in leaflet and internode dimensions over thermal time were recorded for primary shoot expansion on 83 plants from three crops grown under different climatic conditions and densities. An empirical model was developed to reproduce organ expansion kinetics for individual plants of a real crop of bush rose primary shoots. Leaflet or internode length was simulated as a logistic function of thermal time. The model was evaluated by cross-validation. We found that differences in leaflet or internode expansion kinetics between phytomer positions, and between plants at a given phytomer position, were due mostly to large differences in the time of organ expansion and the expansion rate, rather than differences in expansion duration. Thus, in the model, the parameters linked to expansion duration were predicted by values common to all plants, whereas variability in final size and organ expansion time was captured by input data. The model accurately simulated leaflet and internode expansion for individual plants (RMSEP = 7.3% and 10.2% of final length, respectively). Thus, this study defines the measurements required to simulate expansion and provides the first model simulating organ expansion in bush rose that captures interplant variability.
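
    A minimal sketch of the expansion model described above, with the duration-related slope shared across plants while final size and expansion timing vary per plant (all parameter values are invented for illustration):

```python
import numpy as np

def organ_length(tt, final_len, t_mid, k):
    """Logistic expansion: L(tt) = final_len / (1 + exp(-k * (tt - t_mid)))."""
    return final_len / (1.0 + np.exp(-k * (tt - t_mid)))

tt = np.linspace(0, 600, 200)             # thermal time, degree-days
rng = np.random.default_rng(1)
crop = []
for _ in range(5):                        # five simulated plants
    final_len = rng.normal(60.0, 8.0)     # per-plant final length (input data)
    t_mid = rng.normal(250.0, 30.0)       # per-plant time of mid-expansion
    crop.append(organ_length(tt, final_len, t_mid, k=0.02))  # shared duration
crop = np.array(crop)                     # (plants, time) leaflet lengths, mm
```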

  15. Defining constant versus variable phenotypic features of women with polycystic ovary syndrome using different ethnic groups and populations.

    Science.gov (United States)

    Welt, C K; Arason, G; Gudmundsson, J A; Adams, J; Palsdóttir, H; Gudlaugsdóttir, G; Ingadóttir, G; Crowley, W F

    2006-11-01

    The phenotype of women with polycystic ovary syndrome (PCOS) is variable, depending on the ethnic background. The phenotypes of women with PCOS in Iceland and Boston were compared. The study was observational with a parallel design. Subjects were studied in an outpatient setting. Women, aged 18-45 yr, with PCOS defined by hyperandrogenism and fewer than nine menses per year, were examined in Iceland (n = 105) and Boston (n = 262). PCOS subjects underwent a physical exam, fasting blood samples for androgens, gonadotropins, metabolic parameters, and a transvaginal ultrasound. The phenotype of women with PCOS was compared between Caucasian women in Iceland and Boston and among Caucasian, African-American, Hispanic, and Asian women in Boston. Androstenedione (4.0 +/- 1.3 vs. 3.5 +/- 1.2 ng/ml; P PCOS. There were no differences in fasting blood glucose, insulin, or homeostasis model assessment in body mass index-matched Caucasian subjects from Iceland or Boston or in different ethnic groups in Boston. Polycystic ovary morphology was demonstrated in 93-100% of women with PCOS in all ethnic groups. The data demonstrate differences in the reproductive features of PCOS without differences in glucose and insulin in body mass index-matched populations. These studies also suggest that measuring androstenedione is important for the documentation of hyperandrogenism in Icelandic women. Finally, polycystic ovary morphology by ultrasound is an almost universal finding in women with PCOS as defined by hyperandrogenism and irregular menses.

  16. Preliminary Multi-Variable Parametric Cost Model for Space Telescopes

    Science.gov (United States)

    Stahl, H. Philip; Hendrichs, Todd

    2010-01-01

    This slide presentation reviews the creation of a preliminary multi-variable cost model for the contract costs of making a space telescope. It discusses the methodology for collecting the data, the definition of the statistical analysis methodology, single-variable model results, the testing of historical models and an introduction to the multi-variable models.

  17. How to get rid of W: a latent variables approach to modelling spatially lagged variables

    NARCIS (Netherlands)

    Folmer, H.; Oud, J.

    2008-01-01

    In this paper we propose a structural equation model (SEM) with latent variables to model spatial dependence. Rather than using the spatial weights matrix W, we propose to use latent variables to represent spatial dependence and spillover effects, of which the observed spatially lagged variables are

  18. Internal variability of a 3-D ocean model

    Directory of Open Access Journals (Sweden)

    Bjarne Büchmann

    2016-11-01

    Full Text Available The Defence Centre for Operational Oceanography runs operational forecasts for the Danish waters. The core setup is a 60-layer baroclinic circulation model based on the General Estuarine Transport Model code. At intervals, the model setup is tuned to improve ‘model skill’ and overall performance. It has been an area of concern that the uncertainty inherent to the stochastic/chaotic nature of the model is unknown. Thus, it is difficult to state with certainty that a particular setup is improved, even if the computed model skill increases. This issue also extends to cases where the model is tuned during an iterative process, in which model results are fed back to improve model parameters such as bathymetry. An ensemble of identical model setups with slightly perturbed initial conditions is examined. It is found that the initial perturbation causes the models to deviate from each other exponentially fast, causing differences of several PSU and several kelvin within a few days of simulation. The ensemble is run for a full year, and the long-term variability of salinity and temperature is found for different regions within the modelled area. Further, the development time scale is estimated for each region, and great regional differences are found in both variability and time scale. It is observed that periods with very high ensemble variability are typically short-term and spatially limited events. A particular event is examined in detail to shed light on how the ensemble ‘behaves’ in periods with large internal model variability. It is found that the ensemble does not seem to follow any particular stochastic distribution: both the ensemble variability (standard deviation or range) and the ensemble distribution within that range seem to vary with time and place. Further, it is observed that a large spatial variability due to mesoscale features does not necessarily correlate with large ensemble variability. These findings bear
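
    A small sketch of the kind of ensemble-spread diagnostic described above, estimating the early e-folding time of member divergence from perturbed-initial-condition runs (the toy series stands in for model output; it is not GETM data):

```python
import numpy as np

rng = np.random.default_rng(0)
n_members, n_days = 20, 365
# toy stand-in for model output at one grid point: members share a base
# trajectory and diverge exponentially from tiny initial perturbations
base = np.cumsum(rng.normal(size=n_days))
growth = np.exp(np.minimum(0.5 * np.arange(n_days), 14.0))  # saturates later
ensemble = base + rng.normal(scale=1e-6, size=(n_members, 1)) * growth

spread = ensemble.std(axis=0)                    # ensemble spread per day
rate = np.polyfit(np.arange(30), np.log(spread[:30]), 1)[0]
print(f"early e-folding time of the spread: {1.0 / rate:.1f} days")
```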

  19. Generalized latent variable modeling multilevel, longitudinal, and structural equation models

    CERN Document Server

    Skrondal, Anders; Rabe-Hesketh, Sophia

    2004-01-01

    This book unifies and extends latent variable models, including multilevel or generalized linear mixed models, longitudinal or panel models, item response or factor models, latent class or finite mixture models, and structural equation models.

  1. Modelling the Kampungkota: A quantitative approach in defining Indonesian informal settlements

    Science.gov (United States)

    Anindito, D. B.; Maula, F. K.; Akbar, R.

    2018-02-01

    Bandung City is home to 2.5 million inhabitants, some of whom live in slums and squatter settlements. However, the terms conveying this type of housing are not adequate to describe what Indonesians call kampungkota. Several studies suggest various variables constituting kampungkota qualitatively. This study seeks to define kampungkota in a quantitative manner, using the characteristics of slums and squatter settlements. The samples for this study are 151 villages (kelurahan) in Bandung City. Ordinary Least Squares, Geographically Weighted Regression, and Spatial Cluster and Outlier Analysis are employed. It is suggested that kampungkota may have distinct variables depending on its location. As a kampungkota may be smaller than the administrative area of a kelurahan, it can also develop beyond the jurisdiction of the kelurahan, as indicated by the clustering pattern of kampungkota.

  2. Towards an ontological model defining the social engineering domain

    CSIR Research Space (South Africa)

    Mouton, F

    2014-08-01

    Full Text Available ICT and Society, IFIP Advances in Information and Communication Technology, Volume 431, 2014, pp. 266-279: Towards an Ontological Model Defining the Social Engineering Domain. Francois Mouton, Louise Leenen, Mercia M. Malan, and H...

  3. A landscape model for predicting potential natural vegetation of the Olympic Peninsula USA using boundary equations and newly developed environmental variables.

    Science.gov (United States)

    Jan A. Henderson; Robin D. Lesher; David H. Peter; Chris D. Ringo

    2011-01-01

    A gradient-analysis-based model and grid-based map are presented that use the potential vegetation zone as the object of the model. Several new variables are presented that describe the environmental gradients of the landscape at different scales. Boundary algorithms are conceptualized, and then defined, that describe the environmental boundaries between vegetation...

  4. On the explaining-away phenomenon in multivariate latent variable models.

    Science.gov (United States)

    van Rijn, Peter; Rijmen, Frank

    2015-02-01

    Many probabilistic models for psychological and educational measurements contain latent variables. Well-known examples are factor analysis, item response theory, and latent class model families. We discuss what is referred to as the 'explaining-away' phenomenon in the context of such latent variable models. This phenomenon can occur when multiple latent variables are related to the same observed variable, and can elicit seemingly counterintuitive conditional dependencies between latent variables given observed variables. We illustrate the implications of explaining away for a number of well-known latent variable models by using both theoretical and real data examples. © 2014 The British Psychological Society.

  5. Defining adaptation in a generic multi layer model: CAM: The GRAPPLE Conceptual Adaptation Model

    NARCIS (Netherlands)

    Hendrix, M.; De Bra, P.M.E.; Pechenizkiy, M.; Smits, D.; Cristea, A.I.; Dillenbourg, P.; Specht, M.

    2008-01-01

    Authoring of Adaptive Hypermedia is a difficult and time-consuming task. Reference models like LAOS and AHAM separate adaptation and content into different layers. Systems like AHA! offer graphical tools based on these models to allow authors to define adaptation without knowing any adaptation

  6. Building prognostic models for breast cancer patients using clinical variables and hundreds of gene expression signatures

    Directory of Open Access Journals (Sweden)

    Liu Yufeng

    2011-01-01

    Full Text Available Abstract Background: Multiple breast cancer gene expression profiles have been developed that appear to provide similar abilities to predict outcome and may outperform clinical-pathologic criteria; however, the extent to which seemingly disparate profiles provide additive prognostic information is not known, nor do we know whether prognostic profiles perform equally across clinically defined breast cancer subtypes. We evaluated whether combining the prognostic powers of standard breast cancer clinical variables with a large set of gene expression signatures could improve on our ability to predict patient outcomes. Methods: Using clinical-pathological variables and a collection of 323 gene expression "modules", including 115 previously published signatures, we built multivariate Cox proportional hazards models using a dataset of 550 node-negative, systemically untreated breast cancer patients. Models predictive of pathological complete response (pCR) to neoadjuvant chemotherapy were also built using this approach. Results: We identified statistically significant prognostic models for relapse-free survival (RFS) at 7 years for the entire population, and for the subgroups of patients with ER-positive or Luminal tumors. Furthermore, we found that combined models that included both clinical and genomic parameters improved prognostication compared with models with either clinical or genomic variables alone. Finally, we were able to build statistically significant combined models for pathological complete response (pCR) predictions for the entire population. Conclusions: Integration of gene expression signatures and clinical-pathological factors is an improved method over either variable type alone. Highly prognostic models could be created when using all patients, and for the subset of patients with lymph node-negative and ER-positive breast cancers. Other variables beyond gene expression and clinical-pathological variables, like gene mutation status or DNA
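
    A minimal sketch of the modeling approach described above, fitting a Cox proportional hazards model that mixes clinical variables with gene-expression module scores; the lifelines package, the toy covariates and the effect sizes are assumptions here, not the study's materials:

```python
import numpy as np
import pandas as pd
from lifelines import CoxPHFitter

rng = np.random.default_rng(0)
n = 550  # matches the cohort size quoted above; data below are synthetic
df = pd.DataFrame({
    "age": rng.normal(55, 10, n),                 # clinical variable
    "tumor_size": rng.normal(2.0, 0.8, n),        # clinical variable
    "module_proliferation": rng.normal(0, 1, n),  # gene-expression module score
    "module_immune": rng.normal(0, 1, n),         # gene-expression module score
})
risk = 0.03 * df["age"] + 0.8 * df["module_proliferation"]
df["rfs_years"] = rng.exponential(np.exp(-risk) * 10)   # simulated RFS times
df["relapsed"] = (rng.uniform(size=n) < 0.6).astype(int)  # event indicator

cph = CoxPHFitter().fit(df, duration_col="rfs_years", event_col="relapsed")
cph.print_summary()   # hazard ratios for the combined clinical+genomic model
```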

  7. Defined Contribution Model: Definition, Theory and an Application for Turkey

    OpenAIRE

    Metin Ercen; Deniz Gokce

    1998-01-01

    Based on a numerical application that employs social and economic parameters of the Turkish economy, this study attempts to demonstrate that the current collapse in the Turkish social security system is not unavoidable. The present social security system in Turkey is based on the defined benefit model of pension provision. On the other hand, recent proposals for reform in the social security system are based on a multipillar system, where one of the alternatives is a defined contribution pens...

  8. Linear latent variable models: the lava-package

    DEFF Research Database (Denmark)

    Holst, Klaus Kähler; Budtz-Jørgensen, Esben

    2013-01-01

    An R package for specifying and estimating linear latent variable models is presented. The philosophy of the implementation is to separate the model specification from the actual data, which leads to a dynamic and easy way of modeling complex hierarchical structures. Several advanced features are implemented, including robust standard errors for clustered correlated data, multigroup analyses, non-linear parameter constraints, inference with incomplete data, maximum likelihood estimation with censored and binary observations, and instrumental variable estimators. In addition an extensive simulation...

  9. Gait variability: methods, modeling and meaning

    Directory of Open Access Journals (Sweden)

    Hausdorff Jeffrey M

    2005-07-01

    Full Text Available Abstract The study of gait variability, the stride-to-stride fluctuations in walking, offers a complementary way of quantifying locomotion and its changes with aging and disease as well as a means of monitoring the effects of therapeutic interventions and rehabilitation. Previous work has suggested that measures of gait variability may be more closely related to falls, a serious consequence of many gait disorders, than are measures based on the mean values of other walking parameters. The current JNER series presents nine reports on the results of recent investigations into gait variability. One novel method for collecting unconstrained, ambulatory data is reviewed, and a primer on analysis methods is presented along with a heuristic approach to summarizing variability measures. In addition, the first studies of gait variability in animal models of neurodegenerative disease are described, as is a mathematical model of human walking that characterizes certain complex (multifractal) features of the motor control's pattern generator. Another investigation demonstrates that, whereas both healthy older controls and patients with a higher-level gait disorder walk more slowly in reduced lighting, only the latter's stride variability increases. Studies of the effects of dual tasks suggest that the regulation of the stride-to-stride fluctuations in stride width and stride time may be influenced by attention loading and may require cognitive input. Finally, a report of gait variability in over 500 subjects, probably the largest study of this kind, suggests how step width variability may relate to fall risk. Together, these studies provide new insights into the factors that regulate the stride-to-stride fluctuations in walking and pave the way for expanded research into the control of gait and the practical application of measures of gait variability in the clinical setting.

  10. Multi-wheat-model ensemble responses to interannual climatic variability

    DEFF Research Database (Denmark)

    Ruane, A C; Hudson, N I; Asseng, S

    2016-01-01

    We compare 27 wheat models' yield responses to interannual climate variability, analyzed at locations in Argentina, Australia, India, and The Netherlands as part of the Agricultural Model Intercomparison and Improvement Project (AgMIP) Wheat Pilot. Each model simulated 1981–2010 grain yield, and we evaluate results against the interannual variability of growing season temperature, precipitation, and solar radiation. The amount of information used for calibration has only a minor effect on most models' climate response, and even small multi-model ensembles prove beneficial. Wheat model clusters reveal common characteristics of yield response to climate; however, models rarely share the same cluster at all four sites, indicating substantial independence. Only a weak relationship (R2 < 0.24) was found between the models' sensitivities to interannual temperature variability and their response to long-term warming, suggesting that additional processes differentiate climate change impacts from observed climate variability analogs and motivating continuing analysis and model development efforts.

  11. Multi-Wheat-Model Ensemble Responses to Interannual Climate Variability

    Science.gov (United States)

    Ruane, Alex C.; Hudson, Nicholas I.; Asseng, Senthold; Camarrano, Davide; Ewert, Frank; Martre, Pierre; Boote, Kenneth J.; Thorburn, Peter J.; Aggarwal, Pramod K.; Angulo, Carlos

    2016-01-01

    We compare 27 wheat models' yield responses to interannual climate variability, analyzed at locations in Argentina, Australia, India, and The Netherlands as part of the Agricultural Model Intercomparison and Improvement Project (AgMIP) Wheat Pilot. Each model simulated 1981–2010 grain yield, and we evaluate results against the interannual variability of growing season temperature, precipitation, and solar radiation. The amount of information used for calibration has only a minor effect on most models' climate response, and even small multi-model ensembles prove beneficial. Wheat model clusters reveal common characteristics of yield response to climate; however, models rarely share the same cluster at all four sites, indicating substantial independence. Only a weak relationship (R2 < 0.24) was found between the models' sensitivities to interannual temperature variability and their response to long-term warming, suggesting that additional processes differentiate climate change impacts from observed climate variability analogs and motivating continuing analysis and model development efforts.

  12. 47 CFR 76.1905 - Petitions to modify encoding rules for new services within defined business models.

    Science.gov (United States)

    2010-10-01

    § 76.1905 Petitions to modify encoding rules for new services within defined business models. (a) The encoding rules for defined business models in § 76.1904 reflect the conventional methods for...

  13. Are revised models better models? A skill score assessment of regional interannual variability

    Science.gov (United States)

    Sperber, Kenneth R.; Participating AMIP Modelling Groups

    1999-05-01

    Various skill scores are used to assess the performance of revised models relative to their original configurations. The interannual variability of all-India, Sahel and Nordeste rainfall and summer monsoon wind shear is examined in integrations performed under the experimental design of the Atmospheric Model Intercomparison Project. For the indices considered, the revised models exhibit greater fidelity at simulating the observed interannual variability. Interannual variability of all-India rainfall is better simulated by models that have a more realistic rainfall climatology in the vicinity of India, indicating the beneficial effect of reducing systematic model error.

  14. Stability analysis of an implicitly defined labor market model

    Science.gov (United States)

    Mendes, Diana A.; Mendes, Vivaldo M.

    2008-06-01

    Until very recently, the pervasive existence of models exhibiting well-defined backward dynamics but ill-defined forward dynamics in economics and finance has apparently posed no serious obstacles to the analysis of their dynamics and stability, despite the problems that may arise from possible erroneous conclusions regarding theoretical considerations and policy prescriptions from such models. A large number of papers have dealt with this problem in the past by assuming the existence of symmetry between forward and backward dynamics, even in the case when the map cannot be invertible either forward or backwards. However, this procedure has been seriously questioned over the last few years in a series of papers dealing with implicit difference equations and inverse limit spaces. This paper explores the search and matching labor market model developed by Bhattacharya and Bunzel [J. Bhattacharya, H. Bunzel, Chaotic Planning Solution in the Textbook Model of Equilibrium Labor Market Search and Matching, Mimeo, Iowa State University, 2002; J. Bhattacharya, H. Bunzel, Economics Bulletin 5 (19) (2003) 1-10], with the following objectives in mind: (i) to show that chaotic dynamics may still be present in the model for acceptable parameter values, and (ii) to clarify some open questions related to the admissible dynamics in the forward-looking setting, by providing a rigorous proof of the existence of cyclic and chaotic dynamics through the application of tools from symbolic dynamics and inverse limit theory.

  15. Coevolution of variability models and related software artifacts

    DEFF Research Database (Denmark)

    Passos, Leonardo; Teixeira, Leopoldo; Dinztner, Nicolas

    2015-01-01

    To understand how variability models coevolve with other artifact types, we study a large and complex real-world variant-rich software system: the Linux kernel. Specifically, we extract variability-coevolution patterns capturing changes in the variability model of the Linux kernel with subsequent changes in Makefiles and C source

  16. Variable thickness transient ground-water flow model. Volume 3. Program listings

    International Nuclear Information System (INIS)

    Reisenauer, A.E.

    1979-12-01

    The Assessment of Effectiveness of Geologic Isolation Systems (AEGIS) Program is developing and applying the methodology for assessing the far-field, long-term post-closure safety of deep geologic nuclear waste repositories. AEGIS is being performed by Pacific Northwest Laboratory (PNL) under contract with the Office of Nuclear Waste Isolation (ONWI) for the Department of Energy (DOE). One task within AEGIS is the development of methodology for analysis of the consequences (water pathway) from loss of repository containment as defined by various release scenarios. Analysis of the long-term, far-field consequences of release scenarios requires the application of numerical codes which simulate the hydrologic systems, model the transport of released radionuclides through the hydrologic systems to the biosphere, and, where applicable, assess the radiological dose to humans. Hydrologic and transport models are available at several levels of complexity or sophistication. Model selection and use are determined by the quantity and quality of input data. Model development under AEGIS and related programs provides three levels of hydrologic models, two levels of transport models, and one level of dose models (with several separate models). This is the third of three volumes of the description of the VTT (Variable Thickness Transient) Groundwater Hydrologic Model, a second-level (intermediate complexity) two-dimensional saturated groundwater flow

  17. Variable selection in Logistic regression model with genetic algorithm.

    Science.gov (United States)

    Zhang, Zhongheng; Trevino, Victor; Hoseini, Sayed Shahabuddin; Belciug, Smaranda; Boopathi, Arumugam Manivanna; Zhang, Ping; Gorunescu, Florin; Subha, Velappan; Dai, Songshi

    2018-02-01

    Variable or feature selection is one of the most important steps in model specification. Especially in the case of medical decision-making, the direct use of a medical database, without a previous analysis and preprocessing step, is often counterproductive. In this sense, variable selection is the method of choosing the most relevant attributes from the database in order to build robust learning models and, thus, to improve the performance of the models used in the decision process. In biomedical research, the purpose of variable selection is to select clinically important and statistically significant variables, while excluding unrelated or noise variables. A variety of methods exist for variable selection, but none of them is without limitations. For example, the stepwise approach, which is widely used, adds the best variable in each cycle, generally producing an acceptable set of variables. Nevertheless, it is limited by the fact that it is commonly trapped in local optima. The best-subset approach can systematically search the entire covariate pattern space, but the solution pool can be extremely large with tens to hundreds of variables, which is often the case in present-day clinical data. Genetic algorithms (GA) are heuristic optimization approaches and can be used for variable selection in multivariable regression models. This tutorial paper aims to provide a step-by-step approach to the use of GA in variable selection. The R code provided in the text can be extended and adapted to other data analysis needs.
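
    The tutorial itself provides R code; as a language-consistent illustration here, the sketch below implements the same idea in Python: a small genetic algorithm whose chromosomes are binary masks over candidate variables, with cross-validated logistic-regression accuracy as the fitness (population size, rates, and the synthetic data are arbitrary choices):

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

X, y = make_classification(n_samples=400, n_features=30, n_informative=5,
                           random_state=0)
rng = np.random.default_rng(0)

def fitness(mask):
    """Cross-validated accuracy of a logistic model on the masked columns."""
    if mask.sum() == 0:
        return 0.0
    model = LogisticRegression(max_iter=1000)
    return cross_val_score(model, X[:, mask.astype(bool)], y, cv=5).mean()

pop = rng.integers(0, 2, size=(30, X.shape[1]))           # random population
for gen in range(25):
    scores = np.array([fitness(ind) for ind in pop])
    parents = pop[np.argsort(scores)[-10:]]               # truncation selection
    children = []
    for _ in range(len(pop)):
        a, b = parents[rng.integers(10, size=2)]
        cut = rng.integers(1, X.shape[1])                 # one-point crossover
        child = np.concatenate([a[:cut], b[cut:]])
        flip = rng.random(X.shape[1]) < 0.02              # bit-flip mutation
        children.append(np.where(flip, 1 - child, child))
    pop = np.array(children)

best = pop[np.argmax([fitness(ind) for ind in pop])]
print("selected variables:", np.flatnonzero(best))
```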

  18. On the ""early-time"" evolution of variables relevant to turbulence models for Rayleigh-Taylor instability

    Energy Technology Data Exchange (ETDEWEB)

    Rollin, Bertrand [Los Alamos National Laboratory; Andrews, Malcolm J [Los Alamos National Laboratory

    2010-01-01

    We present our progress toward setting initial conditions in variable density turbulence models. In particular, we concentrate our efforts on the BHR turbulence model for turbulent Rayleigh-Taylor instability. Our approach is to predict profiles of relevant parameters before the fully turbulent regime and use them as initial conditions for the turbulence model. We use an idealized model of the mixing between two interpenetrating fluids to define the initial profiles for the turbulence model parameters. Velocities and volume fractions used in the idealized mixing model are obtained respectively from a set of ordinary differential equations modeling the growth of the Rayleigh-Taylor instability and from an idealization of the density profile in the mixing layer. A comparison between predicted initial profiles for the turbulence model parameters and initial profiles of the parameters obtained from low-Atwood-number three-dimensional simulations shows reasonable agreement.

  19. Does internal variability change in response to global warming? A large ensemble modelling study of tropical rainfall

    Science.gov (United States)

    Milinski, S.; Bader, J.; Jungclaus, J. H.; Marotzke, J.

    2017-12-01

    There is some consensus on mean state changes of rainfall under global warming; changes in the internal variability, on the other hand, are more difficult to analyse and have not been discussed as much, despite their importance for understanding changes in extreme events such as droughts or floods. We analyse changes in the rainfall variability in the tropical Atlantic region. We use a 100-member ensemble of historical (1850-2005) model simulations with the Max Planck Institute for Meteorology Earth System Model (MPI-ESM1) to identify changes of internal rainfall variability. To investigate the effects of global warming on the internal variability, we employ an additional ensemble of model simulations with stronger external forcing (1% CO2 increase per year, same integration length as the historical simulations) with 68 ensemble members. The focus of our study is on the oceanic Atlantic ITCZ. We find that the internal variability of rainfall over the tropical Atlantic does change due to global warming and that these changes in variability are larger than changes in the mean state in some regions. From splitting the total variance into patterns of variability, we see that the variability on the southern flank of the ITCZ becomes more dominant, i.e. explains a larger fraction of the total variance in a warmer climate. In agreement with previous studies, we find that changes in the mean state show an increase and narrowing of the ITCZ. The large ensembles allow us to make a statistically robust distinction between the changes in variability that can be explained by internal variability and those that can be attributed to the external forcing. Furthermore, we argue that internal variability in a transient climate is only well defined in the ensemble domain and not in the temporal domain, which requires the use of a large ensemble.

  20. End-to-end Information Flow Security Model for Software-Defined Networks

    Directory of Open Access Journals (Sweden)

    D. Ju. Chaly

    2015-01-01

    Full Text Available Software-defined networks (SDN) are a novel networking paradigm that has become an enabler technology for many modern applications, such as network virtualization, policy-based access control and many others. Software can provide flexibility and fast-paced innovation in networking; however, it has a complex nature. Hence there is an increasing need for means of assuring its correctness and security. Abstract models for SDN can tackle these challenges. This paper addresses confidentiality and some integrity properties of SDNs. These are critical properties for multi-tenant SDN environments, since the network management software must ensure that no confidential data of one tenant are leaked to other tenants in spite of using the same physical infrastructure. We define a notion of end-to-end security in the context of software-defined networks and propose a semantic model in which reasoning is possible about confidentiality, and we can check that confidential information flows do not interfere with non-confidential ones. We show that the model can be extended in order to reason about networks with secure and insecure links, which can arise, for example, in wireless environments. The article is published in the authors' wording.

  1. Improved variable reduction in partial least squares modelling by Global-Minimum Error Uninformative-Variable Elimination.

    Science.gov (United States)

    Andries, Jan P M; Vander Heyden, Yvan; Buydens, Lutgarde M C

    2017-08-22

    The calibration performance of Partial Least Squares regression (PLS) can be improved by eliminating uninformative variables. For PLS, many variable elimination methods have been developed. One is the Uninformative-Variable Elimination for PLS (UVE-PLS). However, the number of variables retained by UVE-PLS is usually still large. In UVE-PLS, variable elimination is repeated as long as the root mean squared error of cross validation (RMSECV) is decreasing. The set of variables at this first local minimum is retained. In this paper, a modification of UVE-PLS is proposed and investigated, in which UVE is repeated until no further reduction in variables is possible, followed by a search for the global RMSECV minimum. The method is called Global-Minimum Error Uninformative-Variable Elimination for PLS, denoted as GME-UVE-PLS or simply GME-UVE. After each iteration, the predictive ability of the PLS model, built with the remaining variable set, is assessed by RMSECV. The variable set with the global RMSECV minimum is then finally selected. The goal is to obtain smaller sets of variables with similar or improved predictive ability compared with those from the classical UVE-PLS method. The performance of the GME-UVE-PLS method is investigated using four data sets, i.e. a simulated set, NIR and NMR spectra, and a theoretical molecular descriptors set, resulting in twelve profile-response (X-y) calibrations. The selective and predictive performances of the models resulting from GME-UVE-PLS are statistically compared to those from UVE-PLS and 1-step UVE, using one-sided paired t-tests. The results demonstrate that variable reduction with the proposed GME-UVE-PLS method usually eliminates significantly more variables than the classical UVE-PLS, while the predictive abilities of the resulting models are better. With GME-UVE-PLS, a lower number of uninformative variables, without a chemical meaning for the response, may be retained than with UVE-PLS. The selectivity of the classical UVE method
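
    A rough sketch of repeated variable elimination driven to the global RMSECV minimum is given below; it uses a simplified reliability criterion (absolute PLS coefficients rather than UVE's jackknife-based b/s(b) ratio with added noise variables), so it illustrates the search loop, not the authors' exact algorithm:

```python
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.model_selection import cross_val_predict

rng = np.random.default_rng(0)
X = rng.normal(size=(80, 200))            # 80 samples, 200 variables
beta = np.zeros(200); beta[:10] = 1.0     # only 10 informative variables
y = X @ beta + rng.normal(scale=0.5, size=80)

def rmsecv(Xs):
    """10-fold cross-validated RMSE of a 5-component PLS model."""
    pred = cross_val_predict(PLSRegression(n_components=5), Xs, y, cv=10)
    return np.sqrt(np.mean((y - pred.ravel()) ** 2))

keep = np.arange(X.shape[1])
history = [(rmsecv(X), keep)]
while len(keep) > 10:
    pls = PLSRegression(n_components=5).fit(X[:, keep], y)
    reliability = np.abs(pls.coef_.ravel())          # proxy for UVE's b/s(b)
    keep = keep[reliability > np.median(reliability)]  # drop the weaker half
    history.append((rmsecv(X[:, keep]), keep))

best_rmse, best_vars = min(history, key=lambda t: t[0])   # global minimum
print(f"kept {len(best_vars)} variables, RMSECV = {best_rmse:.3f}")
```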

  2. Optimization of the Actuarial Model of Defined Contribution Pension Plan

    Directory of Open Access Journals (Sweden)

    Yan Li

    2014-01-01

    Full Text Available The paper focuses on actuarial models of defined contribution pension plans. Through assumptions and calculations, the expected replacement ratios of three different defined contribution pension plans are compared. In particular, the more significant factors are highlighted in the further cost and risk analyses. In order to assess the current status, the paper finds a relationship between the replacement ratio and the pension investment rate using econometric methods. Based on an appropriate investment rate of 6%, an expected replacement ratio of 20% is reached.

  3. From Transition Systems to Variability Models and from Lifted Model Checking Back to UPPAAL

    DEFF Research Database (Denmark)

    Dimovski, Aleksandar; Wasowski, Andrzej

    2017-01-01

    efficient lifted (family-based) model checking for real-time variability models. This reduces the cost of maintaining specialized family-based real-time model checkers. Real-time variability models can be model checked using the standard UPPAAL. We have implemented abstractions as syntactic source...

  4. Identifying Variability in Mental Models Within and Between Disciplines Caring for the Cardiac Surgical Patient.

    Science.gov (United States)

    Brown, Evans K H; Harder, Kathleen A; Apostolidou, Ioanna; Wahr, Joyce A; Shook, Douglas C; Farivar, R Saeid; Perry, Tjorvi E; Konia, Mojca R

    2017-07-01

    The cardiac operating room is a complex environment requiring efficient and effective communication between multiple disciplines. The objectives of this study were to identify and rank critical time points during the perioperative care of cardiac surgical patients, and to assess variability in responses, as a correlate of a shared mental model, regarding the importance of these time points between and within disciplines. Using Delphi technique methodology, panelists from 3 institutions were tasked with developing a list of critical time points, which were subsequently assigned to pause point (PP) categories. Panelists then rated these PPs on a 100-point visual analog scale. Descriptive statistics were expressed as percentages, medians, and interquartile ranges (IQRs). We defined low response variability between panelists as an IQR ≤ 20, moderate response variability as an IQR > 20 and ≤ 40, and high response variability as an IQR > 40. Panelists identified a total of 12 PPs. The PPs identified by the highest number of panelists were (1) before surgical incision, (2) before aortic cannulation, (3) before cardiopulmonary bypass (CPB) initiation, (4) before CPB separation, and (5) at time of transfer of care from operating room (OR) to intensive care unit (ICU) staff. There was low variability among panelists' ratings of the PP "before surgical incision," moderate response variability for the PPs "before separation from CPB," "before transfer from OR table to bed," and "at time of transfer of care from OR to ICU staff," and high response variability for the remaining 8 PPs. In addition, the perceived importance of each of these PPs varies between disciplines and between institutions. Cardiac surgical providers recognize distinct critical time points during cardiac surgery. However, there is a high degree of variability within and between disciplines as to the importance of these times, suggesting an absence of a shared mental model among disciplines caring for

  5. Preliminary Multi-Variable Cost Model for Space Telescopes

    Science.gov (United States)

    Stahl, H. Philip; Hendrichs, Todd

    2010-01-01

    Parametric cost models are routinely used to plan missions, compare concepts and justify technology investments. This paper reviews the methodology used to develop space telescope cost models; summarizes recently published single variable models; and presents preliminary results for two and three variable cost models. Some of the findings are that increasing mass reduces cost; it costs less per square meter of collecting aperture to build a large telescope than a small telescope; and technology development as a function of time reduces cost at the rate of 50% per 17 years.

  6. The demand-control model for job strain: a commentary on different ways to operationalize the exposure variable

    Directory of Open Access Journals (Sweden)

    Márcia Guimarães de Mello Alves

    2015-01-01

    Full Text Available Demand-control has been the most widely used model to study job strain in various countries. However, researchers have used the model differently, thus hindering the comparison of results. Such heterogeneity appears in both the study instrument used and in the definition of the main exposure variable - high strain. This cross-sectional study aimed to assess differences between various ways of operationalizing job strain through association with prevalent hypertension in a cohort of workers (Pro-Health Study. No difference in the association between high job strain and hypertension was found according to the different ways of operationalizing exposure, even though prevalence varied widely, according to the adopted form, from 19.6% for quadrants to 42% for subtraction tertile. The authors recommend further studies to define the cutoff for exposure variables using combined subjective and objective data.

  7. Analysis models for variables associated with breastfeeding duration

    Directory of Open Access Journals (Sweden)

    Edson Theodoro dos S. Neto

    2013-09-01

    Full Text Available OBJECTIVE To analyze the factors associated with breastfeeding duration using two statistical models. METHODS A population-based cohort study was conducted with 86 mothers and newborns from two areas primarily covered by the National Health System, with high rates of infant mortality, in Vitória, Espírito Santo, Brazil. During 30 months, 67 (78%) children and mothers were visited seven times at home by trained interviewers, who filled out survey forms. Data on feeding and sucking habits and on socioeconomic and maternal characteristics were collected. Variables were analyzed by Cox regression models, considering duration of breastfeeding as the dependent variable, and by logistic regression, in which the dependent variable was the presence of a breastfed child at different post-natal ages. RESULTS In the logistic regression model, pacifier sucking (adjusted Odds Ratio: 3.4; 95%CI 1.2-9.55) and bottle feeding (adjusted Odds Ratio: 4.4; 95%CI 1.6-12.1) increased the chance of weaning a child before one year of age. Variables associated with breastfeeding duration in the Cox regression model were: pacifier sucking (adjusted Hazard Ratio 2.0; 95%CI 1.2-3.3) and bottle feeding (adjusted Hazard Ratio 2.0; 95%CI 1.2-3.5). However, protective factors (maternal age and family income) differed between the two models. CONCLUSIONS Risk and protective factors associated with cessation of breastfeeding may be analyzed by different models of statistical regression. Cox regression models are adequate for analyzing such factors in longitudinal studies.

  8. Bayesian approach to errors-in-variables in regression models

    Science.gov (United States)

    Rozliman, Nur Aainaa; Ibrahim, Adriana Irawati Nur; Yunus, Rossita Mohammad

    2017-05-01

    In many applications and experiments, data sets are often contaminated with error or mismeasured covariates. When at least one of the covariates in a model is measured with error, an Errors-in-Variables (EIV) model can be used. Measurement error, when not corrected, would cause misleading statistical inferences and analysis. Therefore, our goal is to examine the relationship between the outcome variable and the unobserved exposure variable, given the observed mismeasured surrogate, by applying the Bayesian formulation to the EIV model. We shall extend the flexible parametric method proposed by Hossain and Gustafson (2009) to another nonlinear regression model, namely the Poisson regression model. We shall then illustrate the application of this approach via a simulation study using Markov chain Monte Carlo sampling methods.
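
    A minimal sketch of a Bayesian errors-in-variables Poisson regression in this spirit, assuming PyMC for MCMC sampling and a known measurement-error scale (both are assumptions for illustration, not the paper's implementation):

```python
import numpy as np
import pymc as pm

rng = np.random.default_rng(0)
n = 200
x_true = rng.normal(0.0, 1.0, n)          # unobserved true covariate
w = x_true + rng.normal(0.0, 0.3, n)      # mismeasured surrogate
y = rng.poisson(np.exp(0.5 + 0.8 * x_true))

with pm.Model():
    alpha = pm.Normal("alpha", 0.0, 5.0)
    beta = pm.Normal("beta", 0.0, 5.0)
    x = pm.Normal("x", 0.0, 1.0, shape=n)              # prior on true exposure
    pm.Normal("w_obs", mu=x, sigma=0.3, observed=w)    # measurement model
    pm.Poisson("y_obs", mu=pm.math.exp(alpha + beta * x), observed=y)
    idata = pm.sample(1000, tune=1000, chains=2, progressbar=False)
```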

  9. Understanding and forecasting polar stratospheric variability with statistical models

    Directory of Open Access Journals (Sweden)

    C. Blume

    2012-07-01

    Full Text Available The variability of the north-polar stratospheric vortex is a prominent aspect of the middle atmosphere. This work investigates a wide class of statistical models with respect to their ability to model geopotential and temperature anomalies representing variability in the polar stratosphere. Four partly nonstationary, nonlinear models are assessed: linear discriminant analysis (LDA); a cluster method based on finite elements (FEM-VARX); a neural network, namely the multi-layer perceptron (MLP); and support vector regression (SVR). These methods model time series by incorporating all significant external factors simultaneously, including ENSO, the QBO, the solar cycle and volcanic eruptions, and then quantify their statistical importance. We show that variability in reanalysis data from 1980 to 2005 is successfully modeled. The period from 2005 to 2011 can be hindcasted to a certain extent, where MLP performs significantly better than the remaining models. However, variability remains that cannot be statistically hindcasted within the current framework, such as the unexpected major warming in January 2009. Finally, the statistical model with the best generalization performance is used to predict a winter 2011/12 with warm and weak vortex conditions. A vortex breakdown is predicted for late January, early February 2012.

  10. Gaussian Mixture Model of Heart Rate Variability

    Science.gov (United States)

    Costa, Tommaso; Boccignone, Giuseppe; Ferraro, Mario

    2012-01-01

    Heart rate variability (HRV) is an important measure of sympathetic and parasympathetic functions of the autonomic nervous system and a key indicator of cardiovascular condition. This paper proposes a novel method to investigate HRV, namely by modelling it as a linear combination of Gaussians. Results show that three Gaussians are enough to describe the stationary statistics of heart rate variability and to provide a straightforward interpretation of the HRV power spectrum. Comparisons have also been made with synthetic data generated from different physiologically based models, showing the plausibility of the Gaussian mixture parameters. PMID:22666386
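
    A short sketch of the idea, fitting a three-component Gaussian mixture to synthetic RR-interval data (scikit-learn and the toy data are assumptions; the paper's physiological series are not reproduced here):

```python
import numpy as np
from sklearn.mixture import GaussianMixture

# Synthetic RR intervals (seconds) drawn from three overlapping regimes.
rng = np.random.default_rng(0)
rr = np.concatenate([rng.normal(0.80, 0.02, 2000),
                     rng.normal(0.95, 0.05, 1000),
                     rng.normal(1.10, 0.04, 500)]).reshape(-1, 1)

gmm = GaussianMixture(n_components=3, random_state=0).fit(rr)
for w, m, v in zip(gmm.weights_, gmm.means_.ravel(), gmm.covariances_.ravel()):
    print(f"weight={w:.2f}  mean={m:.3f}s  sd={np.sqrt(v):.3f}s")
```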

  11. Verification of models for ballistic movement time and endpoint variability.

    Science.gov (United States)

    Lin, Ray F; Drury, Colin G

    2013-01-01

    A hand control movement is composed of several ballistic movements. The time required to perform a ballistic movement and its endpoint variability are two important properties in developing movement models. The purpose of this study was to test potential models for predicting these two properties. Twelve participants conducted ballistic movements of specific amplitudes using a drawing tablet. The measured data of movement time and endpoint variability were then used to verify the models. This study was successful, with Hoffmann and Gan's movement time model (Hoffmann, 1981; Gan and Hoffmann, 1988) predicting more than 90.7% of data variance for 84 individual measurements. A new, theoretically developed ballistic movement variability model proved to be better than Howarth, Beggs, and Bowden's (1971) model, predicting on average 84.8% of stopping-variable error and 88.3% of aiming-variable error. These two validated models will help build solid theoretical movement models and evaluate input devices. This article provides better models for predicting end accuracy and movement time of ballistic movements, which are desirable in rapid aiming tasks such as keying in numbers on a smart phone. The models allow better design of aiming tasks, for example button sizes on mobile phones for different user populations.

  12. A hydrochemical modelling framework for combined assessment of spatial and temporal variability in stream chemistry: application to Plynlimon, Wales

    Directory of Open Access Journals (Sweden)

    H.J. Foster

    2001-01-01

    Full Text Available Recent concern about the risk to biota from acidification in upland areas, due to air pollution and land-use change (such as the planting of coniferous forests), has generated a need to model catchment hydrochemistry to assess environmental risk and define protection strategies. Previous approaches have tended to concentrate on quantifying either spatial variability at a regional scale or temporal variability at a given location. However, to protect biota from ‘acid episodes’, an assessment of both temporal and spatial variability of stream chemistry is required at a catchment scale. In addition, quantification of temporal variability needs to represent both episodic event response and long-term variability caused by deposition and/or land-use change. Both spatial and temporal variability in streamwater chemistry are considered in a new modelling methodology based on application to the Plynlimon catchments, central Wales. A two-component End-Member Mixing Analysis (EMMA) is used whereby low and high flow chemistry are taken to represent ‘groundwater’ and ‘soil water’ end-members. The conventional EMMA method is extended to incorporate spatial variability in the two end-members across the catchments by quantifying the Acid Neutralisation Capacity (ANC) of each in terms of a statistical distribution. These are then input as stochastic variables to a two-component mixing model, thereby accounting for variability of ANC both spatially and temporally. The model is coupled to a long-term acidification model (MAGIC) to predict the evolution of the end-members and, hence, the response to future scenarios. The results can be plotted as a function of time and space, which enables better assessment of the likely effects of pollution deposition or land-use changes in the future on the stream chemistry than current methods, which use catchment average values. The model is also a useful basis for further research into linkage between hydrochemistry
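
    A minimal sketch of the stochastic two-component mixing step described above, with end-member ANC drawn from assumed distributions (all parameter values are illustrative, not the Plynlimon values):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 10_000
anc_gw = rng.normal(150.0, 40.0, n)    # 'groundwater' end-member ANC, ueq/L
anc_soil = rng.normal(-30.0, 25.0, n)  # 'soil water' end-member ANC, ueq/L
frac_soil = rng.beta(2, 3, n)          # event-dependent soil-water fraction

# Two-component mixing: stream ANC as a flow-weighted blend of end-members.
anc_stream = frac_soil * anc_soil + (1.0 - frac_soil) * anc_gw
print("P(stream ANC < 0) across events:", np.mean(anc_stream < 0))
```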

  13. Evaluation of Brace Treatment for Infant Hip Dislocation in a Prospective Cohort: Defining the Success Rate and Variables Associated with Failure.

    Science.gov (United States)

    Upasani, Vidyadhar V; Bomar, James D; Matheney, Travis H; Sankar, Wudbhav N; Mulpuri, Kishore; Price, Charles T; Moseley, Colin F; Kelley, Simon P; Narayanan, Unni; Clarke, Nicholas M P; Wedge, John H; Castañeda, Pablo; Kasser, James R; Foster, Bruce K; Herrera-Soto, Jose A; Cundy, Peter J; Williams, Nicole; Mubarak, Scott J

    2016-07-20

    The use of a brace has been shown to be an effective treatment for hip dislocation in infants; however, previous studies of such treatment have been single-center or retrospective. The purpose of the current study was to evaluate the success rate for brace use in the treatment of infant hip dislocation in an international, multicenter, prospective cohort, and to identify the variables associated with brace failure. All dislocations were verified with use of ultrasound or radiography prior to the initiation of treatment, and patients were followed prospectively for a minimum of 18 months. Successful treatment was defined as the use of a brace that resulted in a clinically and radiographically reduced hip, without surgical intervention. The Mann-Whitney test, chi-square analysis, and Fisher exact test were used to identify risk factors for brace failure. A multivariate logistic regression model was used to determine the probability of brace failure according to the risk factors identified. Brace treatment was successful in 162 (79%) of the 204 dislocated hips in this series. Six variables were found to be significant risk factors for failure: developing femoral nerve palsy during brace treatment (p = 0.001), treatment with a static brace (p failure, whereas hips with 4 or 5 risk factors had a 100% probability of failure. These data provide valuable information for patient families and their providers regarding the important variables that influence successful brace treatment for dislocated hips in infants. Prognostic Level I. See Instructions for Authors for a complete description of levels of evidence. Copyright © 2016 by The Journal of Bone and Joint Surgery, Incorporated.

  14. Galactic models with variable spiral structure

    International Nuclear Information System (INIS)

    James, R.A.; Sellwood, J.A.

    1978-01-01

    A series of three-dimensional computer simulations of disc galaxies has been run in which the self-consistent potential of the disc stars is supplemented by that arising from a small uniform Population II sphere. The models show variable spiral structure, which is more pronounced for thin discs. In addition, the thin discs form weak bars. In one case variable spiral structure associated with this bar has been seen. The relaxed discs are cool outside resonance regions. (author)

  15. Implementation of a user defined mine blast model in LSDYNA

    NARCIS (Netherlands)

    Tyler-Street, M.; Leerdam, P.J.C.

    2012-01-01

    A user defined mine blast model has been developed and implemented into the explicit finite element code LS-DYNA to provide a numerically efficient method for simulating an antivehicular mine blast. The objective is to provide a simple and robust numerical method which is able to represent both the

  16. A variable-order fractal derivative model for anomalous diffusion

    Directory of Open Access Journals (Sweden)

    Liu Xiaoting

    2017-01-01

    Full Text Available This paper develops a variable-order fractal derivative model for anomalous diffusion. Previous investigations have indicated that the medium structure, fractal dimension or porosity may change with time or space during solute transport processes, resulting in time- or space-dependent anomalous diffusion phenomena. This study therefore introduces a variable-order fractal derivative diffusion model, in which the index of the fractal derivative depends on the temporal moment or spatial position, to characterize the above-mentioned anomalous diffusion (or transport) processes. Compared with other models, the main advantages of the new model in description and physical explanation are explored by numerical simulation. Further discussion of the dissimilarities, such as computational efficiency, diffusion behavior and heavy-tail phenomena, between the new model and the variable-order fractional derivative model is also offered.
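
    As a small illustration of what a time-dependent derivative order implies, the sketch below evaluates the anomalous mean-squared-displacement scaling MSD ~ t^alpha(t) with a hypothetical drifting order alpha(t); the functional form is an assumption for illustration, not the paper's example:

```python
import numpy as np

def alpha(t):
    """Hypothetical order drifting from subdiffusive (0.6) toward Fickian (1)."""
    return 0.6 + 0.4 * (1.0 - np.exp(-t / 50.0))

t = np.linspace(1.0, 500.0, 500)
msd = t ** alpha(t)                                 # anomalous MSD scaling
# instantaneous diffusion exponent d ln(MSD) / d ln(t)
inst_exponent = np.gradient(np.log(msd), np.log(t))
```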

  17. DEFINING AND CONSTRUCTING THE TEACHING MODEL OF ENTREPRENEUR EDUCATION BASED ON ENTREPRENEURIAL INTENTION MODEL

    OpenAIRE

    Henry Pribadi

    2005-01-01

    The concept of entrepreneurship has been widely debated, including whether one needs formal entrepreneurial education to become an entrepreneur. Most formal entrepreneurship education exhibits the same flaw: the lack of teaching soft skills and building the necessary entrepreneurial characteristics. Intention-based models of entrepreneurship education try to fill the gap by focusing the education on the human intention of becoming an entrepreneur, by defining four models of entrepreneurship edu...

  18. Higher-dimensional cosmological model with variable gravitational ...

    Indian Academy of Sciences (India)

    We have studied five-dimensional homogeneous cosmological models with a variable gravitational constant and bulk viscosity in Lyra geometry. Exact solutions for the field equations have been obtained and the physical properties of the models are discussed. It has been observed that the results of the new models are well within the observational ...

  19. Multi-scale climate modelling over Southern Africa using a variable-resolution global model

    CSIR Research Space (South Africa)

    Engelbrecht, FA

    2011-12-01

    Full Text Available -mail: fengelbrecht@csir.co.za Multi-scale climate modelling over Southern Africa using a variable-resolution global model FA Engelbrecht1, 2*, WA Landman1, 3, CJ Engelbrecht4, S Landman5, MM Bopape1, B Roux6, JL McGregor7 and M Thatcher7 1 CSIR Natural... improvement. Keywords: multi-scale climate modelling, variable-resolution atmospheric model Introduction Dynamic climate models have become the primary tools for the projection of future climate change, at both the global and regional scales. Dynamic...

  20. Modelling the co-evolution of indirect genetic effects and inherited variability.

    Science.gov (United States)

    Marjanovic, Jovana; Mulder, Han A; Rönnegård, Lars; Bijma, Piter

    2018-03-28

    When individuals interact, their phenotypes may be affected not only by their own genes but also by genes in their social partners. This phenomenon is known as Indirect Genetic Effects (IGEs). In aquaculture species and some plants, however, competition not only affects trait levels of individuals, but also inflates variability of trait values among individuals. In the field of quantitative genetics, the variability of trait values has been studied as a quantitative trait in itself, and is often referred to as inherited variability. Such studies, however, consider only the genetic effect of the focal individual on trait variability and do not make a connection to competition. Although the observed phenotypic relationship between competition and variability suggests an underlying genetic relationship, the current quantitative genetic models of IGE and inherited variability do not allow for such a relationship. The lack of quantitative genetic models that connect IGEs to inherited variability limits our understanding of the potential of variability to respond to selection, both in nature and agriculture. Models of trait levels, for example, show that IGEs may considerably change heritable variation in trait values. Currently, we lack the tools to investigate whether this result extends to variability of trait values. Here we present a model that integrates IGEs and inherited variability. In this model, the target phenotype, say growth rate, is a function of the genetic and environmental effects of the focal individual and of the difference in trait value between the social partner and the focal individual, multiplied by a regression coefficient. The regression coefficient is a genetic trait, which is a measure of cooperation; a negative value indicates competition, a positive value cooperation, and an increasing value due to selection indicates the evolution of cooperation. In contrast to the existing quantitative genetic models, our model allows for co-evolution of
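
    The verbal description above corresponds to a compact trait-level equation. A minimal sketch in our own notation (the symbols are illustrative, not taken from the paper): with A and E denoting additive-genetic and environmental effects, the phenotype of a focal individual i interacting with partner j is

      P_i = A_i + E_i + \psi_i \, (P_j - P_i), \qquad
      \psi_i = \mu_\psi + A_{\psi,i} + E_{\psi,i},

    so a negative \psi_i captures competition, a positive \psi_i cooperation, and a selection response in the heritable component A_{\psi} corresponds to the evolution of cooperation.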

  1. Variable Renewable Energy in Long-Term Planning Models: A Multi-Model Perspective

    Energy Technology Data Exchange (ETDEWEB)

    Cole, Wesley [National Renewable Energy Lab. (NREL), Golden, CO (United States); Frew, Bethany [National Renewable Energy Lab. (NREL), Golden, CO (United States); Mai, Trieu [National Renewable Energy Lab. (NREL), Golden, CO (United States); Sun, Yinong [National Renewable Energy Lab. (NREL), Golden, CO (United States); Bistline, John [Electric Power Research Inst. (EPRI), Knoxville, TN (United States); Blanford, Geoffrey [Electric Power Research Inst. (EPRI), Knoxville, TN (United States); Young, David [Electric Power Research Inst. (EPRI), Knoxville, TN (United States); Marcy, Cara [U.S. Energy Information Administration, Washington, DC (United States); Namovicz, Chris [U.S. Energy Information Administration, Washington, DC (United States); Edelman, Risa [US Environmental Protection Agency (EPA), Washington, DC (United States); Meroney, Bill [US Environmental Protection Agency (EPA), Washington, DC (United States); Sims, Ryan [US Environmental Protection Agency (EPA), Washington, DC (United States); Stenhouse, Jeb [US Environmental Protection Agency (EPA), Washington, DC (United States); Donohoo-Vallett, Paul [Dept. of Energy (DOE), Washington DC (United States)

    2017-11-01

    Long-term capacity expansion models of the U.S. electricity sector have long been used to inform electric sector stakeholders and decision-makers. With the recent surge in variable renewable energy (VRE) generators — primarily wind and solar photovoltaics — the need to appropriately represent VRE generators in these long-term models has increased. VRE generators are especially difficult to represent for a variety of reasons, including their variability, uncertainty, and spatial diversity. This report summarizes the analyses and model experiments that were conducted as part of two workshops on modeling VRE for national-scale capacity expansion models. It discusses the various methods for treating VRE among four modeling teams from the Electric Power Research Institute (EPRI), the U.S. Energy Information Administration (EIA), the U.S. Environmental Protection Agency (EPA), and the National Renewable Energy Laboratory (NREL). The report reviews the findings from the two workshops and emphasizes the areas where there is still need for additional research and development on analysis tools to incorporate VRE into long-term planning and decision-making. This research is intended to inform the energy modeling community on the modeling of variable renewable resources, and is not intended to advocate for or against any particular energy technologies, resources, or policies.

  2. Sources and Impacts of Modeled and Observed Low-Frequency Climate Variability

    Science.gov (United States)

    Parsons, Luke Alexander

    Here we analyze climate variability using instrumental, paleoclimate (proxy), and the latest climate model data to understand more about the sources and impacts of low-frequency climate variability. Understanding the drivers of climate variability at interannual to century timescales is important for studies of climate change, including analyses of detection and attribution of climate change impacts. Additionally, correctly modeling the sources and impacts of variability is key to the simulation of abrupt change (Alley et al., 2003) and extended drought (Seager et al., 2005; Pelletier and Turcotte, 1997; Ault et al., 2014). In Appendix A, we employ an Earth system model (GFDL-ESM2M) simulation to study the impacts of a weakening of the Atlantic meridional overturning circulation (AMOC) on the climate of the American Tropics. The AMOC drives some degree of local and global internal low-frequency climate variability (Manabe and Stouffer, 1995; Thornalley et al., 2009) and helps control the position of the tropical rainfall belt (Zhang and Delworth, 2005). We find that a major weakening of the AMOC can cause large-scale temperature, precipitation, and carbon storage changes in Central and South America. Our results suggest that possible future changes in AMOC strength alone will not be sufficient to drive a large-scale dieback of the Amazonian forest, but this key natural ecosystem is sensitive to dry-season length and timing of rainfall (Parsons et al., 2014). In Appendix B, we compare a paleoclimate record of precipitation variability in the Peruvian Amazon to climate model precipitation variability. The paleoclimate (Lake Limon) record indicates that precipitation variability in western Amazonia is 'red' (i.e., increasing variability with timescale). By contrast, most state-of-the-art climate models indicate precipitation variability in this region is nearly 'white' (i.e., equal variability across timescales). This paleo-model disagreement in the overall
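
    The 'red' versus 'white' distinction above can be made concrete with a quick spectral check (our illustration, not the study's code); the series length and AR(1) persistence parameter below are arbitrary choices.

      import numpy as np
      from scipy.signal import welch

      # Contrast the spectral slope of a 'white' series with an AR(1) 'red'
      # series, in the spirit of the paleo-vs-model comparison above.
      rng = np.random.default_rng(0)
      n = 4096
      white = rng.standard_normal(n)

      red = np.zeros(n)                  # AR(1): variance accumulates with timescale
      phi = 0.95                         # arbitrary persistence parameter
      for i in range(1, n):
          red[i] = phi * red[i - 1] + rng.standard_normal()

      for name, x in [("white", white), ("red", red)]:
          f, p = welch(x, nperseg=1024)
          keep = f > 0
          slope = np.polyfit(np.log(f[keep]), np.log(p[keep]), 1)[0]
          print(name, "log-log PSD slope ~", round(slope, 2))   # ~0 vs clearly < 0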

  3. The necessity of connection structures in neural models of variable binding.

    Science.gov (United States)

    van der Velde, Frank; de Kamps, Marc

    2015-08-01

    In his review of neural binding problems, Feldman (Cogn Neurodyn 7:1-11, 2013) addressed two types of models as solutions of (novel) variable binding. One type uses labels such as phase synchrony of activation. The other ('connectivity based') type uses dedicated connection structures to achieve novel variable binding. Feldman argued that label (synchrony) based models are the only possible candidates to handle novel variable binding, whereas connectivity based models lack the flexibility required for that. We argue and illustrate that Feldman's analysis is incorrect. Contrary to his conclusion, connectivity based models are the only viable candidates for models of novel variable binding because they are the only type of models that can produce behavior. We will show that the label (synchrony) based models analyzed by Feldman are in fact examples of connectivity based models. Feldman's analysis that novel variable binding can be achieved without existing connection structures seems to result from analyzing the binding problem in a wrong frame of reference, in particular in an outside instead of the required inside frame of reference. Connectivity based models can be models of novel variable binding when they possess a connection structure that resembles a small-world network, as found in the brain. We will illustrate binding with this type of model with episode binding and the binding of words, including novel words, in sentence structures.

  4. A Non-Gaussian Spatial Generalized Linear Latent Variable Model

    KAUST Repository

    Irincheeva, Irina; Cantoni, Eva; Genton, Marc G.

    2012-01-01

    We consider a spatial generalized linear latent variable model with and without normality distributional assumption on the latent variables. When the latent variables are assumed to be multivariate normal, we apply a Laplace approximation. To relax the assumption of marginal normality in favor of a mixture of normals, we construct a multivariate density with Gaussian spatial dependence and given multivariate margins. We use the pairwise likelihood to estimate the corresponding spatial generalized linear latent variable model. The properties of the resulting estimators are explored by simulations. In the analysis of an air pollution data set the proposed methodology uncovers weather conditions to be a more important source of variability than air pollution in explaining all the causes of non-accidental mortality excluding accidents. © 2012 International Biometric Society.

  6. Climatological variability in regional air pollution

    International Nuclear Information System (INIS)

    Shannon, J.D.; Trexler, E.C. Jr.

    1995-01-01

    Although some air pollution modeling studies examine events that have already occurred (e.g., the Chernobyl plume) with relevant meteorological conditions largely known, most pollution modeling studies address expected or potential scenarios for the future. Future meteorological conditions, the major pollutant forcing function other than emissions, are inherently uncertain although much relevant information is contained in past observational data. For convenience in our discussions of regional pollutant variability unrelated to emission changes, we define meteorological variability as short-term (within-season) pollutant variability and climatological variability as year-to-year changes in seasonal averages and accumulations of pollutant variables. In observations and in some of our simulations the effects are confounded because for seasons of two different years both the mean and the within-season character of a pollutant variable may change. Effects of climatological and meteorological variability on means and distributions of air pollution parameters, particularly those related to regional visibility, are illustrated. Over periods of up to a decade climatological variability may mask or overstate improvements resulting from emission controls. The importance of including climatological uncertainties in assessing potential policies, particularly when based partly on calculated source-receptor relationships, is highlighted
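
    The two kinds of variability defined above can be separated directly from a daily series. A minimal sketch follows; the synthetic 'pollutant' data (20 years of 90-day seasons) are purely illustrative assumptions.

      import numpy as np

      # Following the record's definitions: meteorological variability =
      # within-season spread; climatological variability = year-to-year
      # spread of the seasonal means.
      rng = np.random.default_rng(1)
      years, days = 20, 90
      seasonal_level = 10 + rng.standard_normal(years)          # year-to-year signal
      daily = seasonal_level[:, None] + 3 * rng.standard_normal((years, days))

      meteorological = daily.std(axis=1, ddof=1).mean()         # mean within-season sd
      climatological = daily.mean(axis=1).std(ddof=1)           # sd of seasonal means

      print("meteorological (within-season) sd:", round(meteorological, 2))
      print("climatological (year-to-year) sd :", round(climatological, 2))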

  7. Improved variable reduction in partial least squares modelling based on predictive-property-ranked variables and adaptation of partial least squares complexity.

    Science.gov (United States)

    Andries, Jan P M; Vander Heyden, Yvan; Buydens, Lutgarde M C

    2011-10-31

    The calibration performance of partial least squares for one response variable (PLS1) can be improved by elimination of uninformative variables. Many methods are based on so-called predictive variable properties, which are functions of various PLS-model parameters, and which may change during the variable reduction process. In these methods variable reduction is made on the variables ranked in descending order for a given variable property. The methods start with full spectrum modelling. Iteratively, until a specified number of remaining variables is reached, the variable with the smallest property value is eliminated; a new PLS model is calculated, followed by a renewed ranking of the variables. The Stepwise Variable Reduction methods using Predictive-Property-Ranked Variables are denoted as SVR-PPRV. In the existing SVR-PPRV methods the PLS model complexity is kept constant during the variable reduction process. In this study, three new SVR-PPRV methods are proposed, in which a possibility of decreasing the PLS model complexity during the variable reduction process is built in. Therefore we denote our methods as PPRVR-CAM methods (Predictive-Property-Ranked Variable Reduction with Complexity Adapted Models). The selective and predictive abilities of the new methods are investigated and tested, using the absolute PLS regression coefficients as predictive property. They were compared with two modifications of existing SVR-PPRV methods (with constant PLS model complexity) and with two reference methods: uninformative variable elimination followed by either a genetic algorithm for PLS (UVE-GA-PLS) or an interval PLS (UVE-iPLS). The performance of the methods is investigated in conjunction with two data sets from near-infrared sources (NIR) and one simulated set. The selective and predictive performances of the variable reduction methods are compared statistically using the Wilcoxon signed rank test. The three newly developed PPRVR-CAM methods were able to retain
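
    A minimal sketch of the ranked-elimination loop described above, using the absolute PLS regression coefficients as the predictive property. The complexity-adaptation step (re-choosing the number of latent variables by cross-validation at each iteration) is simplified here, and all names and data are ours, not the paper's.

      import numpy as np
      from sklearn.cross_decomposition import PLSRegression
      from sklearn.model_selection import cross_val_score

      def pprvr_cam_sketch(X, y, n_keep=10, max_lv=10):
          """Backward elimination of PLS1 variables ranked by |coefficient|,
          re-adapting model complexity each iteration (simplified sketch)."""
          idx = np.arange(X.shape[1])
          while idx.size > n_keep:
              # adapt complexity: pick the LV count with the best CV error
              best_lv, best_score = 1, -np.inf
              for lv in range(1, min(max_lv, idx.size) + 1):
                  score = cross_val_score(
                      PLSRegression(n_components=lv), X[:, idx], y, cv=5,
                      scoring="neg_root_mean_squared_error").mean()
                  if score > best_score:
                      best_lv, best_score = lv, score
              model = PLSRegression(n_components=best_lv).fit(X[:, idx], y)
              coefs = np.abs(np.ravel(model.coef_))
              idx = np.delete(idx, np.argmin(coefs))   # drop least predictive var
          return idx

      rng = np.random.default_rng(2)
      X = rng.standard_normal((60, 50))                # synthetic 'spectra'
      y = X[:, :5] @ rng.standard_normal(5) + 0.1 * rng.standard_normal(60)
      print("retained columns:", pprvr_cam_sketch(X, y))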

  8. Modeling of carbon sequestration in coal-beds: A variable saturated simulation

    International Nuclear Information System (INIS)

    Liu Guoxiang; Smirnov, Andrei V.

    2008-01-01

    Storage of carbon dioxide in deep coal seams is a profitable method to reduce the concentration of greenhouse gases in the atmosphere, while methane can be extracted as a byproduct during carbon dioxide injection into the coal seam. In this procedure, the key requirement is to keep carbon dioxide in the coal seam, without escape, over the long term. This depends on many factors such as the properties of the coal basin, fracture state, phase equilibrium, etc., especially the porosity, permeability and saturation of the coal seam. In this paper, a variable saturation model was developed to predict the capacity of carbon dioxide sequestration and coal-bed methane recovery. This variable saturation model can be used to track saturation variability as partial pressures change during carbon dioxide injection. Saturation variability is a key factor in predicting the capacity of carbon dioxide storage and methane recovery. Based on this variable saturation model, a set of related variables including capillary pressure, relative permeability, porosity, a coupled adsorption model, and concentration and temperature equations were solved. Simulation results agree with historical data for both the variable saturation model and the adsorption model constructed from Langmuir equations. The Appalachian basin is modeled as an example of carbon dioxide sequestration in this paper. The results of the study and the developed models can provide projections for CO2 sequestration and methane recovery in coal-beds within different regional specifics
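
    The 'adsorption model constructed from Langmuir equations' mentioned above is commonly written, for binary CO2/CH4 competition in coal, in the extended Langmuir form sketched below. The isotherm constants are placeholder values chosen for illustration, not data from the study.

      # Extended (competitive) Langmuir sketch for binary CO2/CH4 adsorption:
      #   q_i = qmax_i * b_i * p_i / (1 + b_CO2*p_CO2 + b_CH4*p_CH4)
      def extended_langmuir(p_co2, p_ch4,
                            qmax=(30.0, 15.0),    # m3/t, (CO2, CH4) -- assumed
                            b=(0.8, 0.3)):        # 1/MPa, (CO2, CH4) -- assumed
          denom = 1.0 + b[0] * p_co2 + b[1] * p_ch4
          q_co2 = qmax[0] * b[0] * p_co2 / denom
          q_ch4 = qmax[1] * b[1] * p_ch4 / denom
          return q_co2, q_ch4

      # As CO2 partial pressure rises during injection, CH4 loading falls:
      # the displacement mechanism behind enhanced coal-bed methane recovery.
      for p_co2 in (0.0, 2.0, 6.0):
          print(p_co2, extended_langmuir(p_co2, p_ch4=3.0))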

  9. Mediterranean climate modelling: variability and climate change scenarios

    International Nuclear Information System (INIS)

    Somot, S.

    2005-12-01

    Air-sea fluxes, open-sea deep convection and cyclo-genesis are studied in the Mediterranean with the development of a regional coupled model (AORCM). It accurately simulates these processes, and their climate variabilities are quantified and studied. The regional coupling shows a significant impact on the number of intense winter cyclo-genesis events as well as on associated air-sea fluxes and precipitation. A lower inter-annual variability than in non-coupled models is simulated for fluxes and deep convection. The feedbacks driving this variability are understood. The climate change response is then analysed for the 21st century with the non-coupled models: cyclo-genesis decreases, associated precipitation increases in spring and autumn and decreases in summer. Moreover, a warming and salting of the Mediterranean as well as a strong weakening of its thermohaline circulation occur. This study also concludes with the necessity of using AORCMs to assess climate change impacts on the Mediterranean. (author)

  10. Predictive and Descriptive CoMFA Models: The Effect of Variable Selection.

    Science.gov (United States)

    Sepehri, Bakhtyar; Omidikia, Nematollah; Kompany-Zareh, Mohsen; Ghavami, Raouf

    2018-01-01

    Aims & Scope: In this research, 8 variable selection approaches were used to investigate the effect of variable selection on the predictive power and stability of CoMFA models. Three data sets including 36 EPAC antagonists, 79 CD38 inhibitors and 57 ATAD2 bromodomain inhibitors were modelled by CoMFA. First of all, for all three data sets, CoMFA models with all CoMFA descriptors were created then by applying each variable selection method a new CoMFA model was developed so for each data set, 9 CoMFA models were built. Obtained results show noisy and uninformative variables affect CoMFA results. Based on created models, applying 5 variable selection approaches including FFD, SRD-FFD, IVE-PLS, SRD-UVEPLS and SPA-jackknife increases the predictive power and stability of CoMFA models significantly. Among them, SPA-jackknife removes most of the variables while FFD retains most of them. FFD and IVE-PLS are time consuming process while SRD-FFD and SRD-UVE-PLS run need to few seconds. Also applying FFD, SRD-FFD, IVE-PLS, SRD-UVE-PLS protect CoMFA countor maps information for both fields. Copyright© Bentham Science Publishers; For any queries, please email at epub@benthamscience.org.

  11. Variable-Structure Control of a Model Glider Airplane

    Science.gov (United States)

    Waszak, Martin R.; Anderson, Mark R.

    2008-01-01

    A variable-structure control system designed to enable a fuselage-heavy airplane to recover from spin has been demonstrated in a hand-launched, instrumented model glider airplane. Variable-structure control is a high-speed switching feedback control technique that has been developed for control of nonlinear dynamic systems.

  12. DEFINING RECOVERY GOALS AND STRATEGIES FOR ENDANGERED SPECIES USING SPATIALLY-EXPLICIT POPULATION MODELS

    Science.gov (United States)

    We used a spatially explicit population model of wolves (Canis lupus) to propose a framework for defining rangewide recovery priorities and finer-scale strategies for regional reintroductions. The model predicts that Yellowstone and central Idaho, where wolves have recently been ...

  13. Computational Fluid Dynamics Modeling of a Supersonic Nozzle and Integration into a Variable Cycle Engine Model

    Science.gov (United States)

    Connolly, Joseph W.; Friedlander, David; Kopasakis, George

    2015-01-01

    This paper covers the development of an integrated nonlinear dynamic simulation for a variable cycle turbofan engine and nozzle that can be integrated with an overall vehicle Aero-Propulso-Servo-Elastic (APSE) model. A previously developed variable cycle turbofan engine model is used for this study and is enhanced here to include variable guide vanes allowing for operation across the supersonic flight regime. The primary focus of this study is to improve the fidelity of the model's thrust response by replacing the simple choked flow equation convergent-divergent nozzle model with a MacCormack method based quasi-1D model. The dynamic response of the nozzle model using the MacCormack method is verified by comparing it against a model of the nozzle using the conservation element/solution element method. A methodology is also presented for the integration of the MacCormack nozzle model with the variable cycle engine.
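
    For readers unfamiliar with the method named above, the MacCormack scheme is a two-step predictor-corrector for conservation laws. The sketch below (ours) applies it to the inviscid Burgers equation on a periodic domain rather than to the quasi-1D nozzle equations, purely to show the stencil; grid and step sizes are illustrative.

      import numpy as np

      # MacCormack predictor-corrector on u_t + f(u)_x = 0 with f = u**2/2.
      nx, nt = 200, 100
      dx, dt = 1.0 / nx, 0.001
      x = np.linspace(0.0, 1.0, nx, endpoint=False)
      u = 1.5 + np.sin(2.0 * np.pi * x)        # smooth periodic initial state

      f = lambda v: 0.5 * v**2
      for _ in range(nt):
          # predictor: forward difference of the flux
          up = u - dt / dx * (np.roll(f(u), -1) - f(u))
          # corrector: backward difference on the predicted state, then average
          u = 0.5 * (u + up - dt / dx * (f(up) - np.roll(f(up), 1)))

      print("mean conserved to ~", abs(u.mean() - 1.5))   # conservative scheme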

  14. Analytical Model for LLC Resonant Converter With Variable Duty-Cycle Control

    DEFF Research Database (Denmark)

    Shen, Yanfeng; Wang, Huai; Blaabjerg, Frede

    2016-01-01

    In LLC resonant converters, the variable duty-cycle control is usually combined with a variable frequency control to widen the gain range, improve the light-load efficiency, or suppress the inrush current during start-up. However, a proper analytical model for the variable duty-cycle controlled LLC converter is still not available due to the complexity of operation modes and the nonlinearity of steady-state equations. This paper makes the effort to develop an analytical model for the LLC converter with variable duty-cycle control. All possible operation modes and critical operation characteristics are identified and discussed. The proposed model enables a better understanding of the operation characteristics and fast parameter design of the LLC converter, which otherwise cannot be achieved by the existing simulation-based methods and numerical models. The results obtained from the proposed model...

  15. An observational and modeling study of the regional impacts of climate variability

    Science.gov (United States)

    Horton, Radley M.

    during El Nino events. Based on the results from Chapter One, the analysis is expanded in several ways in Chapter Two. To gain a more complete and statistically meaningful understanding of ENSO, a 25 year time period is used instead of a single event. To gain a fuller understanding of climate variability, additional patterns are analyzed. Finally analysis is conducted at the regional scales that are of interest to farmers and agricultural planners. Key findings are that GISS ModelE can reproduce: (1) the spatial pattern associated with two additional related modes, the Arctic Oscillation (AO) and the North Atlantic Oscillation (NAO); (2) rainfall patterns in Indonesia; and (3) dynamical features such as sea level pressure (SLP) gradients and wind in the study regions. When run in coupled mode, the same model reproduces similar modes spatially but with reduced variance and weak teleconnections. Since Chapter Two identified Western Indonesia as the region where GCMs hold the most promise for agricultural applications, in Chapter Three a finer spatial and temporal scale analysis of ENSO's effects is presented. Agricultural decision-making is also linked to ENSO's climate effects. Early rainy season precipitation and circulation, and same-season planting and harvesting dates, are shown to be sensitive to ENSO. The locus of ENSO convergence and rainfall anomalies is shown to be near the axis of rainy season establishment, defined as the 6--8 mm/day isohyet, an approximate threshold for irrigated rice cultivation. As the axis tracks south and east between October and January, so do ENSO anomalies. Circulation anomalies associated with ENSO are shown to be similar to those associated with rainfall anomalies, suggesting that long lead-time ENSO forecasts may allow more adaptation than 'wait and see' methods, with little loss of forecast skill. Additional findings include: (1) rice and corn yields are lower (higher) during dry (wet) trimesters and El Nino (La Nina) years; and (2

  16. A model for AGN variability on multiple time-scales

    Science.gov (United States)

    Sartori, Lia F.; Schawinski, Kevin; Trakhtenbrot, Benny; Caplar, Neven; Treister, Ezequiel; Koss, Michael J.; Urry, C. Megan; Zhang, C. E.

    2018-05-01

    We present a framework to link and describe active galactic nuclei (AGN) variability on a wide range of time-scales, from days to billions of years. In particular, we concentrate on the AGN variability features related to changes in black hole fuelling and accretion rate. In our framework, the variability features observed in different AGN at different time-scales may be explained as realisations of the same underlying statistical properties. In this context, we propose a model to simulate the evolution of AGN light curves with time based on the probability density function (PDF) and power spectral density (PSD) of the Eddington ratio (L/LEdd) distribution. Motivated by general galaxy population properties, we propose that the PDF may be inspired by the L/LEdd distribution function (ERDF), and that a single (or limited number of) ERDF+PSD set may explain all observed variability features. After outlining the framework and the model, we compile a set of variability measurements in terms of structure function (SF) and magnitude difference. We then combine the variability measurements on a SF plot ranging from days to Gyr. The proposed framework enables constraints on the underlying PSD and the ability to link AGN variability on different time-scales, therefore providing new insights into AGN variability and black hole growth phenomena.
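
    One standard way to realize a light curve from a prescribed PSD, consistent with the PDF+PSD framework sketched above, is the Timmer & Koenig (1995) algorithm, shown below in a hedged Python sketch. The power-law slope and series length are arbitrary, and matching a target PDF as well (which the model also requires) would need a further step, e.g. the method of Emmanoulopoulos et al. (2013), omitted here.

      import numpy as np

      # Timmer & Koenig-style simulation of a light curve with P(f) ~ f**-slope.
      rng = np.random.default_rng(3)
      n, slope = 4096, 2.0
      f = np.fft.rfftfreq(n, d=1.0)[1:]              # positive frequencies
      amp = f ** (-slope / 2.0)                      # |FT| ~ sqrt(PSD)
      re = rng.standard_normal(f.size) * amp
      im = rng.standard_normal(f.size) * amp
      im[-1] = 0.0                                   # Nyquist bin must be real
      spec = np.concatenate(([0.0], re + 1j * im))   # zero mean light curve
      lc = np.fft.irfft(spec, n=n)

      print("simulated points:", lc.size, "std:", round(float(lc.std()), 3))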

  17. Evaluating measurement of dynamic constructs: defining a measurement model of derivatives.

    Science.gov (United States)

    Estabrook, Ryne

    2015-03-01

    While measurement evaluation has been embraced as an important step in psychological research, evaluating measurement structures with longitudinal data is fraught with limitations. This article defines and tests a measurement model of derivatives (MMOD), which is designed to assess the measurement structure of latent constructs both for analyses of between-person differences and for the analysis of change. Simulation results indicate that MMOD outperforms existing models for multivariate analysis and provides equivalent fit to data generation models. Additional simulations show MMOD capable of detecting differences in between-person and within-person factor structures. Model features, applications, and future directions are discussed. (c) 2015 APA, all rights reserved.

  18. Classification criteria of syndromes by latent variable models

    DEFF Research Database (Denmark)

    Petersen, Janne

    2010-01-01

    The thesis has two parts; one clinical part: studying the dimensions of human immunodeficiency virus associated lipodystrophy syndrome (HALS) by latent class models, and a more statistical part: investigating how to predict scores of latent variables so these can be used in subsequent regression ... patient's characteristics. These methods may erroneously reduce multiplicity either by combining markers of different phenotypes or by mixing HALS with other processes such as aging. Latent class models identify homogenous groups of patients based on sets of variables, for example symptoms. As no gold standard exists for diagnosing HALS, the normally applied diagnostic models cannot be used. Latent class models, which have never before been used to diagnose HALS, make it possible, under certain assumptions, to: statistically evaluate the number of phenotypes, test for mixing of HALS with other processes...

  19. Defining Essential Biodiversity Variables (EBVs) as a contribution to Essential Ocean Variables (EOVs): A Core Task of the Marine Biodiversity Observation Network (MBON) to Accelerate Integration of Biological Observations in the Global Ocean Observing System (GOOS)

    Science.gov (United States)

    Pearlman, J.; Muller-Karger, F. E.; Sousa Pinto, I.; Costello, M. J.; Duffy, J. E.; Appeltans, W.; Fischer, A. S.; Canonico, G.; Klein, E.; Obura, D.; Montes, E.; Miloslavich, P.; Howard, M.

    2017-12-01

    The Marine Biodiversity Observation Network (MBON) is a networking effort under the umbrella of the Group on Earth Observations Biodiversity Observation Network (GEO BON). The objective of the MBON is to link existing groups engaged in ocean observation and help define practical indices to deploy in an operational manner to track changes in the number of marine species, the abundance and biomass of marine organisms, the diverse interactions between organisms and the environment, and the variability and change of specific habitats of interest. MBON serves as the biodiversity arm of Blue Planet, the initiative of the Group on Earth Observations (GEO) for the benefit of society. The Global Ocean Observing System (GOOS) was established under the auspices of the Intergovernmental Oceanographic Commission (IOC) in 1991 to organize international ocean observing efforts. The mission of the GOOS is to support monitoring to improve the management of marine and coastal ecosystems and resources, and to enable scientific research. GOOS is engaged in a continuing, rigorous process of identifying Essential Ocean Variables (EOVs). MBON is working with GOOS and the Ocean Biogeographic Information System (OBIS, also under the IOC) to define Essential Biodiversity Variables (EBVs) as those Essential Ocean Variables (EOVs) that have explicit taxonomic records associated with them. For practical purposes, EBVs are a subset of the EOVs. The focus is to promote the integration of biological EOVs including EBVs into the existing and planned national and international ocean observing systems. The definition avoids a proliferation of 'essential' variables across multiple organizations. MBON will continue to advance practical and wide use of EBVs and related EOVs. This is an effective way to contribute to several UN assessments (e.g., from IPBES, IPCC, and the World Ocean Assessment under the UN Regular Process), UN Sustainable Development Goals, and to address targets and goals defined under

  20. Fixed transaction costs and modelling limited dependent variables

    NARCIS (Netherlands)

    Hempenius, A.L.

    1994-01-01

    As an alternative to the Tobit model for vectors of limited dependent variables, I suggest a model that follows from explicitly including fixed costs, where appropriate, in the utility function of the decision-maker.

  1. A Step-Indexed Kripke Model of Hidden State via Recursive Properties on Recursively Defined Metric Spaces

    DEFF Research Database (Denmark)

    Schwinghammer, Jan; Birkedal, Lars; Støvring, Kristian

    2011-01-01

    ...Charguéraud and Pottier’s type and capability system including both frame and anti-frame rules. The model is a possible-worlds model based on the operational semantics and step-indexed heap relations, and the worlds are constructed as a recursively defined predicate on a recursively defined metric space. We also extend

  2. Defining pharmacy and its practice: a conceptual model for an international audience

    Directory of Open Access Journals (Sweden)

    Scahill SL

    2017-05-01

    Full Text Available Background: There is much fragmentation and little consensus in the use of descriptors for the different disciplines that make up the pharmacy sector. Globalization, reprofessionalization and the influx of other disciplines mean there is a requirement for a greater degree of standardization. This has not been well addressed in the pharmacy practice research and education literature. Objectives: To identify and define the various subdisciplines of the pharmacy sector and integrate them into an internationally relevant conceptual model based on narrative synthesis of the literature. Methods: A literature review was undertaken to understand the fragmentation in dialogue surrounding definitions relating to concepts and practices in the context of the pharmacy sector. From a synthesis of this literature, the need for the model was justified. Key assumptions of the model were identified, and an organic process of development took place, with the three authors engaging in a process of sense-making to theorize the model. Results: The model is "fit for purpose" across multiple countries and includes two components making up the umbrella term "pharmaceutical practice". The first component comprises four conceptual dimensions, which outline the disciplines: social and administrative sciences, community pharmacy, clinical pharmacy and pharmaceutical sciences. The second component of the model describes the "acts of practice": teaching, research and professional advocacy; service and academic enterprise. Conclusions: This model aims to expose issues

  3. Unified models of interactions with gauge-invariant variables

    International Nuclear Information System (INIS)

    Zet, Gheorghe

    2000-01-01

    A model of gauge theory is formulated in terms of gauge-invariant variables over a 4-dimensional space-time. Namely, we define a metric tensor $g_{\mu\nu}$ ($\mu,\nu = 0,1,2,3$) starting from the components $F^a_{\mu\nu}$ and $\tilde F^a_{\mu\nu}$ of the tensor associated to the Yang-Mills fields and of its dual: $g_{\mu\nu} = \frac{1}{3\Delta^{1/3}}\, \varepsilon_{abc}\, F^a_{\mu\alpha}\, \tilde F^{b\,\alpha\beta}\, F^c_{\beta\nu}$. Here $\Delta$ is a scale factor which can be chosen of a convenient form so that the theory may be self-dual or not. The components $g_{\mu\nu}$ are interpreted as new gauge-invariant variables. The model is applied to the case when the gauge group is SU(2). For the space-time we choose two different manifolds: (i) the space-time is $R \times S^3$, where $R$ is the real line and $S^3$ is the three-dimensional sphere; (ii) the space-time is endowed with axial symmetry. We calculate the components $g_{\mu\nu}$ of the new metric for the two cases in terms of SU(2) gauge potentials. Imposing the supplementary condition that the new metric coincides with the initial metric of the space-time, we obtain the field equations (of the first order in derivatives) for the gauge fields. In addition, we determine the scale factor $\Delta$ which is introduced in the definition of $g_{\mu\nu}$ to ensure the property of self-duality for our SU(2) gauge theory, namely $\frac{1}{2\sqrt{g}}\, \varepsilon^{\alpha\beta\sigma\tau} g_{\mu\alpha} g_{\nu\beta} F^a_{\sigma\tau} = F^a_{\mu\nu}$, with $g = \det(g_{\mu\nu})$. In the case (i) we show that the space-time $R \times S^3$ is not compatible with a self-dual SU(2) gauge theory, but in the case (ii) the condition of self-duality is satisfied. The model developed in our work can be considered as a possible way to the unification of general relativity and Yang-Mills theories. This means that the gauge theory can be formulated in close analogy with general relativity, i.e. the Yang-Mills equations are equivalent to Einstein equations with a right-hand side of a simple form. (authors)

  4. Sparse modeling of spatial environmental variables associated with asthma.

    Science.gov (United States)

    Chang, Timothy S; Gangnon, Ronald E; David Page, C; Buckingham, William R; Tandias, Aman; Cowan, Kelly J; Tomasallo, Carrie D; Arndt, Brian G; Hanrahan, Lawrence P; Guilbert, Theresa W

    2015-02-01

    Geographically distributed environmental factors influence the burden of diseases such as asthma. Our objective was to identify sparse environmental variables associated with asthma diagnosis gathered from a large electronic health record (EHR) dataset while controlling for spatial variation. An EHR dataset from the University of Wisconsin's Family Medicine, Internal Medicine and Pediatrics Departments was obtained for 199,220 patients aged 5-50 years over a three-year period. Each patient's home address was geocoded to one of 3456 geographic census block groups. Over one thousand block group variables were obtained from a commercial database. We developed a Sparse Spatial Environmental Analysis (SASEA). Using this method, the environmental variables were first dimensionally reduced with sparse principal component analysis. Logistic thin plate regression spline modeling was then used to identify block group variables associated with asthma from sparse principal components. The addresses of patients from the EHR dataset were distributed throughout the majority of Wisconsin's geography. Logistic thin plate regression spline modeling captured spatial variation of asthma. Four sparse principal components identified via model selection consisted of food at home, dog ownership, household size, and disposable income variables. In rural areas, dog ownership and renter occupied housing units from significant sparse principal components were associated with asthma. Our main contribution is the incorporation of sparsity in spatial modeling. SASEA sequentially added sparse principal components to logistic thin plate regression spline modeling. This method allowed association of geographically distributed environmental factors with asthma using EHR and environmental datasets. SASEA can be applied to other diseases with environmental risk factors. Copyright © 2014 Elsevier Inc. All rights reserved.

  5. Evaluating two model reduction approaches for large scale hedonic models sensitive to omitted variables and multicollinearity

    DEFF Research Database (Denmark)

    Panduro, Toke Emil; Thorsen, Bo Jellesmark

    2014-01-01

    Hedonic models in environmental valuation studies have grown in terms of number of transactions and number of explanatory variables. We focus on the practical challenge of model reduction, when aiming for reliable parsimonious models, sensitive to omitted variable bias and multicollinearity. We...

  6. Defining the key-parameters of insurance product in Islamic insurance

    Directory of Open Access Journals (Sweden)

    Galim Zaribzyanovich Vakhitov

    2015-06-01

    Full Text Available Objective: to define the range of actuarial calculations in Islamic insurance; to study the main differences between traditional and Islamic insurance; and to define what changes in the calculations these differences entail. Methods: mathematical modeling; probabilistic analysis of insurance risks; adaptation of the methods of actuarial mathematics to the principles of Islamic insurance. Results: the mathematical form of the takaful fund models is presented; the distribution is analyzed of a random variable representing the resulting insurance fund, or the insurance company balance, for a particular fixed insurance portfolio. Scientific novelty: calculations of the optimal tariff rate in takaful are presented. Islamic insurance is an innovative area of the insurance industry. Actuarial calculations that meet the Sharia rules are still being developed. The authors set new tasks of actuarial calculations, including the specified changes in the calculation of the optimal tariff rate imposed by the Islamic insurance principles. Practical value: the results obtained can be used in the actuarial calculations of Islamic insurance companies.

  7. Modelling of the metabolism of Zymomonas mobilis growing on a defined medium

    Energy Technology Data Exchange (ETDEWEB)

    Posten, C

    1989-08-07

    A structured model of Zymomonas mobilis is presented using fermentation data from a defined aspartate medium. After some remarks on the structure of the metabolism, the model is derived by considering sub-models, e.g. balance equations, and by identifying the unknown parameters separately for each sub-model. Results include the elemental composition of Zymomonas mobilis, a description of the substrate uptake during substrate limitation, and the growth inhibition during substrate saturation. The results are shown as simulations and are discussed in relation to the inhibitory effect of ethanol on the bacterial cell. (orig.).

  8. Change in intraindividual variability over time as a key metric for defining performance-based cognitive fatigability.

    Science.gov (United States)

    Wang, Chao; Ding, Mingzhou; Kluger, Benzi M

    2014-03-01

    Cognitive fatigability is conventionally quantified as the increase over time in either mean reaction time (RT) or error rate from two or more time periods during sustained performance of a prolonged cognitive task. There is evidence indicating that these mean performance measures may not sufficiently reflect the response characteristics of cognitive fatigue. We hypothesized that changes in intraindividual variability over time would be a more sensitive and ecologically meaningful metric for investigations of fatigability of cognitive performance. To test the hypothesis fifteen young adults were recruited. Trait fatigue perceptions in various domains were assessed with the Multidimensional Fatigue Index (MFI). Behavioral data were then recorded during performance of a three-hour continuous cued Stroop task. Results showed that intraindividual variability, as quantified by the coefficient of variation of RT, increased linearly over the course of three hours and demonstrated a significantly greater effect size than mean RT or accuracy. Change in intraindividual RT variability over time was significantly correlated with relevant subscores of the MFI including reduced activity, reduced motivation and mental fatigue. While change in mean RT over time was also correlated with reduced motivation and mental fatigue, these correlations were significantly smaller than those associated with intraindividual RT variability. RT distribution analysis using an ex-Gaussian model further revealed that change in intraindividual variability over time reflects an increase in the exponential component of variance and may reflect attentional lapses or other breakdowns in cognitive control. These results suggest that intraindividual variability and its change over time provide important metrics for measuring cognitive fatigability and may prove useful for inferring the underlying neuronal mechanisms of both perceptions of fatigue and objective changes in performance. Copyright © 2014
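
    The fatigability metric described above reduces, per participant, to the trend in the coefficient of variation (CV) of RT across time blocks. The sketch below shows the computation only; the synthetic RTs (a normal core plus a growing exponential tail, ex-Gaussian-like) are our assumption, not the study's data.

      import numpy as np

      # Split a long session into blocks, compute CV of RT per block, and take
      # the linear trend of CV over blocks as the fatigability index.
      rng = np.random.default_rng(4)
      n_blocks, per_block = 12, 200
      cv = np.empty(n_blocks)
      for b in range(n_blocks):
          tail = rng.exponential(50 + 15 * b, per_block)   # lapses grow with time
          rt = rng.normal(500, 60, per_block) + tail       # reaction times, ms
          cv[b] = rt.std(ddof=1) / rt.mean()

      trend = np.polyfit(np.arange(n_blocks), cv, 1)[0]    # CV change per block
      print("CV per block:", np.round(cv, 3))
      print("fatigability index (CV slope):", round(trend, 4))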

  9. Variability aware compact model characterization for statistical circuit design optimization

    Science.gov (United States)

    Qiao, Ying; Qian, Kun; Spanos, Costas J.

    2012-03-01

    Variability modeling at the compact transistor model level can enable statistically optimized designs in view of limitations imposed by the fabrication technology. In this work we propose an efficient variability-aware compact model characterization methodology based on the linear propagation of variance. Hierarchical spatial variability patterns of selected compact model parameters are directly calculated from transistor array test structures. This methodology has been implemented and tested using transistor I-V measurements and the EKV-EPFL compact model. Calculation results compare well to full-wafer direct model parameter extractions. Further studies are done on the proper selection of both compact model parameters and electrical measurement metrics used in the method.
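
    The 'linear propagation of variance' named above is the first-order (delta-method) approximation. In our own notation (not the authors'), for an electrical metric m = f(p) of compact-model parameters p with mean \bar{p} and covariance \Sigma_p,

      \operatorname{Var}(m) \;\approx\; \nabla f(\bar{p})^{\top} \, \Sigma_p \, \nabla f(\bar{p}),

    so hierarchical components of \Sigma_p (for instance wafer-, die- and within-die terms estimated from the transistor arrays) map linearly onto variance components of the metric.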

  10. Interannual modes of variability of Southern Hemisphere atmospheric circulation in CMIP3 models

    International Nuclear Information System (INIS)

    Grainger, S; Frederiksen, C S; Zheng, X

    2010-01-01

    The atmospheric circulation acts as a bridge between large-scale sources of climate variability, and climate variability on regional scales. Here a statistical method is applied to monthly mean Southern Hemisphere 500hPa geopotential height to separate the interannual variability of the seasonal mean into intraseasonal and slowly varying (time scales of a season or longer) components. Intraseasonal and slow modes of variability are estimated from realisations of models from the Coupled Model Intercomparison Project Phase 3 (CMIP3) twentieth century coupled climate simulation (20c3m) and are evaluated against those estimated from reanalysis data. The intraseasonal modes of variability are generally well reproduced across all CMIP3 20c3m models for both Southern Hemisphere summer and winter. The slow modes are in general less well reproduced than the intraseasonal modes, and there are larger differences between realisations than for the intraseasonal modes. New diagnostics are proposed to evaluate model variability. It is found that differences between realisations from each model are generally less than inter-model differences. Differences between model-mean diagnostics are found. The results obtained are applicable to assessing the reliability of changes in atmospheric circulation variability in CMIP3 models and for their suitability for further studies of regional climate variability.

  11. A novel methodology improves reservoir characterization models using geologic fuzzy variables

    Energy Technology Data Exchange (ETDEWEB)

    Soto B, Rodolfo [DIGITOIL, Maracaibo (Venezuela); Soto O, David A. [Texas A and M University, College Station, TX (United States)

    2004-07-01

    One of the research projects carried out in the Cusiana field to explain its rapid decline during the last years was to obtain better permeability models. The reservoir of this field has a complex layered system that is not easy to model using conventional methods. The new technique included the development of porosity and permeability maps from cored wells following the same trend of the sand depositions for each facies or layer, according to the sedimentary facies and depositional system models. Then, we used fuzzy logic to reproduce those maps in three dimensions as geologic fuzzy variables. After multivariate statistical and factor analyses, we found independence and a good correlation coefficient between the geologic fuzzy variables and core permeability and porosity. This means the geologic fuzzy variable could explain the fabric, the grain size and the pore geometry of the reservoir rock throughout the field. Finally, we developed a neural network permeability model using porosity, gamma ray and the geologic fuzzy variable as input variables. This model has a cross-correlation coefficient of 0.873 and an average absolute error of 33%, compared with the existing model's correlation coefficient of 0.511 and absolute error greater than 250%. We tested different methodologies, but this new one proved dramatically to be a promising way to obtain better permeability models. The use of the models has had a high impact on the explanation of well performance and workovers, and on reservoir simulation models. (author)

  12. Plasticity models of material variability based on uncertainty quantification techniques

    Energy Technology Data Exchange (ETDEWEB)

    Jones, Reese E. [Sandia National Lab. (SNL-CA), Livermore, CA (United States); Rizzi, Francesco [Sandia National Lab. (SNL-CA), Livermore, CA (United States); Boyce, Brad [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Templeton, Jeremy Alan [Sandia National Lab. (SNL-CA), Livermore, CA (United States); Ostien, Jakob [Sandia National Lab. (SNL-CA), Livermore, CA (United States)

    2017-11-01

    The advent of fabrication techniques like additive manufacturing has focused attention on the considerable variability of material response due to defects and other micro-structural aspects. This variability motivates the development of an enhanced design methodology that incorporates inherent material variability to provide robust predictions of performance. In this work, we develop plasticity models capable of representing the distribution of mechanical responses observed in experiments, using traditional plasticity models of the mean response and recently developed uncertainty quantification (UQ) techniques. Lastly, we demonstrate that the new method provides predictive realizations that are superior to more traditional ones, and show how these UQ techniques can be used in model selection and in assessing the quality of calibrated physical parameters.

  13. Efficient Business Service Consumption by Customization with Variability Modelling

    Directory of Open Access Journals (Sweden)

    Michael Stollberg

    2010-07-01

    Full Text Available The establishment of service orientation in industry determines the need for efficient engineering technologies that properly support the whole life cycle of service provision and consumption. A central challenge is adequate support for the efficient employment of complex services in their individual application context. This becomes particularly important for large-scale enterprise technologies where generic services are designed for reuse in several business scenarios. In this article we complement our work regarding Service Variability Modelling presented in a previous publication. There we presented an approach for the customization of services for individual application contexts by creating simplified variants, based on model-driven variability management. The present article describes our revised service variability metamodel, new features of the variability tools and an applicability study, which reveals that substantial improvements in the efficiency of standard business service consumption can be achieved under both usability and economic aspects.

  14. Heterogeneity in chronic fatigue syndrome - empirically defined subgroups from the PACE trial.

    Science.gov (United States)

    Williams, T E; Chalder, T; Sharpe, M; White, P D

    2017-06-01

    Chronic fatigue syndrome is likely to be a heterogeneous condition. Previous studies have empirically defined subgroups using combinations of clinical and biological variables. We aimed to explore the heterogeneity of chronic fatigue syndrome. We used baseline data from the PACE trial, which included 640 participants with chronic fatigue syndrome. Variable reduction, using a combination of clinical knowledge and principal component analyses, produced a final dataset of 26 variables for 541 patients. Latent class analysis was then used to empirically define subgroups. The most statistically significant and clinically recognizable model comprised five subgroups. The largest, 'core' subgroup (33% of participants), had relatively low scores across all domains and good self-efficacy. A further three subgroups were defined by: the presence of mood disorders (21%); the presence of features of other functional somatic syndromes (such as fibromyalgia or irritable bowel syndrome) (21%); or by many symptoms - a group which combined features of both of the above (14%). The smallest 'avoidant-inactive' subgroup was characterized by physical inactivity, belief that symptoms were entirely physical in nature, and fear that they indicated harm (11%). Differences in the severity of fatigue and disability provided some discriminative validation of the subgroups. In addition to providing further evidence for the heterogeneity of chronic fatigue syndrome, the subgroups identified may aid future research into the important aetiological factors of specific subtypes of chronic fatigue syndrome and the development of more personalized treatment approaches.

  15. Evaluation of standardized and applied variables in predicting treatment outcomes of polytrauma patients.

    Science.gov (United States)

    Aksamija, Goran; Mulabdic, Adi; Rasic, Ismar; Muhovic, Samir; Gavric, Igor

    2011-01-01

    Polytrauma is defined as an injury affecting at least two different organ systems or body regions, with at least one injury being life-threatening. Given the multilevel model of care for polytrauma patients within KCUS, weaknesses in the management of this category of patients are inevitable. The aims were to determine the dynamics of existing procedures in the treatment of polytrauma patients on admission to KCUS and, based on statistical analysis of the applied variables, to define the factors that influence the final outcome of treatment and their mutual relationships, which may help eliminate flaws in the approach to the problem. The study was based on 263 polytrauma patients. Parametric and non-parametric statistical methods were used. Basic statistics were calculated; based on the calculated parameters, multicollinearity analysis, image analysis, discriminant analysis and multifactorial analysis were used to achieve the research objectives. From the universe of variables we selected a sample of n = 25 variables, of which the first two are modular while the others (n = 23) belong to the common measurement space, defined in this paper as the system of variables for methods, procedures and assessments of polytrauma patients. After the multicollinearity analysis, since image analysis gave reliable measurement results, we proceeded to the analysis of eigenvalues, that is, to defining the factors that provide information on how the existing model solves the problem and on its correlation with treatment outcome. The study singled out the essential factors that determine the current organizational model of care, which may affect treatment and improve the outcome of polytrauma patients. The analysis showed the maximum correlative relationships between these practices and contributed to the development of guidelines defined by the isolated factors.

  16. Defining enthesitis in spondyloarthritis by ultrasound

    DEFF Research Database (Denmark)

    Terslev, Lene; Naredo, E; Iagnocco, A

    2014-01-01

    Objective: To standardize ultrasound (US) in enthesitis. Methods: An initial Delphi exercise was undertaken to define US-detected enthesitis and its core components. These definitions were subsequently tested on static images taken from spondyloarthritis (SpA) patients in order to evaluate... elementary component. On static images the intra-observer reliability showed a high degree of variability for the detection of elementary lesions, with kappa coefficients ranging from 0.14 to 1. The inter-observer kappa value was variable, with the lowest kappa for enthesophytes (0.24) and the best for Doppler... activity at the enthesis (0.63). Conclusion: This is the first consensus-based definition of US enthesitis and its elementary components, and the first step performed to ensure a higher degree of homogeneity and comparability of results between studies and in daily clinical work.

  17. Generalized Network Psychometrics : Combining Network and Latent Variable Models

    NARCIS (Netherlands)

    Epskamp, S.; Rhemtulla, M.; Borsboom, D.

    2017-01-01

    We introduce the network model as a formal psychometric model, conceptualizing the covariance between psychometric indicators as resulting from pairwise interactions between observable variables in a network structure. This contrasts with standard psychometric models, in which the covariance between

  18. Tropospheric Ozone Assessment Report: Assessment of global-scale model performance for global and regional ozone distributions, variability, and trends

    Directory of Open Access Journals (Sweden)

    P. J. Young

    2018-01-01

    Full Text Available The goal of the Tropospheric Ozone Assessment Report (TOAR is to provide the research community with an up-to-date scientific assessment of tropospheric ozone, from the surface to the tropopause. While a suite of observations provides significant information on the spatial and temporal distribution of tropospheric ozone, observational gaps make it necessary to use global atmospheric chemistry models to synthesize our understanding of the processes and variables that control tropospheric ozone abundance and its variability. Models facilitate the interpretation of the observations and allow us to make projections of future tropospheric ozone and trace gas distributions for different anthropogenic or natural perturbations. This paper assesses the skill of current-generation global atmospheric chemistry models in simulating the observed present-day tropospheric ozone distribution, variability, and trends. Drawing upon the results of recent international multi-model intercomparisons and using a range of model evaluation techniques, we demonstrate that global chemistry models are broadly skillful in capturing the spatio-temporal variations of tropospheric ozone over the seasonal cycle, for extreme pollution episodes, and changes over interannual to decadal periods. However, models are consistently biased high in the northern hemisphere and biased low in the southern hemisphere, throughout the depth of the troposphere, and are unable to replicate particular metrics that define the longer term trends in tropospheric ozone as derived from some background sites. When the models compare unfavorably against observations, we discuss the potential causes of model biases and propose directions for future developments, including improved evaluations that may be able to better diagnose the root cause of the model-observation disparity. Overall, model results should be approached critically, including determining whether the model performance is acceptable for

  19. Internal variability in a regional climate model over West Africa

    Energy Technology Data Exchange (ETDEWEB)

    Vanvyve, Emilie; Ypersele, Jean-Pascal van [Universite catholique de Louvain, Institut d' astronomie et de geophysique Georges Lemaitre, Louvain-la-Neuve (Belgium); Hall, Nicholas [Laboratoire d' Etudes en Geophysique et Oceanographie Spatiales/Centre National d' Etudes Spatiales, Toulouse Cedex 9 (France); Messager, Christophe [University of Leeds, Institute for Atmospheric Science, Environment, School of Earth and Environment, Leeds (United Kingdom); Leroux, Stephanie [Universite Joseph Fourier, Laboratoire d' etude des Transferts en Hydrologie et Environnement, BP53, Grenoble Cedex 9 (France)

    2008-02-15

    Sensitivity studies with regional climate models are often performed on the basis of a few simulations for which the difference is analysed and the statistical significance is often taken for granted. In this study we present some simple measures of the confidence limits for these types of experiments by analysing the internal variability of a regional climate model run over West Africa. Two 1-year long simulations, differing only in their initial conditions, are compared. The difference between the two runs gives a measure of the internal variability of the model and an indication of which timescales are reliable for analysis. The results are analysed for a range of timescales and spatial scales, and quantitative measures of the confidence limits for regional model simulations are diagnosed for a selection of study areas for rainfall, low level temperature and wind. As the averaging period or spatial scale is increased, the signal due to internal variability gets smaller and confidence in the simulations increases. This occurs more rapidly for variations in precipitation, which appear essentially random, than for dynamical variables, which show some organisation on larger scales. (orig.)
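
    As a toy version of the diagnostic described above, the sketch below compares two synthetic "runs" that differ only in their random state and shows how the internal-variability signal (the RMS difference between them) shrinks as the averaging period grows. All numbers and the daily-rainfall proxy are invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)
run1 = rng.standard_normal(360) + 5.0   # daily rainfall proxy, run 1
run2 = rng.standard_normal(360) + 5.0   # same model, perturbed initial state

for window in (1, 10, 30, 90):          # averaging periods in days
    n = 360 // window
    m1 = run1[:n * window].reshape(n, window).mean(axis=1)
    m2 = run2[:n * window].reshape(n, window).mean(axis=1)
    rms = np.sqrt(np.mean((m1 - m2) ** 2))
    print(f"{window:3d}-day means: internal-variability RMS = {rms:.3f}")
```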

  20. Impulsive synchronization and parameter mismatch of the three-variable autocatalator model

    International Nuclear Information System (INIS)

    Li, Yang; Liao, Xiaofeng; Li, Chuandong; Huang, Tingwen; Yang, Degang

    2007-01-01

    The synchronization problems of the three-variable autocatalator model via impulsive control approach are investigated; several theorems on the stability of impulsive control systems are also investigated. These theorems are then used to find the conditions under which the three-variable autocatalator model can be asymptotically controlled to the equilibrium point. This Letter derives some sufficient conditions for the stabilization and synchronization of a three-variable autocatalator model via impulsive control with varying impulsive intervals. Furthermore, we address the chaos quasi-synchronization in the presence of single-parameter mismatch. To illustrate the effectiveness of the new scheme, several numerical examples are given

  1. Variable cycle control model for intersection based on multi-source information

    Science.gov (United States)

    Sun, Zhi-Yuan; Li, Yue; Qu, Wen-Cong; Chen, Yan-Yan

    2018-05-01

    In order to improve the efficiency of traffic control systems in the era of big data, a new variable cycle control model based on multi-source information is presented for intersections in this paper. Firstly, with consideration of multi-source information, a unified framework based on a cyber-physical system is proposed. Secondly, taking into account the variable cell length, the hysteresis phenomenon of traffic flow and the characteristics of lane groups, a Lane group-based Cell Transmission Model is established to describe the physical properties of traffic flow under different traffic signal control schemes. Thirdly, the variable cycle control problem is abstracted into a bi-level programming model. The upper level model is put forward for cycle length optimization considering traffic capacity and delay. The lower level model is a dynamic signal control decision model based on fairness analysis. Then, a Hybrid Intelligent Optimization Algorithm is developed to solve the proposed model. Finally, a case study shows the efficiency and applicability of the proposed model and algorithm.

  2. Hidden Markov latent variable models with multivariate longitudinal data.

    Science.gov (United States)

    Song, Xinyuan; Xia, Yemao; Zhu, Hongtu

    2017-03-01

    Cocaine addiction is chronic and persistent, and has become a major social and health problem in many countries. Existing studies have shown that cocaine addicts often undergo episodic periods of addiction to, moderate dependence on, or swearing off cocaine. Given its reversible feature, cocaine use can be formulated as a stochastic process that transits from one state to another, while the impacts of various factors, such as treatment received and individuals' psychological problems on cocaine use, may vary across states. This article develops a hidden Markov latent variable model to study multivariate longitudinal data concerning cocaine use from a California Civil Addict Program. The proposed model generalizes conventional latent variable models to allow bidirectional transition between cocaine-addiction states and conventional hidden Markov models to allow latent variables and their dynamic interrelationship. We develop a maximum-likelihood approach, along with a Monte Carlo expectation conditional maximization (MCECM) algorithm, to conduct parameter estimation. The asymptotic properties of the parameter estimates and statistics for testing the heterogeneity of model parameters are investigated. The finite sample performance of the proposed methodology is demonstrated by simulation studies. The application to cocaine use study provides insights into the prevention of cocaine use. © 2016, The International Biometric Society.
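
    The likelihood machinery underneath any hidden Markov model is the forward recursion, sketched below for a discrete toy version (two latent states, one binary indicator). The paper's model adds latent variables and an MCECM fit on top of this, which the sketch does not attempt; state names and all probabilities are invented:

```python
import numpy as np

def forward_loglik(pi, A, B, obs):
    """Log-likelihood of an observation sequence under a discrete HMM.

    pi: initial state probabilities (S,)
    A:  state transition matrix (S, S), rows sum to 1
    B:  emission matrix (S, O)
    obs: sequence of observed symbol indices
    """
    alpha = pi * B[:, obs[0]]
    loglik = 0.0
    for t in obs[1:]:
        c = alpha.sum()                  # scale to avoid underflow
        loglik += np.log(c)
        alpha = (alpha / c) @ A * B[:, t]
    return loglik + np.log(alpha.sum())

# Two hidden states (say 'addicted', 'abstinent'), binary indicator of use.
pi = np.array([0.5, 0.5])
A = np.array([[0.9, 0.1],                # bidirectional state transitions
              [0.2, 0.8]])
B = np.array([[0.8, 0.2],
              [0.1, 0.9]])
print(forward_loglik(pi, A, B, [0, 0, 1, 1, 0]))
```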

  3. A variable resolution nonhydrostatic global atmospheric semi-implicit semi-Lagrangian model

    Science.gov (United States)

    Pouliot, George Antoine

    2000-10-01

    The objective of this project is to develop a variable-resolution finite difference adiabatic global nonhydrostatic semi-implicit semi-Lagrangian (SISL) model based on the fully compressible nonhydrostatic atmospheric equations. To achieve this goal, a three-dimensional variable resolution dynamical core was developed and tested. The main characteristics of the dynamical core can be summarized as follows: Spherical coordinates were used in a global domain. A hydrostatic/nonhydrostatic switch was incorporated into the dynamical equations to use the fully compressible atmospheric equations. A generalized horizontal variable resolution grid was developed and incorporated into the model. For a variable resolution grid, in contrast to a uniform resolution grid, the order of accuracy of finite difference approximations is formally lost but remains close to the order of accuracy associated with the uniform resolution grid provided the grid stretching is not too significant. The SISL numerical scheme was implemented for the fully compressible set of equations. In addition, the generalized minimum residual (GMRES) method with restart and preconditioner was used to solve the three-dimensional elliptic equation derived from the discretized system of equations. The three-dimensional momentum equation was integrated in vector-form to incorporate the metric terms in the calculations of the trajectories. Using global re-analysis data for a specific test case, the model was compared to similar SISL models previously developed. Reasonable agreement between the model and the other independently developed models was obtained. The Held-Suarez test for dynamical cores was used for a long integration and the model was successfully integrated for up to 1200 days. Idealized topography was used to test the variable resolution component of the model. Nonhydrostatic effects were simulated at grid spacings of 400 meters with idealized topography and uniform flow. Using a high

  4. Variable Fidelity Aeroelastic Toolkit - Structural Model, Phase I

    Data.gov (United States)

    National Aeronautics and Space Administration — The proposed innovation is a methodology to incorporate variable fidelity structural models into steady and unsteady aeroelastic and aeroservoelastic analyses in...

  5. The Properties of Model Selection when Retaining Theory Variables

    DEFF Research Database (Denmark)

    Hendry, David F.; Johansen, Søren

    Economic theories are often fitted directly to data to avoid possible model selection biases. We show that embedding a theory model that specifies the correct set of m relevant exogenous variables, x_t, within the larger set of m+k candidate variables, (x_t, w_t), then selection over the second...... set by their statistical significance can be undertaken without affecting the estimator distribution of the theory parameters. This strategy returns the theory-parameter estimates when the theory is correct, yet protects against the theory being under-specified because some w_t are relevant....
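
    A minimal numerical illustration of the strategy: force the theory variables to stay in the regression and apply a significance rule only to the candidate set. This is a single-pass sketch with invented data; real implementations (for example general-to-specific algorithms) search multiple reduction paths:

```python
import numpy as np

def select_retaining_theory(y, X_theory, W_candidates, t_crit=1.96):
    """Select over candidates W while always retaining theory variables X."""
    Z = np.column_stack([X_theory, W_candidates])
    n, k = Z.shape
    beta = np.linalg.lstsq(Z, y, rcond=None)[0]
    resid = y - Z @ beta
    sigma2 = resid @ resid / (n - k)
    se = np.sqrt(sigma2 * np.diag(np.linalg.pinv(Z.T @ Z)))
    t = beta / se
    m = X_theory.shape[1]
    keep_w = np.abs(t[m:]) > t_crit      # significance rule on candidates only
    return beta[:m], keep_w

rng = np.random.default_rng(2)
x = rng.standard_normal((200, 2))        # relevant theory variables
w = rng.standard_normal((200, 3))        # irrelevant candidates
y = x @ np.array([1.0, -0.5]) + rng.standard_normal(200)
theta, kept = select_retaining_theory(y, x, w)
print(np.round(theta, 2), kept)          # theory estimates unaffected by selection
```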

  6. Practical methods to define scattering coefficients in a room acoustics computer model

    DEFF Research Database (Denmark)

    Zeng, Xiangyang; Christensen, Claus Lynge; Rindel, Jens Holger

    2006-01-01

    of obtaining the data becomes quite time consuming, thus increasing the cost of design. In this paper, practical methods to define scattering coefficients are presented, based on an approach that models surface scattering together with scattering caused by the limited size of surfaces and by edge diffraction...

  7. A new general dynamic model predicting radionuclide concentrations and fluxes in coastal areas from readily accessible driving variables

    International Nuclear Information System (INIS)

    Haakanson, Lars

    2004-01-01

    This paper presents a general, process-based dynamic model for coastal areas for radionuclides (metals, organics and nutrients) from both single pulse fallout and continuous deposition. The model gives radionuclide concentrations in water (total, dissolved and particulate phases and concentrations in sediments and fish) for entire defined coastal areas. The model gives monthly variations. It accounts for inflow from tributaries, direct fallout to the coastal area, internal fluxes (sedimentation, resuspension, diffusion, burial, mixing and biouptake and retention in fish) and fluxes to and from the sea outside the defined coastal area and/or adjacent coastal areas. The fluxes of water and substances between the sea and the coastal area are differentiated into three categories of coast types: (i) areas where the water exchange is regulated by tidal effects; (ii) open coastal areas where the water exchange is regulated by coastal currents; and (iii) semi-enclosed archipelago coasts. The coastal model gives the fluxes to and from the following four abiotic compartments: surface water, deep water, ET areas (i.e., areas where fine sediment erosion and transport processes dominate the bottom dynamic conditions and resuspension appears) and A-areas (i.e., areas of continuous fine sediment accumulation). Criteria to define the boundaries for the given coastal area towards the sea, and to define whether a coastal area is open or closed are given in operational terms. The model is simple to apply since all driving variables may be readily accessed from maps and standard monitoring programs. The driving variables are: latitude, catchment area, mean annual precipitation, fallout and month of fallout and parameters expressing coastal size and form as determined from, e.g., digitized bathymetric maps using a GIS program. Selected results: the predictions of radionuclide concentrations in water and fish largely depend on two factors, the concentration in the sea outside the given

  8. A sectionwise defined model for the material description of 100Cr6 in the thixotropic state

    Science.gov (United States)

    Behrens, B.-A.; Chugreev, A.; Hootak, M.

    2018-05-01

    A sectionwise defined material model has been developed for the numerical description of thixoforming processes. It consists of two sections. The first one describes the material behaviour below the solidus temperature and comprises an approach from structure mechanics, whereas the second section model describes the thixotropic behaviour above the solidus temperature based on the Ostwald-de Waele power law. The material model has been implemented in a commercial FE software Simufact Forming by means of user-defined subroutines. Numerical and experimental investigations of special upsetting tests have been designed and carried out with Armco iron-coated specimens. Finally, the model parameters were fitted by reverse engineering.
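
    The sectionwise idea can be written as a temperature switch between two constitutive branches. The sketch below is schematic only: the solidus temperature, the hardening law and the power-law constants K and n are placeholders, not the fitted 100Cr6 parameters:

```python
import numpy as np

T_SOLIDUS = 1330.0  # deg C; illustrative value, not the paper's calibration

def flow_resistance(T, strain_rate, strain):
    """Sectionwise material description: a thermo-viscoplastic flow-stress
    branch below the solidus and an Ostwald-de-Waele (power-law) apparent
    viscosity above it. All constants are placeholders."""
    if T < T_SOLIDUS:
        # structure-mechanics branch: a simple hardening/softening law
        return 150.0 * strain**0.1 * strain_rate**0.05 * np.exp(-0.002 * T)
    # thixotropic branch: eta = K * gamma_dot**(n - 1), shear thinning for n < 1
    K, n = 2.0e4, 0.3
    return K * strain_rate**(n - 1.0)

for T in (1200.0, 1400.0):
    print(T, flow_resistance(T, strain_rate=10.0, strain=0.2))
```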

  9. Statistical modeling methods to analyze the impacts of multiunit process variability on critical quality attributes of Chinese herbal medicine tablets.

    Science.gov (United States)

    Sun, Fei; Xu, Bing; Zhang, Yi; Dai, Shengyun; Yang, Chan; Cui, Xianglong; Shi, Xinyuan; Qiao, Yanjiang

    2016-01-01

    The quality of Chinese herbal medicine tablets suffers from batch-to-batch variability due to a lack of manufacturing process understanding. In this paper, the Panax notoginseng saponins (PNS) immediate release tablet was taken as the research subject. By defining the dissolution of five active pharmaceutical ingredients and the tablet tensile strength as critical quality attributes (CQAs), influences of both the manipulated process parameters introduced by an orthogonal experiment design and the intermediate granules' properties on the CQAs were fully investigated by different chemometric methods, such as the partial least squares, the orthogonal projection to latent structures, and the multiblock partial least squares (MBPLS). By analyzing the loadings plots and variable importance in the projection indexes, the granule particle sizes and the minimal punch tip separation distance in tableting were identified as critical process parameters. Additionally, the MBPLS model suggested that the lubrication time in the final blending was also important in predicting tablet quality attributes. From the calculated block importance in the projection indexes, the tableting unit was confirmed to be the critical process unit of the manufacturing line. The results demonstrated that the combinatorial use of different multivariate modeling methods could help in understanding the complex process relationships as a whole. The output of this study can then be used to define a control strategy to improve the quality of the PNS immediate release tablet.

  11. Modelling cointegration in the vector autoregressive model

    DEFF Research Database (Denmark)

    Johansen, Søren

    2000-01-01

    A survey is given of some results obtained for the cointegrated VAR. The Granger representation theorem is discussed and the notions of cointegration and common trends are defined. The statistical model for cointegrated I(1) variables is defined, and it is shown how hypotheses on the cointegrating relations...
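
    In practice, the rank of the cointegrating space is commonly tested with the Johansen procedure, which statsmodels exposes directly. A small sketch on synthetic series sharing one stochastic trend (all numbers invented):

```python
import numpy as np
from statsmodels.tsa.vector_ar.vecm import coint_johansen

rng = np.random.default_rng(3)
n = 500
trend = np.cumsum(rng.standard_normal(n))     # shared stochastic trend
y1 = trend + rng.standard_normal(n)
y2 = 0.5 * trend + rng.standard_normal(n)     # cointegrated with y1
data = np.column_stack([y1, y2])

res = coint_johansen(data, det_order=0, k_ar_diff=1)
print("trace statistics:", res.lr1)           # compare against critical values
print("95% critical values:", res.cvt[:, 1])
```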

  12. Approaches for modeling within subject variability in pharmacometric count data analysis: dynamic inter-occasion variability and stochastic differential equations.

    Science.gov (United States)

    Deng, Chenhui; Plan, Elodie L; Karlsson, Mats O

    2016-06-01

    Parameter variation in pharmacometric analysis studies can be characterized as within subject parameter variability (WSV) in pharmacometric models. WSV has previously been successfully modeled using inter-occasion variability (IOV), but also stochastic differential equations (SDEs). In this study, two approaches, dynamic inter-occasion variability (dIOV) and adapted stochastic differential equations, were proposed to investigate WSV in pharmacometric count data analysis. These approaches were applied to published count models for seizure counts and Likert pain scores. Both approaches improved the model fits significantly. In addition, stochastic simulation and estimation were used to explore further the capability of the two approaches to diagnose and improve models where existing WSV is not recognized. The results of simulations confirmed the gain in introducing WSV as dIOV and SDEs when parameters vary randomly over time. Further, the approaches were also informative as diagnostics of model misspecification, when parameters changed systematically over time but this was not recognized in the structural model. The proposed approaches in this study offer strategies to characterize WSV and are not restricted to count data.

  13. Influences of variables on ship collision probability in a Bayesian belief network model

    International Nuclear Information System (INIS)

    Hänninen, Maria; Kujala, Pentti

    2012-01-01

    The influences of the variables in a Bayesian belief network model for estimating the role of human factors on ship collision probability in the Gulf of Finland are studied for discovering the variables with the largest influences and for examining the validity of the network. The change in the so-called causation probability is examined while observing each state of the network variables and by utilizing sensitivity and mutual information analyses. Changing course in an encounter situation is the most influential variable in the model, followed by variables such as the Officer of the Watch's action, situation assessment, danger detection, personal condition and incapacitation. The least influential variables are the other distractions on bridge, the bridge view, maintenance routines and the officer's fatigue. In general, the methods are found to agree on the order of the model variables although some disagreements arise due to slightly dissimilar approaches to the concept of variable influence. The relative values and the ranking of variables based on the values are discovered to be more valuable than the actual numerical values themselves. Although the most influential variables seem to be plausible, there are some discrepancies between the indicated influences in the model and literature. Thus, improvements are suggested to the network.
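
    Mutual information, one of the two analyses mentioned, can be computed directly from a discrete joint distribution and used to rank variables by influence. The joint tables below are invented toys, not the Gulf of Finland network's values:

```python
import numpy as np

def mutual_information(joint):
    """Mutual information (in bits) from a discrete joint probability table."""
    px = joint.sum(axis=1, keepdims=True)
    py = joint.sum(axis=0, keepdims=True)
    nz = joint > 0
    return float((joint[nz] * np.log2(joint[nz] / (px @ py)[nz])).sum())

# Toy joint tables P(variable state, collision) -- illustrative numbers only.
course_change = np.array([[0.40, 0.02],
                          [0.30, 0.28]])
bridge_view   = np.array([[0.35, 0.14],
                          [0.35, 0.16]])
for name, tab in [("changing course", course_change),
                  ("bridge view", bridge_view)]:
    print(f"{name}: MI = {mutual_information(tab):.3f} bits")
```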

  14. Uncertainty and variability in computational and mathematical models of cardiac physiology.

    Science.gov (United States)

    Mirams, Gary R; Pathmanathan, Pras; Gray, Richard A; Challenor, Peter; Clayton, Richard H

    2016-12-01

    Mathematical and computational models of cardiac physiology have been an integral component of cardiac electrophysiology since its inception, and are collectively known as the Cardiac Physiome. We identify and classify the numerous sources of variability and uncertainty in model formulation, parameters and other inputs that arise from both natural variation in experimental data and lack of knowledge. The impact of uncertainty on the outputs of Cardiac Physiome models is not well understood, and this limits their utility as clinical tools. We argue that incorporating variability and uncertainty should be a high priority for the future of the Cardiac Physiome. We suggest investigating the adoption of approaches developed in other areas of science and engineering while recognising unique challenges for the Cardiac Physiome; it is likely that novel methods will be necessary that require engagement with the mathematics and statistics community. The Cardiac Physiome effort is one of the most mature and successful applications of mathematical and computational modelling for describing and advancing the understanding of physiology. After five decades of development, physiological cardiac models are poised to realise the promise of translational research via clinical applications such as drug development and patient-specific approaches as well as ablation, cardiac resynchronisation and contractility modulation therapies. For models to be included as a vital component of the decision process in safety-critical applications, rigorous assessment of model credibility will be required. This White Paper describes one aspect of this process by identifying and classifying sources of variability and uncertainty in models as well as their implications for the application and development of cardiac models. We stress the need to understand and quantify the sources of variability and uncertainty in model inputs, and the impact of model structure and complexity and their consequences for

  15. Proliferation Risk Characterization Model Prototype Model - User and Programmer Guidelines

    Energy Technology Data Exchange (ETDEWEB)

    Dukelow, J.S.; Whitford, D.

    1998-12-01

    A model for the estimation of the risk of diversion of weapons-capable materials was developed. It represents both the threat of diversion and site vulnerability as a product of a small number of variables (two to eight), each of which can take on a small number (two to four) of qualitatively defined (but quantitatively implemented) values. The values of the overall threat and vulnerability variables are then converted to threat and vulnerability categories. The threat and vulnerability categories are used to define the likelihood of diversion, also defined categorically. The evaluator supplies an estimate of the consequences of a diversion, defined categorically, but with the categories based on the IAEA Attractiveness levels. Likelihood and Consequences categories are used to define the Risk, also defined categorically. The threat, vulnerability, and consequences input provided by the evaluator contains a representation of his/her uncertainty in each variable assignment which is propagated all the way through to the calculation of the Risk categories. [Appendix G available on diskette only.]
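
    The categorical pipeline described (qualitative values combined as a product, thresholded into categories, then a likelihood-by-consequences lookup) can be sketched as follows. Every boundary, score and table entry here is an invented placeholder, not the report's calibration:

```python
import math

VALUE = {"low": 1, "medium": 2, "high": 3}   # qualitative -> quantitative scores

def category(product, bounds=(2, 9)):
    """Convert a product of variable scores into a category 0, 1 or 2."""
    return sum(product > b for b in bounds)

def diversion_risk(threat_vars, vuln_vars, consequences):
    """Combine threat, vulnerability and consequence categories into risk."""
    threat = category(math.prod(VALUE[v] for v in threat_vars))
    vuln = category(math.prod(VALUE[v] for v in vuln_vars))
    likelihood = min(threat, vuln)           # categorical combination rule
    table = [[0, 0, 1],                      # likelihood x consequences lookup
             [0, 1, 2],
             [1, 2, 2]]
    return ["low", "medium", "high"][table[likelihood][consequences]]

print(diversion_risk(["high", "medium"], ["high", "high"], consequences=2))
```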

  16. Natural climate variability in a coupled model

    International Nuclear Information System (INIS)

    Zebiak, S.E.; Cane, M.A.

    1990-01-01

    Multi-century simulations with a simplified coupled ocean-atmosphere model are described. These simulations reveal an impressive range of variability on decadal and longer time scales, in addition to the dominant interannual El Niño/Southern Oscillation signal that the model was originally designed to simulate. Based on a very large sample of century-long simulations, it is nonetheless possible to identify distinct model parameter sensitivities that are described here in terms of selected indices. Preliminary experiments motivated by general circulation model results for increasing greenhouse gases suggest a definite sensitivity to model global warming. While these results are not definitive, they strongly suggest that coupled air-sea dynamics figure prominently in global change and must be included in models for reliable predictions.

  17. Statistical methodology for discrete fracture model - including fracture size, orientation uncertainty together with intensity uncertainty and variability

    International Nuclear Information System (INIS)

    Darcel, C.; Davy, P.; Le Goc, R.; Dreuzy, J.R. de; Bour, O.

    2009-11-01

    the other, addresses the issue of the nature of the transition. We develop a new 'mechanistic' model that could help in modeling why and where this transition can occur. The transition between both regimes would occur for a fracture length of 1-10 m and even at a smaller scale for the few outcrops that follow the self-similar density model. A consequence for the disposal issue is that the model that is likely to apply in the 'blind' scale window between 10-100 m is the self-similar model as it is defined for large-scale lineaments. The self-similar model, as it is measured for some outcrops and most lineament maps, is definitely worth being investigated as a reference for scales above 1-10 m. In the rest of the report, we develop a methodology for incorporating uncertainty and variability into the DFN modeling. Fracturing properties arise from complex processes which produce an intrinsic variability; characterizing this variability as an admissible variation of model parameter or as the division of the site into subdomains with distinct DFN models is a critical point of the modeling effort. Moreover, the DFN model encompasses a part of uncertainty, due to data inherent uncertainties and sampling limits. Both effects must be quantified and incorporated into the DFN site model definition process. In that context, all available borehole data including recording of fracture intercept positions, pole orientation and relative uncertainties are used as the basis for the methodological development and further site model assessment. An elementary dataset contains a set of discrete fracture intercepts from which a parent orientation/density distribution can be computed. The elementary bricks of the site, from which these initial parent density distributions are computed, rely on the former Single Hole Interpretation division of the boreholes into sections whose local boundaries are expected to reflect - locally - geology and fracturing properties main characteristics. From that
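
    For the self-similar regime discussed above, fracture lengths follow a power-law density, which is straightforward to sample by inverse transform. The exponent and lower cutoff below are generic illustrations, not site parameters:

```python
import numpy as np

def sample_fracture_lengths(n, l_min, a, rng):
    """Draw lengths from a self-similar (power-law) density n(l) ~ l**-a
    via inverse-transform sampling; l_min and a are illustrative DFN
    parameters, not site-specific values."""
    u = rng.random(n)
    return l_min * (1.0 - u) ** (-1.0 / (a - 1.0))

rng = np.random.default_rng(4)
lengths = sample_fracture_lengths(10_000, l_min=1.0, a=3.0, rng=rng)
print(f"min {lengths.min():.2f} m, median {np.median(lengths):.2f} m, "
      f"max {lengths.max():.1f} m")
```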

  18. Defining pharmacy and its practice: a conceptual model for an international audience.

    Science.gov (United States)

    Scahill, S L; Atif, M; Babar, Z U

    2017-01-01

    There is much fragmentation and little consensus in the use of descriptors for the different disciplines that make up the pharmacy sector. Globalization, reprofessionalization and the influx of other disciplines means there is a requirement for a greater degree of standardization. This has not been well addressed in the pharmacy practice research and education literature. To identify and define the various subdisciplines of the pharmacy sector and integrate them into an internationally relevant conceptual model based on narrative synthesis of the literature. A literature review was undertaken to understand the fragmentation in dialogue surrounding definitions relating to concepts and practices in the context of the pharmacy sector. From a synthesis of this literature, the need for this model was justified. Key assumptions of the model were identified, and an organic process of development took place with the three authors engaging in a process of sense-making to theorize the model. The model is "fit for purpose" across multiple countries and includes two components making up the umbrella term "pharmaceutical practice". The first component is the four conceptual dimensions, which outline the disciplines including social and administrative sciences, community pharmacy, clinical pharmacy and pharmaceutical sciences. The second component of the model describes the "acts of practice": teaching, research and professional advocacy; service and academic enterprise. This model aims to expose issues relating to defining pharmacy and its practice and to create dialogue. No model is perfect, but there are implications for what is posited in the areas of policy, education and practice and future research. The main point is the need for increased clarity, or at least beginning the discussion to increase the clarity of definition and consistency of meaning in-and-across the pharmacy sector locally, nationally and internationally.

  19. AMOC decadal variability in Earth system models: Mechanisms and climate impacts

    Energy Technology Data Exchange (ETDEWEB)

    Fedorov, Alexey [Yale Univ., New Haven, CT (United States)

    2017-09-06

    This is the final report for the project titled "AMOC decadal variability in Earth system models: Mechanisms and climate impacts". The central goal of this one-year research project was to understand the mechanisms of decadal and multi-decadal variability of the Atlantic Meridional Overturning Circulation (AMOC) within a hierarchy of climate models ranging from realistic ocean GCMs to Earth system models. The AMOC is a key element of ocean circulation responsible for oceanic transport of heat from low to high latitudes and controlling, to a large extent, climate variations in the North Atlantic. The questions of the AMOC stability, variability and predictability, directly relevant to the questions of climate predictability, were at the center of the research work.

  20. Modeling temporal and spatial variability of traffic-related air pollution: Hourly land use regression models for black carbon

    Science.gov (United States)

    Dons, Evi; Van Poppel, Martine; Kochan, Bruno; Wets, Geert; Int Panis, Luc

    2013-08-01

    Land use regression (LUR) modeling is a statistical technique used to determine exposure to air pollutants in epidemiological studies. Time-activity diaries can be combined with LUR models, enabling detailed exposure estimation and limiting exposure misclassification, both in shorter and longer time lags. In this study, the traffic-related air pollutant black carbon was measured with μ-aethalometers on a 5-min time base at 63 locations in Flanders, Belgium. The measurements show that hourly concentrations vary between different locations, but also over the day. Furthermore, the diurnal pattern is different for street and background locations. This suggests that annual LUR models are not sufficient to capture all the variation. Hourly LUR models for black carbon are developed using different strategies: by means of dummy variables, with dynamic dependent variables and/or with dynamic and static independent variables. The LUR model with 48 dummies (weekday hours and weekend hours) does not perform as well as the annual model (explained variance of 0.44 compared to 0.77 in the annual model). The dataset with hourly concentrations of black carbon can be used to recalibrate the annual model, resulting in many of the original explanatory variables losing their statistical significance, and certain variables having the wrong direction of effect. Building new independent hourly models, with static or dynamic covariates, is proposed as the best solution to solve these issues. R2 values for hourly LUR models are mostly smaller than the R2 of the annual model, ranging from 0.07 to 0.8. Between 6 a.m. and 10 p.m. on weekdays the R2 approximates the annual model R2. Even though models of consecutive hours are developed independently, similar variables turn out to be significant. Using dynamic covariates instead of static covariates, i.e. hourly traffic intensities and hourly population densities, did not significantly improve the models' performance.
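
    Two of the strategies above, a single regression with hour-of-day dummies versus independent hourly regressions, can be sketched with scikit-learn. The covariate "traffic" and all coefficients are invented for illustration:

```python
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(5)
hours = rng.integers(0, 24, 2000)
traffic = rng.gamma(2.0, 50.0, 2000) * (1 + np.sin(hours / 24 * 2 * np.pi))
bc = 0.01 * traffic + 0.5 * (hours < 7) + rng.normal(0, 0.3, 2000)

# Strategy 1: one model with hour-of-day dummy variables.
X_dummy = np.column_stack([traffic, np.eye(24)[hours]])
r2_dummy = LinearRegression().fit(X_dummy, bc).score(X_dummy, bc)

# Strategy 2: an independent model per hour (dynamic covariates such as
# hourly traffic counts slot into the same loop).
r2_hourly = []
for h in range(24):
    m = hours == h
    X_h = traffic[m].reshape(-1, 1)
    r2_hourly.append(LinearRegression().fit(X_h, bc[m]).score(X_h, bc[m]))

print(f"dummy model R2 = {r2_dummy:.2f}, "
      f"hourly models R2 range = {min(r2_hourly):.2f}-{max(r2_hourly):.2f}")
```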

  1. Variable selection for mixture and promotion time cure rate models.

    Science.gov (United States)

    Masud, Abdullah; Tu, Wanzhu; Yu, Zhangsheng

    2016-11-16

    Failure-time data with cured patients are common in clinical studies. Data from these studies are typically analyzed with cure rate models. Variable selection methods have not been well developed for cure rate models. In this research, we propose two least absolute shrinkage and selection operators based methods, for variable selection in mixture and promotion time cure models with parametric or nonparametric baseline hazards. We conduct an extensive simulation study to assess the operating characteristics of the proposed methods. We illustrate the use of the methods using data from a study of childhood wheezing. © The Author(s) 2016.

  2. Regionalizing Africa: Patterns of Precipitation Variability in Observations and Global Climate Models

    Science.gov (United States)

    Badr, Hamada S.; Dezfuli, Amin K.; Zaitchik, Benjamin F.; Peters-Lidard, Christa D.

    2016-01-01

    Many studies have documented dramatic climatic and environmental changes that have affected Africa over different time scales. These studies often raise questions regarding the spatial extent and regional connectivity of changes inferred from observations and proxies and/or derived from climate models. Objective regionalization offers a tool for addressing these questions. To demonstrate this potential, applications of hierarchical climate regionalizations of Africa using observations and GCM historical simulations and future projections are presented. First, Africa is regionalized based on interannual precipitation variability using Climate Hazards Group Infrared Precipitation with Stations (CHIRPS) data for the period 1981–2014. A number of data processing techniques and clustering algorithms are tested to ensure a robust definition of climate regions. These regionalization results highlight the seasonal and even month-to-month specificity of regional climate associations across the continent, emphasizing the need to consider time of year as well as research question when defining a coherent region for climate analysis. CHIRPS regions are then compared to those of five GCMs for the historic period, with a focus on boreal summer. Results show that some GCMs capture the climatic coherence of the Sahel and associated teleconnections in a manner that is similar to observations, while other models break the Sahel into uncorrelated subregions or produce a Sahel-like region of variability that is spatially displaced from observations. Finally, shifts in climate regions under projected twenty-first-century climate change for different GCMs and emissions pathways are examined. A projected change is found in the coherence of the Sahel, in which the western and eastern Sahel become distinct regions with different teleconnections. This pattern is most pronounced in high-emissions scenarios.
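
    The regionalization step itself, hierarchical clustering of grid cells by the similarity of their interannual precipitation series, can be sketched in a few lines. The synthetic field below plants three regions and recovers them; real work, as in the paper, tests multiple preprocessing choices and clustering algorithms:

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster

rng = np.random.default_rng(6)
n_cells, n_years = 300, 34                    # grid cells x years (e.g. 1981-2014)
signal = rng.standard_normal((3, n_years))    # three underlying 'climate regions'
membership = rng.integers(0, 3, n_cells)
precip = signal[membership] + 0.5 * rng.standard_normal((n_cells, n_years))

# Distance = 1 - correlation of interannual series, then hierarchical clustering.
corr = np.corrcoef(precip)
d = 1.0 - corr[np.triu_indices(n_cells, k=1)]  # condensed distance vector
regions = fcluster(linkage(d, method="average"), t=3, criterion="maxclust")
print("recovered region sizes:", np.bincount(regions)[1:])
```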

  3. Application of soft computing based hybrid models in hydrological variables modeling: a comprehensive review

    Science.gov (United States)

    Fahimi, Farzad; Yaseen, Zaher Mundher; El-shafie, Ahmed

    2017-05-01

    Since the middle of the twentieth century, artificial intelligence (AI) models have been used widely in engineering and science problems. Water resource variable modeling and prediction are among the most challenging issues in water engineering. The artificial neural network (ANN) is a common approach used to tackle this problem by means of viable and efficient models. Numerous ANN models have been successfully developed to achieve more accurate results. In the current review, different ANN models in water resource applications and hydrological variable predictions are reviewed and outlined. In addition, recent hybrid models and their structures, input preprocessing, and optimization techniques are discussed and the results are compared with similar previous studies. Moreover, to achieve a comprehensive view of the literature, many articles that applied ANN models together with other techniques are included. Consequently, the coupling procedure, model evaluation, and performance comparison of hybrid models with conventional ANN models are assessed, as well as the taxonomy and structures of hybrid ANN models. Finally, current challenges and recommendations for future research are indicated and new hybrid approaches are proposed.

  4. Model Predictive Control of a Nonlinear System with Known Scheduling Variable

    DEFF Research Database (Denmark)

    Mirzaei, Mahmood; Poulsen, Niels Kjølstad; Niemann, Hans Henrik

    2012-01-01

    Model predictive control (MPC) of a class of nonlinear systems is considered in this paper. We will use a Linear Parameter Varying (LPV) model of the nonlinear system. By taking advantage of having future values of the scheduling variable, we will simplify state prediction. Consequently...... the control problem of the nonlinear system is simplified into a quadratic program. A wind turbine is chosen as the case study, and wind speed is chosen as the scheduling variable. Wind speed is measurable ahead of the turbine, therefore the scheduling variable is known for the entire prediction horizon....
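
    Because the scheduling variable is known over the horizon, the time-varying prediction matrices are fixed numbers and the MPC problem is an ordinary QP. A minimal sketch for a scalar LPV system using cvxpy as the QP modeling tool; the scheduling law, horizon and weights are invented:

```python
import numpy as np
import cvxpy as cp

# Scalar LPV system x+ = a(p) x + b u, with the scheduling variable p
# (e.g. wind speed) measured ahead of the turbine for the whole horizon.
horizon = 10
p_future = np.linspace(8.0, 12.0, horizon)    # known future scheduling values
a = 0.95 - 0.01 * p_future                    # illustrative scheduling law
b = 0.1

x = cp.Variable(horizon + 1)
u = cp.Variable(horizon)
constraints = [x[0] == 1.0, cp.abs(u) <= 1.0]
for k in range(horizon):
    constraints.append(x[k + 1] == a[k] * x[k] + b * u[k])

cost = cp.sum_squares(x) + 0.1 * cp.sum_squares(u)
cp.Problem(cp.Minimize(cost), constraints).solve()
print("first control move:", float(u.value[0]))
```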

  5. A geometric model for magnetizable bodies with internal variables

    Directory of Open Access Journals (Sweden)

    Restuccia, L

    2005-11-01

    Full Text Available In a geometrical framework for thermo-elasticity of continua with internal variables we consider a model of magnetizable media previously discussed and investigated by Maugin. We assume as state variables the magnetization together with its space gradient, subjected to evolution equations depending on both internal and external magnetic fields. We calculate the entropy function and necessary conditions for its existence.

  7. Examples of EOS Variables as compared to the UMM-Var Data Model

    Science.gov (United States)

    Cantrell, Simon; Lynnes, Chris

    2016-01-01

    In an effort to provide EOSDIS clients with a way to discover and use variable data from different providers, a Unified Metadata Model for Variables is being created. This presentation gives an overview of the model and the use cases we are handling.

  8. Speech-discrimination scores modeled as a binomial variable.

    Science.gov (United States)

    Thornton, A R; Raffin, M J

    1978-09-01

    Many studies have reported variability data for tests of speech discrimination, and the disparate results of these studies have not been given a simple explanation. Arguments over the relative merits of 25- vs 50-word tests have ignored the basic mathematical properties inherent in the use of percentage scores. The present study models performance on clinical tests of speech discrimination as a binomial variable. A binomial model was developed, and some of its characteristics were tested against data from 4120 scores obtained on the CID Auditory Test W-22. A table for determining significant deviations between scores was generated and compared to observed differences in half-list scores for the W-22 tests. Good agreement was found between predicted and observed values. Implications of the binomial characteristics of speech-discrimination scores are discussed.
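
    The practical payoff of the binomial model is a significance rule for differences between scores. The sketch below uses a generic pooled two-proportion approximation for a pair of n-word lists; it illustrates the idea rather than reproducing the exact table published with the W-22 data:

```python
from scipy import stats

def scores_differ(k1, k2, n, alpha=0.05):
    """Test whether two discrimination scores (k correct out of n words)
    differ by more than binomial measurement error alone would allow."""
    p = (k1 + k2) / (2 * n)                    # pooled estimate under H0
    se = (2 * p * (1 - p) / n) ** 0.5          # SE of the score difference
    z = abs(k1 - k2) / (n * se)
    return 2 * stats.norm.sf(z) < alpha

# 50-word lists: a 10-point difference is often within binomial error.
print(scores_differ(45, 40, 50))   # False: not significant
print(scores_differ(45, 30, 50))   # True: exceeds binomial variability
```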

  9. Optimal variable-grid finite-difference modeling for porous media

    International Nuclear Information System (INIS)

    Liu, Xinxin; Yin, Xingyao; Li, Haishan

    2014-01-01

    Numerical modeling of poroelastic waves by the finite-difference (FD) method is more expensive than that of acoustic or elastic waves. To improve the accuracy and computational efficiency of seismic modeling, variable-grid FD methods have been developed. In this paper, we derived optimal staggered-grid finite difference schemes with variable grid-spacing and time-step for seismic modeling in porous media. FD operators with small grid-spacing and time-step are adopted for low-velocity or small-scale geological bodies, while FD operators with big grid-spacing and time-step are adopted for high-velocity or large-scale regions. The dispersion relations of FD schemes were derived based on the plane wave theory, then the FD coefficients were obtained using the Taylor expansion. Dispersion analysis and modeling results demonstrated that the proposed method has higher accuracy with lower computational cost for poroelastic wave simulation in heterogeneous reservoirs. (paper)
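
    The variable-grid idea (fine spacing where the medium demands it, coarse elsewhere) can be illustrated with a generic three-point first-derivative stencil on a non-uniform grid; the paper's staggered-grid poroelastic operators are higher order but follow the same logic:

```python
import numpy as np

def d1_nonuniform(f, x):
    """Second-order first derivative on a non-uniform grid (3-point stencil)."""
    h_m = x[1:-1] - x[:-2]          # spacing to the left neighbor
    h_p = x[2:] - x[1:-1]           # spacing to the right neighbor
    return (h_m**2 * f[2:] - h_p**2 * f[:-2]
            + (h_p**2 - h_m**2) * f[1:-1]) / (h_m * h_p * (h_m + h_p))

# Fine grid in [0.4, 0.6] (a 'low-velocity body'), coarse elsewhere.
x = np.unique(np.concatenate([np.linspace(0, 1, 21),
                              np.linspace(0.4, 0.6, 41)]))
f = np.sin(2 * np.pi * x)
err = d1_nonuniform(f, x) - 2 * np.pi * np.cos(2 * np.pi * x[1:-1])
print(f"max abs error: {np.abs(err).max():.4f}")
```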

  10. Comparison of climate envelope models developed using expert-selected variables versus statistical selection

    Science.gov (United States)

    Brandt, Laura A.; Benscoter, Allison; Harvey, Rebecca G.; Speroterra, Carolina; Bucklin, David N.; Romañach, Stephanie; Watling, James I.; Mazzotti, Frank J.

    2017-01-01

    Climate envelope models are widely used to describe potential future distribution of species under different climate change scenarios. It is broadly recognized that there are both strengths and limitations to using climate envelope models and that outcomes are sensitive to initial assumptions, inputs, and modeling methods. Selection of predictor variables, a central step in modeling, is one of the areas where different techniques can yield varying results. Selection of climate variables to use as predictors is often done using statistical approaches that develop correlations between occurrences and climate data. These approaches have received criticism in that they rely on the statistical properties of the data rather than directly incorporating biological information about species responses to temperature and precipitation. We evaluated and compared models and prediction maps for 15 threatened or endangered species in Florida based on two variable selection techniques: expert opinion and a statistical method. We compared model performance between these two approaches for contemporary predictions, and the spatial correlation, spatial overlap and area predicted for contemporary and future climate predictions. In general, experts identified more variables as being important than the statistical method, and there was low overlap in the variable sets, although contemporary model performance was high for both approaches (>0.9 for area under the curve (AUC) and >0.7 for true skill statistic (TSS)). Spatial overlap, which compares the spatial configuration between maps constructed using the different variable selection techniques, was only moderate overall (about 60%), with a great deal of variability across species. Difference in spatial overlap was even greater under future climate projections, indicating additional divergence of model outputs from different variable selection techniques. Our work is in agreement with other studies which have found that for broad-scale species distribution modeling, using statistical methods of variable

  11. Parameter estimation of variable-parameter nonlinear Muskingum model using excel solver

    Science.gov (United States)

    Kang, Ling; Zhou, Liwei

    2018-02-01

    The Muskingum model is an effective flood routing technique in hydrology and water resources engineering. With the development of optimization technology, more and more variable-parameter Muskingum models have been presented in recent decades to improve the effectiveness of the Muskingum model. A variable-parameter nonlinear Muskingum model (NVPNLMM) is proposed in this paper. In two real and frequently used case studies, the NVPNLMM obtained better values of the evaluation criteria, which are used to quantify the quality of the estimated outflows and to compare the accuracy of flood routing across models, and its optimal estimated outflows were closer to the observed outflows than those of the other models.
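
    A fixed-parameter sketch of nonlinear Muskingum routing with storage S = K(xI + (1-x)O)^m, fitted by least squares with scipy (Excel Solver plays this role in the paper). The variable-parameter model lets K, x, m evolve during the flood, which slots into the same loop; the hydrograph values and starting guesses here are illustrative:

```python
import numpy as np
from scipy.optimize import minimize

def route(inflow, K, x, m, dt=1.0):
    """Nonlinear Muskingum routing: S = K * (x*I + (1-x)*O)**m."""
    O = np.empty_like(inflow)
    O[0] = inflow[0]
    S = K * (x * inflow[0] + (1 - x) * O[0]) ** m
    for t in range(len(inflow) - 1):
        O[t] = ((S / K) ** (1 / m) - x * inflow[t]) / (1 - x)
        # storage continuity, guarded against unphysical negative storage
        S = max(S + dt * (inflow[t] - O[t]), 1e-9)
    O[-1] = ((S / K) ** (1 / m) - x * inflow[-1]) / (1 - x)
    return O

def sse(params, inflow, observed):
    K, x, m = params
    return np.sum((route(inflow, K, x, m) - observed) ** 2)

# An illustrative single-peak flood hydrograph (not a published dataset).
inflow = np.array([22, 23, 35, 71, 103, 111, 109, 100, 86, 71,
                   59, 47, 39, 32, 28, 24, 22], float)
observed = route(inflow, K=0.8, x=0.3, m=1.5)   # synthetic 'observations'
fit = minimize(sse, x0=[1.0, 0.2, 1.3], args=(inflow, observed),
               method="Nelder-Mead",
               bounds=[(0.1, 5.0), (0.01, 0.49), (1.0, 3.0)])
print("estimated K, x, m:", np.round(fit.x, 2))
```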

  12. BehavePlus fire modeling system, version 5.0: Variables

    Science.gov (United States)

    Patricia L. Andrews

    2009-01-01

    This publication has been revised to reflect updates to version 4.0 of the BehavePlus software. It was originally published as the BehavePlus fire modeling system, version 4.0: Variables in July 2008. The BehavePlus fire modeling system is a computer program based on mathematical models that describe wildland fire behavior and effects and the...

  13. Partitioning the impacts of spatial and climatological rainfall variability in urban drainage modeling

    Science.gov (United States)

    Peleg, Nadav; Blumensaat, Frank; Molnar, Peter; Fatichi, Simone; Burlando, Paolo

    2017-03-01

    The performance of urban drainage systems is typically examined using hydrological and hydrodynamic models where rainfall input is uniformly distributed, i.e., derived from a single or very few rain gauges. When models are fed with a single uniformly distributed rainfall realization, the response of the urban drainage system to the rainfall variability remains unexplored. The goal of this study was to understand how climate variability and spatial rainfall variability, jointly or individually considered, affect the response of a calibrated hydrodynamic urban drainage model. A stochastic spatially distributed rainfall generator (STREAP - Space-Time Realizations of Areal Precipitation) was used to simulate many realizations of rainfall for a 30-year period, accounting for both climate variability and spatial rainfall variability. The generated rainfall ensemble was used as input into a calibrated hydrodynamic model (EPA SWMM - the US EPA's Storm Water Management Model) to simulate surface runoff and channel flow in a small urban catchment in the city of Lucerne, Switzerland. The variability of peak flows in response to rainfall of different return periods was evaluated at three different locations in the urban drainage network and partitioned among its sources. The main contribution to the total flow variability was found to originate from the natural climate variability (on average over 74 %). In addition, the relative contribution of the spatial rainfall variability to the total flow variability was found to increase with longer return periods. This suggests that while the use of spatially distributed rainfall data can supply valuable information for sewer network design (typically based on rainfall with return periods from 5 to 15 years), there is a more pronounced relevance when conducting flood risk assessments for larger return periods. The results show the importance of using multiple distributed rainfall realizations in urban hydrology studies to capture the
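
    A toy version of the partitioning: build an ensemble indexed by climate realization and spatial-rainfall realization, then compare the variance explained by each factor. The signal sizes are invented so that climate dominates, mirroring the qualitative finding above; the study itself does this with the STREAP generator feeding EPA SWMM:

```python
import numpy as np

rng = np.random.default_rng(9)
n_climate, n_spatial = 30, 20    # climate realizations x spatial realizations

# Synthetic peak flows: a climate signal plus a smaller spatial-rainfall signal.
climate = rng.normal(0.0, 3.0, (n_climate, 1))   # year-to-year variability
spatial = rng.normal(0.0, 1.0, (1, n_spatial))   # storm-structure variability
peaks = 50.0 + climate + spatial + rng.normal(0.0, 0.3, (n_climate, n_spatial))

total = peaks.var()
var_climate = peaks.mean(axis=1).var()           # between climate realizations
var_spatial = peaks.mean(axis=0).var()           # between spatial realizations
print(f"climate share: {var_climate / total:.0%}, "
      f"spatial share: {var_spatial / total:.0%}")
```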

  14. Modeling Turbulent Combustion for Variable Prandtl and Schmidt Number

    Science.gov (United States)

    Hassan, H. A.

    2004-01-01

    This report consists of two abstracts submitted for possible presentation at the AIAA Aerospace Science Meeting to be held in January 2005. Since the submittal of these abstracts we are continuing refinement of the model coefficients derived for the case of a variable Turbulent Prandtl number. The test cases being investigated are a Mach 9.2 flow over a degree ramp and a Mach 8.2 3-D calculation of crossing shocks. We have developed an axisymmetric code for treating axisymmetric flows. In addition the variable Schmidt number formulation was incorporated in the code and we are in the process of determining the model constants.

  15. A model to compare a defined benefit pension fund with a defined contribution provident fund

    Directory of Open Access Journals (Sweden)

    J.M. Nevin

    2003-12-01

    Full Text Available During 1994 universities and certain other institutions were given the option of setting up private retirement funds as an alternative to the AIPF. Because of the underfundedness of the AIPF only a substantially reduced Actuarial Reserve Value could be transferred to the new fund on behalf of each member. Employees at these institutions had to make the difficult decision of whether to remain a member of the AIPF or to join a new fund. Several institutions created defined contribution funds as an alternative to the AIPF. In such funds the member carries the investment risk and most institutions felt the need to provide some form of top-up of the Transfer Value. A simple mathematical model is formulated to aid in the comparison of expected retirement benefits under the AIPF and a private fund and to investigate the management problem of distributing additional top-up funds in a fair manner amongst the various age groups within the fund.
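
    A stripped-down version of such a comparison model: project the defined benefit from service and final salary, and accumulate defined contributions with an assumed investment return. All rates are invented placeholders; the paper's model calibrates to the AIPF and includes transfer-value top-ups, which this sketch omits:

```python
def db_pension(final_salary, years_service, accrual=0.02):
    """Defined benefit: pension = accrual rate x service x final salary."""
    return accrual * years_service * final_salary

def dc_fund(salary, years, contrib_rate=0.15, growth=0.06, salary_growth=0.03):
    """Defined contribution: accumulate contributions with investment return;
    the member, not the employer, carries the 'growth' risk."""
    fund = 0.0
    for _ in range(years):
        fund = fund * (1 + growth) + contrib_rate * salary
        salary *= 1 + salary_growth
    return fund

salary0, years = 300_000, 30
final_salary = salary0 * 1.03 ** years
print(f"DB annual pension:      {db_pension(final_salary, years):,.0f}")
print(f"DC fund at retirement:  {dc_fund(salary0, years):,.0f}")
# An annuity factor (roughly 12-15) converts the DC fund into a comparable
# annual pension; all figures above are assumptions, not AIPF values.
```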

  16. Impacts of correcting the inter-variable correlation of climate model outputs on hydrological modeling

    Science.gov (United States)

    Chen, Jie; Li, Chao; Brissette, François P.; Chen, Hua; Wang, Mingna; Essou, Gilles R. C.

    2018-05-01

    Bias correction is usually implemented prior to using climate model outputs for impact studies. However, bias correction methods that are commonly used treat climate variables independently and often ignore inter-variable dependencies. The effects of ignoring such dependencies on impact studies need to be investigated. This study aims to assess the impacts of correcting the inter-variable correlation of climate model outputs on hydrological modeling. To this end, a joint bias correction (JBC) method which corrects the joint distribution of two variables as a whole is compared with an independent bias correction (IBC) method; this is considered in terms of correcting simulations of precipitation and temperature from 26 climate models for hydrological modeling over 12 watersheds located in various climate regimes. The results show that the simulated precipitation and temperature are considerably biased not only in the individual distributions, but also in their correlations, which in turn result in biased hydrological simulations. In addition to reducing the biases of the individual characteristics of precipitation and temperature, the JBC method can also reduce the bias in precipitation-temperature (P-T) correlations. In terms of hydrological modeling, the JBC method performs significantly better than the IBC method for 11 out of the 12 watersheds over the calibration period. For the validation period, the advantages of the JBC method are greatly reduced as the performance becomes dependent on the watershed, GCM and hydrological metric considered. For arid/tropical and snowfall-rainfall-mixed watersheds, JBC performs better than IBC. For snowfall- or rainfall-dominated watersheds, however, the two methods behave similarly, with IBC performing somewhat better than JBC. Overall, the results emphasize the advantages of correcting the P-T correlation when using climate model-simulated precipitation and temperature to assess the impact of climate change on watershed
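
    For context, the sketch below implements the independent, per-variable step (empirical quantile mapping). Applying it separately to precipitation and temperature is exactly what leaves the P-T correlation uncorrected, which a joint bias correction additionally repairs by correcting the joint distribution. Data are synthetic:

```python
import numpy as np

def quantile_map(model_hist, obs, model_fut):
    """Independent bias correction (IBC) by empirical quantile mapping:
    each simulated value is replaced by the observed value at the same
    quantile of the historical simulation."""
    q = np.searchsorted(np.sort(model_hist), model_fut) / len(model_hist)
    q = np.clip(q, 0.0, 1.0)
    return np.quantile(obs, q)

rng = np.random.default_rng(7)
obs = rng.gamma(2.0, 3.0, 5000)               # observed precipitation
sim = rng.gamma(1.5, 5.0, 5000)               # biased model precipitation
corrected = quantile_map(sim, obs, sim)
print(f"obs mean {obs.mean():.2f}, raw sim {sim.mean():.2f}, "
      f"corrected {corrected.mean():.2f}")
```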

  17. A Time-Series Water Level Forecasting Model Based on Imputation and Variable Selection Method.

    Science.gov (United States)

    Yang, Jun-He; Cheng, Ching-Hsue; Chan, Chia-Pan

    2017-01-01

    Reservoirs are important for households and impact the national economy. This paper proposes a time-series forecasting model based on estimating missing values followed by variable selection to forecast the reservoir's water level. This study collected data from the Taiwan Shimen Reservoir as well as daily atmospheric data from 2008 to 2015. The two datasets were concatenated by date into an integrated research dataset. The proposed time-series forecasting model has three main steps. First, this study uses five imputation methods to estimate missing values instead of directly deleting them. Second, the key variables are identified via factor analysis, after which unimportant variables are deleted sequentially via the variable selection method. Finally, the proposed model uses a Random Forest to build the forecasting model of the reservoir's water level, and the results are compared with the listing method in terms of forecasting error. The experimental results indicate that the Random Forest forecasting model applied after variable selection on the full variable set has better forecasting performance than the listing model. In addition, the experiments show that the proposed variable selection can help the five forecast methods used here improve their forecasting capability.
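
    The three-step pipeline (impute, select variables, fit a Random Forest) maps naturally onto scikit-learn. The sketch below uses mean imputation and substitutes forest feature importances for the paper's factor-analysis selection step; data, thresholds and the missing-value rate are synthetic:

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.impute import SimpleImputer
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(8)
n = 1000
X = rng.standard_normal((n, 6))                   # daily atmospheric variables
y = 2 * X[:, 0] - X[:, 1] + 0.1 * rng.standard_normal(n)  # water level proxy
X[rng.random(X.shape) < 0.05] = np.nan            # ~5% missing records

# Step 1: imputation (mean here; the paper compares five strategies).
X_imp = SimpleImputer(strategy="mean").fit_transform(X)
# Step 2: variable selection, keeping the most important predictors.
rf = RandomForestRegressor(n_estimators=200, random_state=0).fit(X_imp, y)
keep = np.argsort(rf.feature_importances_)[-3:]
# Step 3: refit the forecasting model on the selected variables.
X_tr, X_te, y_tr, y_te = train_test_split(X_imp[:, keep], y, random_state=0)
final = RandomForestRegressor(n_estimators=200, random_state=0).fit(X_tr, y_tr)
print(f"held-out R2: {final.score(X_te, y_te):.2f}")
```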

  18. A Model for Positively Correlated Count Variables

    DEFF Research Database (Denmark)

    Møller, Jesper; Rubak, Ege Holger

    2010-01-01

    An α-permanental random field is briefly speaking a model for a collection of non-negative integer valued random variables with positive associations. Though such models possess many appealing probabilistic properties, many statisticians seem unaware of α-permanental random fields...... and their potential applications. The purpose of this paper is to summarize useful probabilistic results, study stochastic constructions and simulation techniques, and discuss some examples of α-permanental random fields. This should provide a useful basis for discussing the statistical aspects in future work....

  19. Interacting ghost dark energy models with variable G and Λ

    Science.gov (United States)

    Sadeghi, J.; Khurshudyan, M.; Movsisyan, A.; Farahani, H.

    2013-12-01

    In this paper we consider several phenomenological models of variable Λ. A model of a flat Universe with variable Λ and G is adopted. It is well known that varying G and Λ gives rise to modified field equations and modified conservation laws, which has led to many different manipulations and assumptions in the literature. We consider a two-component fluid, whose parameters enter Λ. The interaction between fluids with energy densities ρ1 and ρ2 is assumed to take the form Q = 3Hb(ρ1+ρ2). We numerically analyze important cosmological parameters such as the EoS parameter of the composed fluid and the deceleration parameter q of the model.

  20. Representing general theoretical concepts in structural equation models: The role of composite variables

    Science.gov (United States)

    Grace, J.B.; Bollen, K.A.

    2008-01-01

    Structural equation modeling (SEM) holds the promise of providing natural scientists the capacity to evaluate complex multivariate hypotheses about ecological systems. Building on its predecessors, path analysis and factor analysis, SEM allows for the incorporation of both observed and unobserved (latent) variables into theoretically-based probabilistic models. In this paper we discuss the interface between theory and data in SEM and the use of an additional variable type, the composite. In simple terms, composite variables specify the influences of collections of other variables and can be helpful in modeling heterogeneous concepts of the sort commonly of interest to ecologists. While long recognized as a potentially important element of SEM, composite variables have received very limited use, in part because of a lack of theoretical consideration, but also because of difficulties that arise in parameter estimation when using conventional solution procedures. In this paper we present a framework for discussing composites and demonstrate how the use of partially-reduced-form models can help to overcome some of the parameter estimation and evaluation problems associated with models containing composites. Diagnostic procedures for evaluating the most appropriate and effective use of composites are illustrated with an example from the ecological literature. It is argued that an ability to incorporate composite variables into structural equation models may be particularly valuable in the study of natural systems, where concepts are frequently multifaceted and the influence of suites of variables are often of interest. © Springer Science+Business Media, LLC 2007.

  1. Inter-model variability and biases of the global water cycle in CMIP3 coupled climate models

    International Nuclear Information System (INIS)

    Liepert, Beate G; Previdi, Michael

    2012-01-01

    Observed changes such as increasing global temperatures and the intensification of the global water cycle in the 20th century are robust results of coupled general circulation models (CGCMs). In spite of these successes, model-to-model variability and biases that are small in first-order climate responses nevertheless have considerable implications for climate predictability, especially when multi-model means are used. We show that most climate simulations of the 20th and 21st century A2 scenario performed with CMIP3 (Coupled Model Inter-comparison Project Phase 3) models have deficiencies in simulating the global atmospheric moisture balance. Large biases of only a few models (some biases reach the magnitude of the simulated global precipitation changes in the 20th and 21st centuries) affect the multi-model mean global moisture budget. An imbalanced flux of −0.14 Sv exists in the multi-model mean, while the multi-model median imbalance is only −0.02 Sv. Moreover, for most models the detected imbalance changes over time. As a consequence, in 13 of the 18 CMIP3 models examined, global annual mean precipitation exceeds global evaporation, indicating that there should be a ‘leaking’ of moisture from the atmosphere, whereas for the remaining five models a ‘flooding’ is implied. Nonetheless, in all models the actual atmospheric moisture content and its variability correctly increase during the course of the 20th and 21st centuries. These discrepancies therefore imply an unphysical and hence ‘ghost’ sink/source of atmospheric moisture in the models whose atmospheres flood/leak. The ghost source/sink of moisture can also be regarded as atmospheric latent heating/cooling and hence as a positive/negative perturbation of the atmospheric energy budget, or non-radiative forcing, in the range of −1 to +6 W m⁻² (median +0.1 W m⁻²). The inter-model variability of the global atmospheric moisture transport from oceans to land areas, which impacts the terrestrial water cycle, is also quite high and ranges
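
    The moisture-budget diagnostic at the heart of this analysis is a simple area-weighted global integral of evaporation minus precipitation. A minimal sketch is shown below, assuming regular latitude-longitude grids of E and P in kg m⁻² s⁻¹; it is an illustrative reconstruction of the diagnostic, not the authors' code.

        import numpy as np

        def global_moisture_imbalance_sv(evap, precip, lat, lon):
            """Area-weighted global integral of E - P, returned in Sverdrups
            (1 Sv = 1e6 m^3 s^-1 of water, i.e. ~1e9 kg s^-1 at 1000 kg m^-3)."""
            R = 6.371e6                                    # Earth radius, m
            dlat = np.deg2rad(abs(lat[1] - lat[0]))
            dlon = np.deg2rad(abs(lon[1] - lon[0]))
            area = (R ** 2) * dlat * dlon * np.cos(np.deg2rad(lat))[:, None]  # cell areas, m^2
            flux = ((evap - precip) * area).sum()          # net moisture flux, kg s^-1
            return flux / 1.0e9                            # convert to Sv

    A perfectly closed atmospheric moisture budget gives zero in the long-term mean; the −0.14 Sv multi-model mean quoted above is the ‘ghost’ residual.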

  2. How ocean lateral mixing changes Southern Ocean variability in coupled climate models

    Science.gov (United States)

    Pradal, M. A. S.; Gnanadesikan, A.; Thomas, J. L.

    2016-02-01

    The lateral mixing of tracers represents a major uncertainty in the formulation of coupled climate models. The mixing of tracers along density surfaces in the interior and horizontally within the mixed layer is often parameterized using a mixing coefficient ARedi. The models used in the Coupled Model Intercomparison Project 5 exhibit more than an order of magnitude range in the values of this coefficient used within the Southern Ocean. The impacts of such uncertainty on Southern Ocean variability have remained unclear, even as recent work has shown that this variability differs between different models. In this poster, we change the lateral mixing coefficient within GFDL ESM2Mc, a coarse-resolution Earth System model that nonetheless has a reasonable circulation within the Southern Ocean. As the coefficient varies from 400 to 2400 m²/s, the amplitude of the variability changes significantly. The low-mixing case shows strong decadal variability, with an annual mean RMS temperature variability exceeding 1 °C in the Circumpolar Current. The highest-mixing case shows a very similar spatial pattern of variability, but with amplitudes only about 60% as large. The suppression of variability by mixing is larger in the Atlantic sector of the Southern Ocean relative to the Pacific sector. We examine the salinity budgets of convective regions, paying particular attention to the extent to which high mixing prevents the buildup of low-salinity waters that are capable of shutting off deep convection entirely.

  3. Statistical methodology for discrete fracture model - including fracture size, orientation uncertainty together with intensity uncertainty and variability

    Energy Technology Data Exchange (ETDEWEB)

    Darcel, C. (Itasca Consultants SAS (France)); Davy, P.; Le Goc, R.; Dreuzy, J.R. de; Bour, O. (Geosciences Rennes, UMR 6118 CNRS, Univ. de Rennes, Rennes (France))

    2009-11-15

    the lineament scale (k_t = 2) on the other, addresses the issue of the nature of the transition. We develop a new 'mechanistic' model that could help in modeling why and where this transition can occur. The transition between both regimes would occur for a fracture length of 1-10 m, and even at a smaller scale for the few outcrops that follow the self-similar density model. A consequence for the disposal issue is that the model that is likely to apply in the 'blind' scale window between 10-100 m is the self-similar model as it is defined for large-scale lineaments. The self-similar model, as it is measured for some outcrops and most lineament maps, is definitely worth investigating as a reference for scales above 1-10 m. In the rest of the report, we develop a methodology for incorporating uncertainty and variability into the DFN modeling. Fracturing properties arise from complex processes which produce an intrinsic variability; characterizing this variability, as an admissible variation of model parameters or as the division of the site into subdomains with distinct DFN models, is a critical point of the modeling effort. Moreover, the DFN model encompasses a part of uncertainty, due to inherent data uncertainties and sampling limits. Both effects must be quantified and incorporated into the DFN site model definition process. In that context, all available borehole data, including recordings of fracture intercept positions, pole orientations and relative uncertainties, are used as the basis for the methodological development and further site model assessment. An elementary dataset contains a set of discrete fracture intercepts from which a parent orientation/density distribution can be computed. The elementary bricks of the site, from which these initial parent density distributions are computed, rely on the former Single Hole Interpretation division of the boreholes into sections whose local boundaries are expected to reflect - locally - geology

  4. A Composite Likelihood Inference in Latent Variable Models for Ordinal Longitudinal Responses

    Science.gov (United States)

    Vasdekis, Vassilis G. S.; Cagnone, Silvia; Moustaki, Irini

    2012-01-01

    The paper proposes a composite likelihood estimation approach that uses bivariate instead of multivariate marginal probabilities for ordinal longitudinal responses using a latent variable model. The model considers time-dependent latent variables and item-specific random effects to account for the interdependencies of the multivariate…
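
    In symbols, the pairwise composite likelihood replaces the full T-dimensional joint probability of each response vector with a sum over bivariate margins (a standard formulation, sketched here in generic notation rather than the paper's):

        \ell_{CL}(\theta) = \sum_{i=1}^{n} \sum_{t<s} \log P\left(y_{it}, y_{is};\, \theta\right)

    Each bivariate probability requires only low-dimensional integration over the latent variables, which is what makes estimation tractable for long ordinal response sequences.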

  5. Input variable selection for data-driven models of Coriolis flowmeters for two-phase flow measurement

    International Nuclear Information System (INIS)

    Wang, Lijuan; Yan, Yong; Wang, Xue; Wang, Tao

    2017-01-01

    Input variable selection is an essential step in the development of data-driven models for environmental, biological and industrial applications. Through input variable selection to eliminate the irrelevant or redundant variables, a suitable subset of variables is identified as the input of a model. Meanwhile, through input variable selection the complexity of the model structure is simplified and the computational efficiency is improved. This paper describes the procedures of the input variable selection for the data-driven models for the measurement of liquid mass flowrate and gas volume fraction under two-phase flow conditions using Coriolis flowmeters. Three advanced input variable selection methods, including partial mutual information (PMI), genetic algorithm-artificial neural network (GA-ANN) and tree-based iterative input selection (IIS) are applied in this study. Typical data-driven models incorporating support vector machine (SVM) are established individually based on the input candidates resulting from the selection methods. The validity of the selection outcomes is assessed through an output performance comparison of the SVM based data-driven models and sensitivity analysis. The validation and analysis results suggest that the input variables selected from the PMI algorithm provide more effective information for the models to measure liquid mass flowrate while the IIS algorithm provides fewer but more effective variables for the models to predict gas volume fraction. (paper)
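
    As a rough illustration of the idea (not the paper's PMI algorithm, which additionally conditions each candidate on the variables already selected), a greedy mutual-information ranking can be sketched in a few lines; function and parameter names here are illustrative.

        import numpy as np
        from sklearn.feature_selection import mutual_info_regression

        def greedy_mi_selection(X, y, n_select=5):
            """Forward selection of input variables ranked by estimated mutual
            information with the target (e.g. liquid mass flowrate). Unlike
            full PMI, no partialling-out of already-selected inputs is done."""
            remaining = list(range(X.shape[1]))
            selected = []
            for _ in range(n_select):
                mi = mutual_info_regression(X[:, remaining], y)
                best = remaining[int(np.argmax(mi))]   # most informative candidate
                selected.append(best)
                remaining.remove(best)
            return selected

    The selected subset would then feed an SVM (or any other regressor), with validity checked by the kind of output-performance comparison and sensitivity analysis described above.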

  6. Ensembling Variable Selectors by Stability Selection for the Cox Model

    Directory of Open Access Journals (Sweden)

    Qing-Yan Yin

    2017-01-01

    Full Text Available As a pivotal tool to build interpretive models, variable selection plays an increasingly important role in high-dimensional data analysis. In recent years, variable selection ensembles (VSEs) have gained much interest due to their many advantages. Stability selection (Meinshausen and Bühlmann, 2010), a VSE technique based on subsampling in combination with a base algorithm like lasso, is an effective method to control the false discovery rate (FDR) and to improve selection accuracy in linear regression models. By adopting lasso as a base learner, we attempt to extend stability selection to handle variable selection problems in a Cox model. According to our experience, it is crucial to set the regularization region Λ in lasso and the parameter λmin properly so that stability selection can work well. To the best of our knowledge, however, there is no literature addressing this problem in an explicit way. Therefore, we first provide a detailed procedure to specify Λ and λmin. Then, some simulated and real-world data with various censoring rates are used to examine how well stability selection performs. It is also compared with several other variable selection approaches. Experimental results demonstrate that it achieves better or competitive performance in comparison with several other popular techniques.
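
    The subsampling logic of stability selection is compact enough to sketch. Below is a minimal, hedged version with lasso as the base learner on a generic regression response; for the Cox model one would swap the Lasso fit for a penalized Cox fitter (e.g. from glmnet or scikit-survival), keeping the rest unchanged. The grid lambdas plays the role of the regularization region Λ discussed in the paper.

        import numpy as np
        from sklearn.linear_model import Lasso

        def stability_selection(X, y, lambdas, n_subsamples=100, threshold=0.6, rng=None):
            """Meinshausen-Buhlmann stability selection: repeatedly subsample
            half the data, record which coefficients are ever nonzero along
            the regularization path, and keep variables whose selection
            frequency exceeds the threshold."""
            rng = np.random.default_rng() if rng is None else rng
            n, p = X.shape
            freq = np.zeros(p)
            for _ in range(n_subsamples):
                idx = rng.choice(n, size=n // 2, replace=False)   # subsample half the data
                sel = np.zeros(p, dtype=bool)
                for lam in lambdas:                               # selected anywhere on the path
                    coef = Lasso(alpha=lam, max_iter=5000).fit(X[idx], y[idx]).coef_
                    sel |= coef != 0
                freq += sel
            freq /= n_subsamples
            return np.where(freq >= threshold)[0], freq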

  7. Defining epidemics in computer simulation models: How do definitions influence conclusions?

    Directory of Open Access Journals (Sweden)

    Carolyn Orbann

    2017-06-01

    Full Text Available Computer models have proven to be useful tools in studying epidemic disease in human populations. Such models are being used by a broader base of researchers, and it has become more important to ensure that descriptions of model construction and data analyses are clear and communicate important features of model structure. Papers describing computer models of infectious disease often lack a clear description of how the data are aggregated and whether or not non-epidemic runs are excluded from analyses. Given that there is no concrete quantitative definition of what constitutes an epidemic within the public health literature, each modeler must decide on a strategy for identifying epidemics during simulation runs. Here, an SEIR model was used to test how varying the cutoff for considering a run an epidemic changes potential interpretations of simulation outcomes. Varying the cutoff from 0% to 15% of the model population ever infected with the illness generated significant differences in the number of deaths and in timing variables. These results are important for those who use models to form public health policy, in which questions of timing or implementation of interventions might be answered using findings from computer simulation models.
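
    The sensitivity being described is easy to reproduce with a toy stochastic SEIR model. The sketch below (illustrative parameter values, not the paper's) classifies each run as an epidemic or not according to a cutoff on the fraction of the population ever infected, and shows how the summary statistics shift with the cutoff.

        import numpy as np

        def seir_final_size(n=1000, beta=0.4, sigma=0.2, gamma=0.1, rng=None):
            """One stochastic SEIR run (discrete-time binomial chain);
            returns the fraction of the population ever infected."""
            rng = np.random.default_rng() if rng is None else rng
            S, E, I, R = n - 1, 1, 0, 0
            while E + I > 0:
                new_e = rng.binomial(S, 1 - np.exp(-beta * I / n))  # S -> E
                new_i = rng.binomial(E, 1 - np.exp(-sigma))         # E -> I
                new_r = rng.binomial(I, 1 - np.exp(-gamma))         # I -> R
                S, E, I, R = S - new_e, E + new_e - new_i, I + new_i - new_r, R + new_r
            return R / n

        runs = np.array([seir_final_size() for _ in range(500)])
        for cutoff in (0.0, 0.05, 0.15):   # candidate 'epidemic' thresholds on attack rate
            epi = runs[runs > cutoff]
            print(f"cutoff {cutoff:.2f}: {epi.size} epidemic runs, mean attack rate {epi.mean():.2f}")

    Runs that die out by chance contribute near-zero attack rates, so the choice of cutoff directly changes both the number of runs analysed and the apparent severity, which is the paper's point.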

  8. Multiscale thermohydrologic model: addressing variability and uncertainty at Yucca Mountain

    International Nuclear Information System (INIS)

    Buscheck, T; Rosenberg, N D; Gansemer, J D; Sun, Y

    2000-01-01

    Performance assessment and design evaluation require a modeling tool that simultaneously accounts for processes occurring at a scale of a few tens of centimeters around individual waste packages and emplacement drifts, and also on behavior at the scale of the mountain. Many processes and features must be considered, including non-isothermal, multiphase-flow in rock of variable saturation and thermal radiation in open cavities. Also, given the nature of the fractured rock at Yucca Mountain, a dual-permeability approach is needed to represent permeability. A monolithic numerical model with all these features requires too large a computational cost to be an effective simulation tool, one that is used to examine sensitivity to key model assumptions and parameters. We have developed a multi-scale modeling approach that effectively simulates 3D discrete-heat-source, mountain-scale thermohydrologic behavior at Yucca Mountain and captures the natural variability of the site consistent with what we know from site characterization and waste-package-to-waste-package variability in heat output. We describe this approach and present results examining the role of infiltration flux, the most important natural-system parameter with respect to how thermohydrologic behavior influences the performance of the repository

  9. Quantum theory with an energy operator defined as a quartic form of the momentum

    Energy Technology Data Exchange (ETDEWEB)

    Bezák, Viktor, E-mail: bezak@fmph.uniba.sk

    2016-09-15

    Quantum theory of the non-harmonic oscillator defined by the energy operator proposed by Yurke and Buks (2006) is presented. Although these authors considered a specific problem related to a model of transmission lines in a Kerr medium, our ambition is not to discuss the physical substantiation of their model. Instead, we consider the problem from an abstract, logically deductive, viewpoint. Using the Yurke–Buks energy operator, we focus attention on the imaginary-time propagator. We derive it as a functional of the Mehler kernel and, alternatively, as an exact series involving Hermite polynomials. For a statistical ensemble of identical oscillators defined by the Yurke–Buks energy operator, we calculate the partition function, average energy, free energy and entropy. Using the diagonal element of the canonical density matrix of this ensemble in the coordinate representation, we define a probability density, which appears to be a deformed Gaussian distribution. A peculiarity of this probability density is that it may reveal, when plotted as a function of the position variable, a shape with two peaks located symmetrically with respect to the central point.
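
    The thermodynamic quantities listed in the abstract follow from the spectrum {E_n} of the Yurke-Buks operator through the generic canonical-ensemble relations (standard formulas, restated here rather than taken from the paper):

        Z(\beta) = \sum_n e^{-\beta E_n}, \qquad
        \langle E \rangle = -\frac{\partial \ln Z}{\partial \beta}, \qquad
        F = -\frac{1}{\beta} \ln Z, \qquad
        S = k_B\, \beta \left( \langle E \rangle - F \right),

    and the deformed-Gaussian probability density is the normalized diagonal of the canonical density matrix in the coordinate representation, p(x) = \langle x | e^{-\beta \hat{H}} | x \rangle / Z(\beta).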

  10. Modelling survival

    DEFF Research Database (Denmark)

    Ashauer, Roman; Albert, Carlo; Augustine, Starrlight

    2016-01-01

    The General Unified Threshold model for Survival (GUTS) integrates previously published toxicokinetic-toxicodynamic models and estimates survival with explicitly defined assumptions. Importantly, GUTS accounts for time-variable exposure to the stressor. We performed three studies to test...
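
    To make the structure concrete, the reduced stochastic-death variant (GUTS-RED-SD) can be written in three lines; the notation below follows common GUTS usage and is a sketch, not necessarily this paper's exact formulation:

        \frac{dD(t)}{dt} = k_d \left( C_w(t) - D(t) \right), \qquad
        h(t) = b\, \max\!\left( 0,\, D(t) - z \right) + h_b, \qquad
        S(t) = \exp\!\left( -\int_0^t h(\tau)\, d\tau \right),

    where C_w(t) is the time-variable external concentration, D(t) the scaled damage, z a threshold, b a killing rate, and h_b the background hazard. Time-variable exposure enters simply through C_w(t) in the damage equation.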

  11. A Time-Series Water Level Forecasting Model Based on Imputation and Variable Selection Method

    Directory of Open Access Journals (Sweden)

    Jun-He Yang

    2017-01-01

    Full Text Available Reservoirs are important for households and impact the national economy. This paper proposes a time-series forecasting model that first estimates missing values and then applies variable selection to forecast the reservoir's water level. This study collected data from the Taiwan Shimen Reservoir as well as daily atmospheric data from 2008 to 2015; the two datasets were merged by date into an integrated research dataset. The proposed time-series forecasting model has three foci. First, this study uses five imputation methods to handle the missing values. Second, we identify the key variables via factor analysis and then delete the unimportant variables sequentially via the variable selection method. Finally, the proposed model uses a Random Forest to build the forecasting model of the reservoir's water level, which is compared with the listed methods in terms of forecasting error. The experimental results indicate that the Random Forest forecasting model, applied both with variable selection and with the full variable set, has better forecasting performance than the listed models. In addition, the experiments show that the proposed variable selection improves the forecasting capability of the five forecasting methods used here.
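
    A compressed sketch of the pipeline is given below, assuming a merged pandas DataFrame df with a 'level' column and daily atmospheric predictors (hypothetical names). KNN imputation stands in for one of the five imputation methods, and importance-based reduction stands in for the factor-analysis step; this is an illustration of the workflow, not the authors' code.

        import numpy as np
        import pandas as pd
        from sklearn.ensemble import RandomForestRegressor
        from sklearn.impute import KNNImputer
        from sklearn.metrics import mean_absolute_error

        def forecast_water_level(df, target="level", n_keep=10):
            X, y = df.drop(columns=[target]), df[target]
            X = pd.DataFrame(KNNImputer(n_neighbors=5).fit_transform(X),
                             columns=X.columns)               # impute missing values
            cut = int(len(df) * 0.8)                          # chronological train/test split
            rf = RandomForestRegressor(n_estimators=500, random_state=0)
            rf.fit(X.iloc[:cut], y.iloc[:cut])
            keep = X.columns[np.argsort(rf.feature_importances_)[::-1][:n_keep]]
            rf.fit(X.iloc[:cut][keep], y.iloc[:cut])          # refit on selected variables
            pred = rf.predict(X.iloc[cut:][keep])
            return mean_absolute_error(y.iloc[cut:], pred), list(keep)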

  12. Adaptation of endothelial cells to physiologically-modeled, variable shear stress.

    Directory of Open Access Journals (Sweden)

    Joseph S Uzarski

    Full Text Available Endothelial cell (EC) function is mediated by variable hemodynamic shear stress patterns at the vascular wall, where complex shear stress profiles directly correlate with blood flow conditions that vary temporally based on metabolic demand. The interactions of these more complex and variable shear fields with EC have not been represented in hemodynamic flow models. We hypothesized that EC exposed to pulsatile shear stress that changes in magnitude and duration, modeled directly from real-time physiological variations in heart rate, would elicit phenotypic changes as relevant to their critical roles in thrombosis, hemostasis, and inflammation. Here we designed a physiological flow (PF) model based on short-term temporal changes in blood flow observed in vivo and compared it to static culture and steady flow (SF) at a fixed pulse frequency of 1.3 Hz. Results show significant changes in gene regulation as a function of temporally variable flow, indicating a reduced wound phenotype more representative of quiescence. EC cultured under PF exhibited significantly higher endothelial nitric oxide synthase (eNOS) activity (PF: 176.0±11.9 nmol/10^5 EC; SF: 115.0±12.5 nmol/10^5 EC, p = 0.002) and lower TNF-α-induced HL-60 leukocyte adhesion (PF: 37±6 HL-60 cells/mm²; SF: 111±18 HL-60 cells/mm², p = 0.003) than cells cultured under SF, which is consistent with a more quiescent anti-inflammatory and anti-thrombotic phenotype. In vitro models have become increasingly adept at mimicking natural physiology and in doing so have clarified the importance of both chemical and physical cues that drive cell function. These data illustrate that the variability in metabolic demand and subsequent changes in perfusion resulting in constantly variable shear stress plays a key role in EC function that has not previously been described.

  13. Oracle Efficient Variable Selection in Random and Fixed Effects Panel Data Models

    DEFF Research Database (Denmark)

    Kock, Anders Bredahl

    This paper generalizes the results for the Bridge estimator of Huang et al. (2008) to linear random and fixed effects panel data models which are allowed to grow in both dimensions. In particular we show that the Bridge estimator is oracle efficient. It can correctly distinguish between relevant...... and irrelevant variables and the asymptotic distribution of the estimators of the coefficients of the relevant variables is the same as if only these had been included in the model, i.e. as if an oracle had revealed the true model prior to estimation. In the case of more explanatory variables than observations......, we prove that the Marginal Bridge estimator can asymptotically correctly distinguish between relevant and irrelevant explanatory variables. We do this without restricting the dependence between covariates and without assuming sub Gaussianity of the error terms thereby generalizing the results...
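
    For reference, the Bridge estimator minimizes a least-squares criterion with an ℓγ penalty, 0 < γ < 1; in a fixed-effects panel setting this reads (a standard form, restated here rather than quoted from the paper):

        \hat{\beta} = \arg\min_{\beta,\,\mu}\; \sum_{i=1}^{N} \sum_{t=1}^{T} \left( y_{it} - \mu_i - x_{it}'\beta \right)^2 + \lambda_N \sum_{j=1}^{p} |\beta_j|^{\gamma}, \qquad 0 < \gamma < 1,

    where the concavity of the penalty (γ < 1) is what allows exact zeros, and hence the oracle-style separation of relevant from irrelevant variables described above.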

  14. Convex trace functions of several variables

    DEFF Research Database (Denmark)

    Hansen, Frank

    2002-01-01

    We prove that the function (x1,...,xk) ↦ Tr(f(x1,...,xk)), defined on k-tuples of symmetric matrices of order (n1,...,nk) in the domain of f, is convex for any convex function f of k variables. The matrix f(x1,...,xk) is defined by the functional calculus for functions of several variables, and it ...

  15. Variable Renewable Energy in Long-Term Planning Models: A Multi-Model Perspective

    Energy Technology Data Exchange (ETDEWEB)

    Cole, Wesley J. [National Renewable Energy Laboratory (NREL), Golden, CO (United States); Frew, Bethany A. [National Renewable Energy Laboratory (NREL), Golden, CO (United States); Mai, Trieu T. [National Renewable Energy Laboratory (NREL), Golden, CO (United States); Sun, Yinong [National Renewable Energy Laboratory (NREL), Golden, CO (United States); Bistline, John [Electric Power Research Inst., Palo Alto, CA (United States); Blanford, Geoffrey [Electric Power Research Inst., Palo Alto, CA (United States); Young, David [Electric Power Research Inst., Palo Alto, CA (United States); Marcy, Cara [Energy Information Administration, Washington, DC (United States); Namovicz, Chris [Energy Information Administration, Washington, DC (United States); Edelman, Risa [Environmental Protection Agency, Washington, DC (United States); Meroney, Bill [Environmental Protection Agency]; Sims, Ryan [Environmental Protection Agency]; Stenhouse, Jeb [Environmental Protection Agency]; Donohoo-Vallett, Paul [U.S. Department of Energy]

    2017-11-03

    Long-term capacity expansion models of the U.S. electricity sector have long been used to inform electric sector stakeholders and decision makers. With the recent surge in variable renewable energy (VRE) generators - primarily wind and solar photovoltaics - the need to appropriately represent VRE generators in these long-term models has increased. VRE generators are especially difficult to represent for a variety of reasons, including their variability, uncertainty, and spatial diversity. To assess current best practices, share methods and data, and identify future research needs for VRE representation in capacity expansion models, four capacity expansion modeling teams from the Electric Power Research Institute, the U.S. Energy Information Administration, the U.S. Environmental Protection Agency, and the National Renewable Energy Laboratory conducted two workshops of VRE modeling for national-scale capacity expansion models. The workshops covered a wide range of VRE topics, including transmission and VRE resource data, VRE capacity value, dispatch and operational modeling, distributed generation, and temporal and spatial resolution. The objectives of the workshops were both to better understand these topics and to improve the representation of VRE across the suite of models. Given these goals, each team incorporated model updates and performed additional analyses between the first and second workshops. This report summarizes the analyses and model 'experiments' that were conducted as part of these workshops as well as the various methods for treating VRE among the four modeling teams. The report also reviews the findings and learnings from the two workshops. We emphasize the areas where there is still need for additional research and development on analysis tools to incorporate VRE into long-term planning and decision-making.

  16. Separation of variables in anisotropic models: anisotropic Rabi and elliptic Gaudin model in an external magnetic field

    Science.gov (United States)

    Skrypnyk, T.

    2017-08-01

    We study the problem of separation of variables for classical integrable Hamiltonian systems governed by non-skew-symmetric non-dynamical so(3)⊗so(3)-valued elliptic r-matrices with spectral parameters. We consider several examples of such models, and perform separation of variables for classical anisotropic one- and two-spin Gaudin-type models in an external magnetic field, and for Jaynes-Cummings-Dicke-type models without the rotating wave approximation.

  17. Modelling the effects of spatial variability on radionuclide migration

    International Nuclear Information System (INIS)

    1998-01-01

    The NEA workshop reflects the present status of national waste management programmes, specifically regarding spatial variability and performance assessment of geologic disposal sites for deep repository systems. The four sessions were: Spatial Variability: Its Definition and Significance to Performance Assessment and Site Characterisation; Experience with the Modelling of Radionuclide Migration in the Presence of Spatial Variability in Various Geological Environments; New Areas for Investigation: Two Personal Views; What is Wanted and What is Feasible: Views and Future Plans in Selected Waste Management Organisations. The 26 papers presented in the four oral sessions and the poster session have been abstracted and indexed individually for the INIS database. (R.P.)

  18. AeroPropulsoServoElasticity: Dynamic Modeling of the Variable Cycle Propulsion System

    Science.gov (United States)

    Kopasakis, George

    2012-01-01

    This presentation was made at the 2012 Fundamental Aeronautics Program Technical Conference and covers research on dynamic modeling of the variable cycle propulsion system done under the Supersonics Project, in the area of AeroPropulsoServoElasticity. The presentation covers the objective of the propulsion system dynamic modeling work, followed by the work that has been done so far to model the variable cycle engine, the modeling of the inlet and the nozzle, the modeling of the effects of flow distortion, and finally some concluding remarks and future plans.

  19. Ocean carbon and heat variability in an Earth System Model

    Science.gov (United States)

    Thomas, J. L.; Waugh, D.; Gnanadesikan, A.

    2016-12-01

    Ocean carbon and heat content are very important for regulating global climate. Furthermore, due to a lack of observations and dependence on parameterizations, there has been little consensus in the modeling community on the magnitude of realistic ocean carbon and heat content variability, particularly in the Southern Ocean. We assess the differences between global oceanic heat and carbon content variability in GFDL ESM2Mc using a 500-year, pre-industrial control simulation. The global carbon and heat content are directly out of phase with each other; however, in the Southern Ocean the heat and carbon content are in phase. The global heat multi-decadal variability is primarily explained by variability in the tropics and mid-latitudes, while the variability in global carbon content is primarily explained by Southern Ocean variability. In order to test the robustness of this relationship, we use three additional pre-industrial control simulations with different mesoscale mixing parameterizations. Three pre-industrial control simulations are conducted with the along-isopycnal diffusion coefficient (Aredi) set to constant values of 400, 800 (control) and 2400 m² s⁻¹. These values of Aredi are within the range of parameter settings commonly used by modeling groups. Finally, one pre-industrial control simulation is conducted where the minimum in the Gent-McWilliams parameterization closure scheme (AGM) is increased to 600 m² s⁻¹. We find that the different simulations have very different multi-decadal variability, especially in the Weddell Sea, where the characteristics of deep convection are drastically changed. While the temporal frequency and amplitude of the global heat and carbon content changes vary significantly, the overall spatial pattern of variability remains unchanged between the simulations.

  20. Predictive-property-ranked variable reduction in partial least squares modelling with final complexity adapted models: comparison of properties for ranking.

    Science.gov (United States)

    Andries, Jan P M; Vander Heyden, Yvan; Buydens, Lutgarde M C

    2013-01-14

    The calibration performance of partial least squares regression for one response (PLS1) can be improved by eliminating uninformative variables. Many variable-reduction methods are based on so-called predictor-variable properties or predictive properties, which are functions of various PLS-model parameters and which may change during the steps of the variable-reduction process. Recently, a new predictive-property-ranked variable reduction method with final complexity adapted models, denoted PPRVR-FCAM or simply FCAM, was introduced. It is a backward variable elimination method applied to the predictive-property-ranked variables. The variable number is first reduced, with constant PLS1 model complexity A, until A variables remain, followed by a further decrease in PLS complexity, allowing the final selection of small numbers of variables. In this study the utility and effectiveness of six individual and nine combined predictor-variable properties are investigated for three data sets when used in the FCAM method. The individual properties include the absolute value of the PLS1 regression coefficient (REG), the significance of the PLS1 regression coefficient (SIG), the norm of the loading weight (NLW) vector, the variable importance in the projection (VIP), the selectivity ratio (SR), and the squared correlation coefficient of a predictor variable with the response y (COR). The selective and predictive performances of the models resulting from the use of these properties are statistically compared using the one-tailed Wilcoxon signed rank test. The results indicate that the models resulting from variable reduction with the FCAM method, using individual or combined properties, have similar or better predictive abilities than the full spectrum models. After mean-centring of the data, REG and SIG provide low numbers of informative variables, with a meaning relevant to the response, lower than the other individual properties, while the predictive abilities are
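
    The core loop of a REG-ranked backward elimination is short enough to sketch. The version below is a simplified, hedged rendering of the FCAM idea using scikit-learn's PLS: complexity A is held fixed while variables are removed one at a time by smallest absolute PLS1 regression coefficient, and the final complexity-adaptation stage is omitted.

        import numpy as np
        from sklearn.cross_decomposition import PLSRegression

        def reg_ranked_backward_elimination(X, y, n_components=5, n_final=10):
            """Backward variable elimination on predictor variables ranked by
            the absolute PLS1 regression coefficient (the REG property)."""
            keep = np.arange(X.shape[1])
            while keep.size > max(n_final, n_components):
                pls = PLSRegression(n_components=n_components).fit(X[:, keep], y)
                coefs = np.abs(pls.coef_).ravel()        # REG property per variable
                keep = np.delete(keep, int(np.argmin(coefs)))
                # (in practice, cross-validated RMSEP is monitored at each step)
            return keep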

  1. Effect of climate variables on cocoa black pod incidence in Sabah using ARIMAX model

    Science.gov (United States)

    Ling Sheng Chang, Albert; Ramba, Haya; Mohd. Jaaffar, Ahmad Kamil; Kim Phin, Chong; Chong Mun, Ho

    2016-06-01

    Cocoa black pod disease is one of the major diseases affecting cocoa production in Malaysia and around the world. Studies have shown that climate variables influence cocoa black pod disease incidence, and it is important to quantify the black pod disease variation due to the effect of climate variables. Time series analysis, especially the autoregressive integrated moving average (ARIMA) model, has been widely used in economic studies and can be used to quantify the effect of climate variables on black pod incidence so as to forecast the right time to control the incidence. However, an ARIMA model does not capture some turning points in cocoa black pod incidence. In order to improve forecasting performance, other explanatory variables such as climate variables should be included in the ARIMA model, making it an ARIMAX model. Therefore, this paper studies the effect of climate variables on cocoa black pod disease incidence using an ARIMAX model. The findings showed that an ARIMAX model using MA(1) and relative humidity at lag 7 days, RH(t−7), gave a better R-squared value than the ARIMA model using MA(1); this model could be used to forecast black pod incidence to assist farmers in the timely application of fungicide spraying and cultural practices to control the incidence.
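
    With statsmodels, the ARIMAX structure described here (an MA(1) error process plus lagged relative humidity as an exogenous regressor) can be fitted in a few lines; the series names and the 7-day lag come from the abstract, everything else is an illustrative sketch.

        import pandas as pd
        from statsmodels.tsa.statespace.sarimax import SARIMAX

        def fit_arimax(incidence: pd.Series, rh: pd.Series, humidity_lag=7):
            """ARIMAX with MA(1) errors and relative humidity at lag 7 days
            (RH_{t-7}) as the exogenous climate variable."""
            exog = rh.shift(humidity_lag).dropna()     # lagged humidity regressor
            endog = incidence.loc[exog.index]          # align the two series
            return SARIMAX(endog, exog=exog, order=(0, 0, 1)).fit(disp=False)

        # res = fit_arimax(incidence, rh)
        # res.forecast(steps=7, exog=future_rh)        # requires future RH values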

  2. Incorporating Latent Variables into Discrete Choice Models - A Simultaneous Estimation Approach Using SEM Software

    Directory of Open Access Journals (Sweden)

    Dirk Temme

    2008-12-01

    Full Text Available Integrated choice and latent variable (ICLV) models represent a promising new class of models which merge classic choice models with the structural equation approach (SEM) for latent variables. Despite their conceptual appeal, applications of ICLV models in marketing remain rare. We extend previous ICLV applications by first estimating a multinomial choice model and, second, by estimating hierarchical relations between latent variables. An empirical study on travel mode choice clearly demonstrates the value of ICLV models to enhance the understanding of choice processes. In addition to the usually studied directly observable variables such as travel time, we show how abstract motivations such as power and hedonism as well as attitudes such as a desire for flexibility impact on travel mode choice. Furthermore, we show that it is possible to estimate such a complex ICLV model with the widely available structural equation modeling package Mplus. This finding is likely to encourage more widespread application of this appealing model class in the marketing field.

  3. a modified intervention model for gross domestic product variable

    African Journals Online (AJOL)

    observations on a variable that have been measured at ... assumption that successive values in the data file ... these interventions, one may try to evaluate the effect of ... generalized series by comparing the distinct periods. A ... the process of checking for adequacy of the model based .... As a result, the model's forecast will.

  4. Assessing the accuracy and stability of variable selection methods for random forest modeling in ecology.

    Science.gov (United States)

    Fox, Eric W; Hill, Ryan A; Leibowitz, Scott G; Olsen, Anthony R; Thornbrugh, Darren J; Weber, Marc H

    2017-07-01

    Random forest (RF) modeling has emerged as an important statistical learning method in ecology due to its exceptional predictive performance. However, for large and complex ecological data sets, there is limited guidance on variable selection methods for RF modeling. Typically, either a preselected set of predictor variables are used or stepwise procedures are employed which iteratively remove variables according to their importance measures. This paper investigates the application of variable selection methods to RF models for predicting probable biological stream condition. Our motivating data set consists of the good/poor condition of n = 1365 stream survey sites from the 2008/2009 National Rivers and Stream Assessment, and a large set (p = 212) of landscape features from the StreamCat data set as potential predictors. We compare two types of RF models: a full variable set model with all 212 predictors and a reduced variable set model selected using a backward elimination approach. We assess model accuracy using RF's internal out-of-bag estimate, and a cross-validation procedure with validation folds external to the variable selection process. We also assess the stability of the spatial predictions generated by the RF models to changes in the number of predictors and argue that model selection needs to consider both accuracy and stability. The results suggest that RF modeling is robust to the inclusion of many variables of moderate to low importance. We found no substantial improvement in cross-validated accuracy as a result of variable reduction. Moreover, the backward elimination procedure tended to select too few variables and exhibited numerous issues such as upwardly biased out-of-bag accuracy estimates and instabilities in the spatial predictions. We use simulations to further support and generalize results from the analysis of real data. A main purpose of this work is to elucidate issues of model selection bias and instability to ecologists interested in
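
    The backward elimination scheme under discussion, and the bias the authors warn about, can be sketched as follows (a hedged illustration, not the paper's implementation): variables are dropped in batches by RF importance, and any accuracy used to pick the final subset must come from cross-validation folds external to this loop, since the OOB scores recorded inside it are upwardly biased.

        import numpy as np
        from sklearn.ensemble import RandomForestClassifier

        def rf_backward_elimination(X, y, n_min=10, drop_frac=0.2):
            """Iteratively drop the least-important fraction of predictors,
            recording each retained subset and its (optimistic) OOB score."""
            keep, path = np.arange(X.shape[1]), []
            while keep.size > n_min:
                rf = RandomForestClassifier(n_estimators=500, oob_score=True,
                                            random_state=0).fit(X[:, keep], y)
                path.append((keep.copy(), rf.oob_score_))
                order = np.argsort(rf.feature_importances_)   # least important first
                keep = np.delete(keep, order[:max(1, int(drop_frac * keep.size))])
            return path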

  5. Changes in Southern Hemisphere circulation variability in climate change modelling experiments

    International Nuclear Information System (INIS)

    Grainger, Simon; Frederiksen, Carsten; Zheng, Xiaogu

    2007-01-01

    Full text: The seasonal mean of a climate variable can be considered as a statistical random variable, consisting of a signal and a noise component (Madden 1976). The noise component consists of internal intraseasonal variability, and is not predictable on time-scales of a season or more ahead. The signal consists of slowly varying external and internal variability, and is potentially predictable on seasonal time-scales. The method of Zheng and Frederiksen (2004) has been applied to monthly time series of 500hPa geopotential height from models submitted to the Coupled Model Intercomparison Project (CMIP3) experiment to obtain covariance matrices of the intraseasonal and slow components of covariability for summer and winter. The Empirical Orthogonal Functions (EOFs) of the intraseasonal and slow covariance matrices for the second half of the 20th century are compared with those observed by Frederiksen and Zheng (2007). The leading EOF in summer and winter for both the intraseasonal and slow components of covariability is the Southern Annular Mode (see, e.g., Kiladis and Mo 1998). This is generally reproduced by the CMIP3 models, although with different variance amounts. The observed secondary intraseasonal covariability modes of wave-4 patterns in summer and wave-3 or blocking in winter are also generally seen in the models, although the actual spatial pattern is different. For the slow covariability, the models are less successful in reproducing the two observed ENSO modes, with generally only one of them being represented among the leading EOFs. However, most models reproduce the observed South Pacific wave pattern. The intraseasonal and slow covariance matrices of 500hPa geopotential height under three climate change scenarios are also analysed and compared with those found for the second half of the 20th century. Through aggregating the results from a number of CMIP3 models, a consensus estimate of the changes in Southern Hemisphere variability, and their

  6. Modeling key processes causing climate change and variability

    Energy Technology Data Exchange (ETDEWEB)

    Henriksson, S.

    2013-09-01

    Greenhouse gas warming, internal climate variability and aerosol climate effects are studied, and the importance of understanding these key processes and being able to separate their influence on the climate is discussed. The aerosol-climate model ECHAM5-HAM and the COSMOS millennium model, consisting of atmospheric, ocean, carbon cycle and land-use models, are applied and the results compared to measurements. Topics in focus are climate sensitivity, quasiperiodic variability with a period of 50-80 years and variability at other timescales, climate effects due to aerosols over India, and climate effects of northern hemisphere mid- and high-latitude volcanic eruptions. The main findings of this work are (1) pointing out the remaining challenges in reducing climate sensitivity uncertainty from observational evidence, (2) estimates for the amplitude of a 50-80 year quasiperiodic oscillation in global mean temperature ranging from 0.03 K to 0.17 K and for its phase progression, as well as the synchronising effect of external forcing, (3) identifying a power-law shape S(f) ∝ f^(−α) for the spectrum of global mean temperature, with α ≈ 0.8 between multidecadal and El Niño timescales and a smaller exponent in modelled climate without external forcing, (4) separating aerosol properties and climate effects in India by season and location, (5) the more efficient dispersion of secondary sulfate aerosols than primary carbonaceous aerosols in the simulations, (6) an increase in monsoon rainfall in northern India due to aerosol light absorption and a probably larger decrease due to aerosol dimming effects, and (7) an estimate of mean maximum cooling of 0.19 K due to larger northern hemisphere mid- and high-latitude volcanic eruptions. The results could be applied or useful in better isolating the human-caused climate change signal, in studying the processes further and in more detail, in decadal climate prediction, in model evaluation and in emission policy

  7. Latent variable models are network models.

    Science.gov (United States)

    Molenaar, Peter C M

    2010-06-01

    Cramer et al. present an original and interesting network perspective on comorbidity and contrast this perspective with a more traditional interpretation of comorbidity in terms of latent variable theory. My commentary focuses on the relationship between the two perspectives; that is, it aims to qualify the presumed contrast between interpretations in terms of networks and latent variables.

  8. Glucose variability negatively impacts long-term functional outcome in patients with traumatic brain injury.

    Science.gov (United States)

    Matsushima, Kazuhide; Peng, Monica; Velasco, Carlos; Schaefer, Eric; Diaz-Arrastia, Ramon; Frankel, Heidi

    2012-04-01

    Significant glycemic excursions (so-called glucose variability) affect the outcome of generic critically ill patients but have not been well studied in patients with traumatic brain injury (TBI). The purpose of this study was to evaluate the impact of glucose variability on the long-term functional outcome of patients with TBI. A noncomputerized tight glucose control protocol was used in our intensivist-model surgical intensive care unit. The relationship between glucose variability and long-term (a median of 6 months after injury) functional outcome, defined by the extended Glasgow Outcome Scale (GOSE), was analyzed using ordinal logistic regression models. Glucose variability was defined by the SD and by the percentage of excursion (POE) from the preset glucose range. A total of 109 patients with TBI under tight glucose control had long-term GOSE evaluated. In univariable analysis, there was a significant association between a lower GOSE score and higher mean glucose, higher SD, POE more than 60, POE 80 to 150, and a single episode of glucose less than 60 mg/dL, but not POE 80 to 110. After adjusting for possible confounding variables in multivariable ordinal logistic regression models, higher SD, POE more than 60, POE 80 to 150, and a single episode of glucose less than 60 mg/dL were significantly associated with a lower GOSE score. Glucose variability was significantly associated with poorer long-term functional outcome in patients with TBI as measured by the GOSE score. Well-designed protocols to minimize glucose variability may be key to improving long-term functional outcome. Copyright © 2012 Elsevier Inc. All rights reserved.
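
    The ordinal logistic (proportional odds) models referred to here have the generic form below, with GOSE cutpoints j = 1, ..., 7 and glucose variability entering through SD or POE alongside confounders z (a standard restatement, not the paper's exact specification):

        \text{logit}\, P(\text{GOSE} \le j \mid x) = \theta_j + \beta_1\, \text{SD} + \beta_2\, \text{POE} + \gamma' z,

    so a positive estimated β for SD or POE means that higher glucose variability increases the odds of a lower (worse) GOSE score across all cutpoints simultaneously.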

  9. a Latent Variable Path Analysis Model of Secondary Physics Enrollments in New York State.

    Science.gov (United States)

    Sobolewski, Stanley John

    The Percentage of Enrollment in Physics (PEP) at the secondary level nationally has been approximately 20% for the past few decades. For a more scientifically literate citizenry, as well as specialists to continue scientific research and development, it is desirable that more students enroll in physics. Some of the predictor variables for physics enrollment and physics achievement that have been identified previously include a community's socioeconomic status, the availability of physics, the sex of the student, the curriculum, as well as teacher and student data. This study isolated and identified predictor variables for the PEP of secondary schools in New York. Data gathered by the State Education Department for the 1990-1991 school year were used. The sources of these data included surveys completed by teachers and administrators on student characteristics and school facilities. A data analysis similar to that done by Bryant (1974) was conducted to determine whether the relationships between a set of predictor variables related to physics enrollment had changed in the past 20 years. Variables which were isolated included: community, facilities, teacher experience, number and type of science courses, school size and school science facilities. When these variables were isolated, latent variable path diagrams were proposed and verified by the Linear Structural Relations computer modeling program (LISREL). These diagrams differed from those developed by Bryant in that more manifest variables were used, including achievement scores in the form of Regents exam results. Two criterion variables were used: the percentage of students enrolled in physics (PEP) and the percentage of enrolled students passing the Regents physics exam (PPP). The first model treated school and community level variables as exogenous, while the second model treated only the community level variables as exogenous. The goodness-of-fit indices for the models were 0.77 for the first model and 0.83 for the second

  10. Modular Software-Defined Radio

    Directory of Open Access Journals (Sweden)

    Rhiemeier Arnd-Ragnar

    2005-01-01

    Full Text Available In view of the technical and commercial boundary conditions for software-defined radio (SDR), it is suggestive to reconsider the concept anew from an unconventional point of view. The organizational principles of signal processing (rather than the signal processing algorithms themselves) are the main focus of this work on modular software-defined radio. Modularity and flexibility are just two key characteristics of the SDR environment which extend smoothly into the modeling of hardware and software. In particular, the proposed model of signal processing software includes irregular, connected, directed, acyclic graphs with random node weights and random edges. Several approaches for mapping such software to a given hardware are discussed. Taking into account previous findings as well as new results from system simulations presented here, the paper finally concludes with the utility of pipelining as a general design guideline for modular software-defined radio.

  11. Modeling and performance analysis for composite network–compute service provisioning in software-defined cloud environments

    Directory of Open Access Journals (Sweden)

    Qiang Duan

    2015-08-01

    Full Text Available The crucial role of networking in Cloud computing calls for a holistic vision of both networking and computing systems that leads to composite network–compute service provisioning. Software-Defined Network (SDN) is a fundamental advancement in networking that enables network programmability. SDN and software-defined compute/storage systems form a Software-Defined Cloud Environment (SDCE) that may greatly facilitate composite network–compute service provisioning to Cloud users. Therefore, networking and computing systems need to be modeled and analyzed as composite service provisioning systems in order to obtain a thorough understanding of service performance in SDCEs. In this paper, a novel approach for modeling composite network–compute service capabilities and a technique for evaluating composite network–compute service performance are developed. The analytic method proposed in this paper is general and agnostic to service implementation technologies; thus it is applicable to a wide variety of network–compute services in SDCEs. The results obtained in this paper provide useful guidelines for federated control and management of networking and computing resources to achieve Cloud service performance guarantees.

  12. The Spiral-Interactive Program Evaluation Model.

    Science.gov (United States)

    Khaleel, Ibrahim Adamu

    1988-01-01

    Describes the spiral interactive program evaluation model, which is designed to evaluate vocational-technical education programs in secondary schools in Nigeria. Program evaluation is defined; utility oriented and process oriented models for evaluation are described; and internal and external evaluative factors and variables that define each…

  13. Solving quantum optimal control problems using Clebsch variables and Lin constraints

    Science.gov (United States)

    Delgado-Téllez, M.; Ibort, A.; Rodríguez de la Peña, T.

    2018-01-01

    Clebsch variables (and Lin constraints) are applied to the study of a class of optimal control problems for affine-controlled quantum systems. The optimal control problem will be modelled with controls defined on an auxiliary space where the dynamical group of the system acts freely. The reciprocity between the two theories (the classical theory defined by the objective functional and the quantum system) is established by using a suitable version of Lagrange's multiplier theorem and a geometrical interpretation of the constraints of the system as defining a subspace of horizontal curves in an associated bundle. It is shown how the solutions of the variational problem defined by the objective functional determine solutions of the quantum problem. Then a new way of obtaining explicit solutions for a family of optimal control problems for affine-controlled quantum systems (finite or infinite dimensional) is obtained. One of its main advantages is that the use of Clebsch variables allows such solutions to be computed from solutions of invariant problems that can often be computed explicitly. This procedure can be presented as an algorithm that can be applied to a large class of systems. Finally, some simple examples illustrating the main features of the theory will be discussed: spin control, a simple quantum Hamiltonian with an ‘Elroy beanie’ type classical model, and a controlled one-dimensional quantum harmonic oscillator.

  14. Variability of concrete properties: experimental characterisation and probabilistic modelling for calcium leaching

    International Nuclear Information System (INIS)

    De Larrard, Th.

    2010-09-01

    Evaluating structures durability requires taking into account the variability of material properties. The thesis has two main aspects: on the one hand, an experimental campaign aimed at quantifying the variability of many indicators of concrete behaviour; on the other hand, a simple numerical model for calcium leaching is developed in order to implement probabilistic methods so as to estimate the lifetime of structures such as those related to radioactive waste disposal. The experimental campaign consisted in following up two real building sites, and quantifying the variability of these indicators, studying their correlation, and characterising the random fields variability for the considered variables (especially the correlation length). To draw any conclusion from the accelerated leaching tests with ammonium nitrate by overcoming the effects of temperature, an inverse analysis tool based on the theory of artificial neural networks was developed. Simple numerical tools are presented to investigate the propagation of variability in durability issues, quantify the influence of this variability on the lifespan of structures and explain the variability of the input parameters of the numerical model and the physical measurable quantities of the material. (author)

  15. The Integration of Continuous and Discrete Latent Variable Models: Potential Problems and Promising Opportunities

    Science.gov (United States)

    Bauer, Daniel J.; Curran, Patrick J.

    2004-01-01

    Structural equation mixture modeling (SEMM) integrates continuous and discrete latent variable models. Drawing on prior research on the relationships between continuous and discrete latent variable models, the authors identify 3 conditions that may lead to the estimation of spurious latent classes in SEMM: misspecification of the structural model,…

  16. Environmental variability and its relationship to site index in Mediterranean maritime pine

    Energy Technology Data Exchange (ETDEWEB)

    Bravo-Oviedo, A.; Roig, S.; Bravo, F.; Montero, G.; Rio, M. del

    2011-07-01

    Environmental variability and site productivity relationships, estimated by means of soil-site equations, are considered a milestone in forest management decision making. The adequacy of silviculture systems is related to tree response to environmental conditions. The objectives of this paper are to study climatic and edaphic variability in Mediterranean maritime pine (Pinus pinaster) forests in Spain, and the practical use of such variability in determining forest productivity by means of site index estimation. Principal component analysis was used to describe environmental conditions and patterns. Site index predictive models were fitted using partial least squares and, parsimoniously, by ordinary least squares. Climatic variables along with parent material defined an ecological regionalization from warm and humid to cold and dry sites. Results showed that temperature and precipitation in autumn and winter, along with a longitudinal gradient, define extreme site qualities. The best qualities are located in warm and humid sites whereas the poorest ones are found in cold and dry regions. Site index values are poorly explained by soil properties. However, clay content in the first mineral horizon improved the soil-site model considerably. Climate is the main driver of productivity of Mediterranean maritime pine at a broad scale. Site index differences within a homogeneous climatic region are associated with soil properties. (Author) 47 refs.

  17. Rapid Estimation Method for State of Charge of Lithium-Ion Battery Based on Fractional Continual Variable Order Model

    Directory of Open Access Journals (Sweden)

    Xin Lu

    2018-03-01

    Full Text Available In recent years, the fractional order model has been employed for state of charge (SOC) estimation, the non-integer differentiation order being expressed as a function of recursive factors defining the fractality of charge distribution on porous electrodes. The battery SOC affects the fractal dimension of the charge distribution, and therefore the order of the fractional order model varies with the SOC under the same conditions. This paper proposes a new method to estimate the SOC. A fractional continuous variable order model is used to characterize the fractal morphology of the charge distribution. The order identification results showed that there is a stable monotonic relationship between the fractional order and the SOC once the battery's internal electrochemical reaction reaches equilibrium. This feature makes the proposed model particularly suitable for SOC estimation when the battery is in the resting state. Moreover, a fast iterative method based on the proposed model is introduced for SOC estimation. The experimental results showed that the proposed iterative method can quickly estimate the SOC in a few iterations while maintaining high estimation accuracy.
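
    For context, a variable order q(t) is usually introduced through a Grünwald-Letnikov-type construction, in which the constant order of the classical definition is simply allowed to depend on time (one common convention, which may differ in detail from the paper's):

        D^{q(t)} f(t) = \lim_{h \to 0} h^{-q(t)} \sum_{j=0}^{\lfloor t/h \rfloor} (-1)^j \binom{q(t)}{j} f(t - jh),

    which also yields the discrete iteration used in estimation when h is fixed at the sampling interval. Tying q to the SOC through the monotonic relationship identified above is what turns order identification into a SOC estimator.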

  18. Modeling Source Water TOC Using Hydroclimate Variables and Local Polynomial Regression.

    Science.gov (United States)

    Samson, Carleigh C; Rajagopalan, Balaji; Summers, R Scott

    2016-04-19

    To control disinfection byproduct (DBP) formation in drinking water, an understanding of the source water total organic carbon (TOC) concentration variability can be critical. Previously, TOC concentrations in water treatment plant source waters have been modeled using streamflow data. However, the lack of streamflow data or unimpaired flow scenarios makes it difficult to model TOC. In addition, TOC variability under climate change further exacerbates the problem. Here we propose a modeling approach based on local polynomial regression that uses climate (e.g., temperature) and land surface (e.g., soil moisture) variables as predictors of TOC concentration, obviating the need for streamflow. The local polynomial approach has the ability to capture non-Gaussian and nonlinear features that might be present in the relationships. The utility of the methodology is demonstrated using source water quality and climate data at three case study locations with surface source waters including river and reservoir sources. The models show good predictive skill in general at these locations, with lower skill at locations with the most anthropogenic influence in their streams. Source water TOC predictive models can provide water treatment utilities with important information for making treatment decisions for DBP regulation compliance under future climate scenarios.
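
    As a minimal illustration of local regression for this task, the univariate LOWESS smoother in statsmodels can relate TOC to a single climate predictor; the paper's approach is multivariate local polynomial regression, so this is a simplified stand-in with hypothetical variable names.

        import statsmodels.api as sm

        # temp: array of temperatures; toc: matching TOC concentrations (mg/L)
        def lowess_toc(temp, toc, frac=0.4):
            """Locally weighted regression of TOC on temperature; 'frac' sets
            the neighbourhood span of each local fit."""
            return sm.nonparametric.lowess(toc, temp, frac=frac, return_sorted=True)

        # fitted = lowess_toc(temp, toc)   # column 0: sorted temp, column 1: fitted TOC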

  19. Calibration of a user-defined mine blast model in LS-DYNA and comparison with ALE simulations

    NARCIS (Netherlands)

    Verreault, J.; Leerdam, P.J.C.; Weerheijm, J.

    2016-01-01

    The calibration of a user-defined blast model implemented in LS-DYNA is presented using full-scale test rig experiments, partly according to the NATO STANAG 4569 AEP-55 Volume 2 specifications where the charge weight varies between 6 kg and 10 kg and the burial depth is 100 mm and deeper. The model

  20. Linking Inflammation, Cardiorespiratory Variability, and Neural Control in Acute Inflammation via Computational Modeling.

    Science.gov (United States)

    Dick, Thomas E; Molkov, Yaroslav I; Nieman, Gary; Hsieh, Yee-Hsee; Jacono, Frank J; Doyle, John; Scheff, Jeremy D; Calvano, Steve E; Androulakis, Ioannis P; An, Gary; Vodovotz, Yoram

    2012-01-01

    Acute inflammation leads to organ failure by engaging catastrophic feedback loops in which stressed tissue evokes an inflammatory response and, in turn, inflammation damages tissue. Manifestations of this maladaptive inflammatory response include cardio-respiratory dysfunction that may be reflected in reduced heart rate and ventilatory pattern variabilities. We have developed signal-processing algorithms that quantify non-linear deterministic characteristics of variability in biologic signals. Now, coalescing under the aegis of the NIH Computational Biology Program and the Society for Complexity in Acute Illness, two research teams performed iterative experiments and computational modeling on inflammation and cardio-pulmonary dysfunction in sepsis as well as on neural control of respiration and ventilatory pattern variability. These teams, with additional collaborators, have recently formed a multi-institutional, interdisciplinary consortium, whose goal is to delineate the fundamental interrelationship between the inflammatory response and physiologic variability. Multi-scale mathematical modeling and complementary physiological experiments will provide insight into autonomic neural mechanisms that may modulate the inflammatory response to sepsis and simultaneously reduce heart rate and ventilatory pattern variabilities associated with sepsis. This approach integrates computational models of neural control of breathing and cardio-respiratory coupling with models that combine inflammation, cardiovascular function, and heart rate variability. The resulting integrated model will provide mechanistic explanations for the phenomena of respiratory sinus-arrhythmia and cardio-ventilatory coupling observed under normal conditions, and the loss of these properties during sepsis. This approach holds the potential of modeling cross-scale physiological interactions to improve both basic knowledge and clinical management of acute inflammatory diseases such as sepsis and trauma.

  1. Environmental drivers defining linkages among life-history traits: mechanistic insights from a semiterrestrial amphipod subjected to macroscale gradients.

    Science.gov (United States)

    Gómez, Julio; Barboza, Francisco R; Defeo, Omar

    2013-10-01

    Determining the existence of interconnected responses among life-history traits and identifying the underlying environmental drivers are recognized as key goals for understanding the basis of phenotypic variability. We studied potentially interconnected responses among senescence, fecundity, embryo size, weight of brooding females, size at maturity and sex ratio in a semiterrestrial amphipod affected by macroscale gradients in beach morphodynamics and salinity. To this end, multiple modelling processes based on generalized additive mixed models were used to deal with the spatio-temporal structure of the data obtained at 10 beaches during 22 months. Salinity was the only nexus among life-history traits, suggesting that this physiological stressor influences the energy balance of organisms. Different salinity scenarios determined shifts in the weight of brooding females and size at maturity, with consequences for the number and size of embryos, which in turn affected sex determination and sex ratio at the population level. Our work highlights the importance of analysing field data to find the variables and potential mechanisms that define concerted responses among traits, and thereby life-history strategies.
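
    A rough sketch of the central modelling ingredient, assuming synthetic data: a Poisson generalized additive model with a smooth salinity term for a count trait such as fecundity. statsmodels' GLMGam fits the smooth; the random spatio-temporal effects of the full GAMMs are omitted here.

```python
# GAM with a smooth salinity effect on fecundity (synthetic data).
import numpy as np
import pandas as pd
import statsmodels.api as sm
from statsmodels.gam.api import GLMGam, BSplines

rng = np.random.default_rng(2)
df = pd.DataFrame({'salinity': rng.uniform(5.0, 35.0, 300)})
df['fecundity'] = rng.poisson(np.exp(1.5 + np.sin(df['salinity'] / 6.0)))

splines = BSplines(df[['salinity']], df=[8], degree=[3])  # smooth salinity term
gam = GLMGam.from_formula('fecundity ~ 1', data=df, smoother=splines,
                          family=sm.families.Poisson())
print(gam.fit().summary())
```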

  2. An Atmospheric Variability Model for Venus Aerobraking Missions

    Science.gov (United States)

    Tolson, Robert T.; Prince, Jill L. H.; Konopliv, Alexander A.

    2013-01-01

    Aerobraking has proven to be an enabling technology for planetary missions to Mars and has been proposed to enable low cost missions to Venus. Aerobraking saves a significant amount of propulsion fuel mass by exploiting atmospheric drag to reduce the eccentricity of the initial orbit. The solar arrays have been used as the primary drag surface and only minor modifications have been made in the vehicle design to accommodate the relatively modest aerothermal loads. However, if atmospheric density is highly variable from orbit to orbit, the mission must either accept higher aerothermal risk, a slower pace for aerobraking, or a tighter corridor likely with increased propulsive cost. Hence, knowledge of atmospheric variability is of great interest for the design of aerobraking missions. The first planetary aerobraking was at Venus during the Magellan mission. After the primary Magellan science mission was completed, aerobraking was used to provide a more circular orbit to enhance gravity field recovery. Magellan aerobraking took place between local solar times of 1100 and 1800 hrs, and it was found that the Venusian atmospheric density during the aerobraking phase had less than 10% 1-sigma orbit-to-orbit variability. On the other hand, at some latitudes and seasons, Martian variability can be as high as 40% 1-sigma. From both the MGN and PVO missions it was known that the atmosphere, above aerobraking altitudes, showed greater variability at night, but this variability was never quantified in a systematic manner. This paper proposes a model for atmospheric variability that can be used for aerobraking mission design until more complete data sets become available.

  3. Appraisal and Reliability of Variable Engagement Model Prediction ...

    African Journals Online (AJOL)

    The variable engagement model based on the stress - crack opening displacement relationship and, which describes the behaviour of randomly oriented steel fibres composite subjected to uniaxial tension has been evaluated so as to determine the safety indices associated when the fibres are subjected to pullout and with ...

  4. Towards multi-resolution global climate modeling with ECHAM6-FESOM. Part II: climate variability

    Science.gov (United States)

    Rackow, T.; Goessling, H. F.; Jung, T.; Sidorenko, D.; Semmler, T.; Barbi, D.; Handorf, D.

    2018-04-01

    This study forms part II of two papers describing ECHAM6-FESOM, a newly established global climate model with a unique multi-resolution sea ice-ocean component. While part I deals with the model description and the mean climate state, here we examine the internal climate variability of the model under constant present-day (1990) conditions. We (1) assess the internal variations in the model in terms of objective variability performance indices, (2) analyze variations in global mean surface temperature and put them in context to variations in the observed record, with particular emphasis on the recent warming slowdown, (3) analyze and validate the most common atmospheric and oceanic variability patterns, (4) diagnose the potential predictability of various climate indices, and (5) put the multi-resolution approach to the test by comparing two setups that differ only in oceanic resolution in the equatorial belt, where one ocean mesh keeps the coarse 1° resolution applied in the adjacent open-ocean regions and the other mesh is gradually refined to 0.25°. Objective variability performance indices show that, in the considered setups, ECHAM6-FESOM performs overall favourably compared to five well-established climate models. Internal variations of the global mean surface temperature in the model are consistent with observed fluctuations and suggest that the recent warming slowdown can be explained as a once-in-one-hundred-years event caused by internal climate variability; periods of strong cooling in the model (`hiatus' analogs) are mainly associated with ENSO-related variability and to a lesser degree also to PDO shifts, with the AMO playing a minor role. Common atmospheric and oceanic variability patterns are simulated largely consistent with their real counterparts. Typical deficits also found in other models at similar resolutions remain, in particular too weak non-seasonal variability of SSTs over large parts of the ocean and episodic periods of almost absent

  5. Clinical prediction in defined populations: a simulation study investigating when and how to aggregate existing models

    Directory of Open Access Journals (Sweden)

    Glen P. Martin

    2017-01-01

    Background: Clinical prediction models (CPMs) are increasingly deployed to support healthcare decisions but they are derived inconsistently, in part due to limited data. An emerging alternative is to aggregate existing CPMs developed for similar settings and outcomes. This simulation study aimed to investigate the impact of between-population heterogeneity and sample size on aggregating existing CPMs in a defined population, compared with developing a model de novo. Methods: Simulations were designed to mimic a scenario in which multiple CPMs for a binary outcome had been derived in distinct, heterogeneous populations, with potentially different predictors available in each. We then generated a new ‘local’ population and compared the performance of CPMs developed for this population by aggregation, using stacked regression, principal component analysis or partial least squares, with redevelopment from scratch using backwards selection and penalised regression. Results: While redevelopment approaches resulted in models that were miscalibrated for local datasets of less than 500 observations, model aggregation methods were well calibrated across all simulation scenarios. When the size of the local data was less than 1000 observations and between-population heterogeneity was small, aggregating existing CPMs gave better discrimination and had the lowest mean square error in the predicted risks compared with deriving a new model. Conversely, given more than 1000 observations and significant between-population heterogeneity, redevelopment outperformed the aggregation approaches. In all other scenarios, both aggregation and de novo derivation resulted in similar predictive performance. Conclusion: This study demonstrates a pragmatic approach to contextualising CPMs to defined populations. When aiming to develop models in defined populations, modellers should consider existing CPMs, with aggregation approaches being a suitable modelling
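
    A minimal sketch of one of the aggregation methods named above, stacked regression: predictions from existing CPMs become the inputs of a logistic regression fitted on the local data. The data and the three "existing" CPMs below are synthetic stand-ins.

```python
# Stacked regression aggregation of three existing CPMs (synthetic).
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(3)
n = 400
lp = rng.normal(0.0, 1.0, n)                        # true linear predictor
y = rng.binomial(1, 1.0 / (1.0 + np.exp(-lp)))      # local binary outcome

def cpm(a, b, noise):                               # a miscalibrated existing CPM
    return 1.0 / (1.0 + np.exp(-(a * lp + b + rng.normal(0.0, noise, n))))

P = np.column_stack([cpm(0.8, 0.3, 0.5), cpm(1.2, -0.2, 0.5), cpm(0.5, 0.0, 0.5)])
logits = np.log(P / (1.0 - P))
stacker = LogisticRegression().fit(logits, y)       # weights recalibrate the CPMs
print(stacker.coef_, stacker.intercept_)
```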

  6. Parameterized combinatorial geometry modeling in Moritz

    International Nuclear Information System (INIS)

    Van Riper, K.A.

    2005-01-01

    We describe the use of named variables as surface and solid body coefficients in the Moritz geometry editing program. Variables can also be used as material numbers, cell densities, and transformation values. A variable is defined as a constant or an arithmetic combination of constants and other variables. A variable reference, such as in a surface coefficient, can be a single variable or an expression containing variables and constants. Moritz can read and write geometry models in MCNP and ITS ACCEPT format; support for other codes will be added. The geometry can be saved with either the variables in place, for modifying the models in Moritz, or with the variables evaluated for use in the transport codes. A program window shows a list of variables and provides fields for editing them. Surface coefficients and other values that use a variable reference are shown in a distinctive style on object property dialogs; associated buttons show fields for editing the reference. We discuss our use of variables in defining geometry models for shielding studies in PET clinics. When a model is parameterized through the use of variables, changes such as room dimensions, shielding layer widths, and cell compositions can be quickly achieved by changing a few numbers, without requiring knowledge of the input syntax for the transport code or the tedious and error-prone work of recalculating many surface or solid body coefficients. (author)
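
    The essential mechanism is a table of named variables plus expression evaluation when the model is exported. A hypothetical, language-agnostic sketch of that idea (not Moritz's actual implementation):

```python
# Named geometry variables resolved into surface coefficients at export time.
room_vars = {'room_w': 400.0, 'wall_t': 5.0}        # dimensions in cm

surfaces = {
    'inner_wall_x': 'room_w / 2',                   # references a variable
    'outer_wall_x': 'room_w / 2 + wall_t',          # arithmetic expression
}

def evaluate(expr, variables):
    # evaluation restricted to the variable table; no builtins available
    return eval(expr, {'__builtins__': {}}, variables)

for name, expr in surfaces.items():
    print(name, evaluate(expr, room_vars))

room_vars['wall_t'] = 10.0                          # thicken the shielding layer
print(evaluate(surfaces['outer_wall_x'], room_vars))  # geometry updates everywhere
```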

  7. Viscous cosmological models with a variable cosmological term ...

    African Journals Online (AJOL)

    Einstein's field equations for a Friedmann-Lamaitre Robertson-Walker universe filled with a dissipative fluid with a variable cosmological term L described by full Israel-Stewart theory are considered. General solutions to the field equations for the flat case have been obtained. The solution corresponds to the dust free model ...

  8. Automatic Welding Control Using a State Variable Model.

    Science.gov (United States)

    1979-06-01

    [OCR-garbled DTIC record] Naval Postgraduate School, Monterey, CA: Automatic Welding Control Using a State Variable Model, Jun 79. Legible fragment: traverse drive unit / joint path / fixed track (servomotor positioning); additional controls of heave (vertical), roll (angular rotation about the

  9. Sensitivity Modeling of On-chip Capacitances : Parasitics Extraction for Manufacturing Variability

    NARCIS (Netherlands)

    Bi, Y.

    2012-01-01

    With each new generation of IC process technologies, the impact of manufacturing variability is increasing. As such, design optimality is harder and harder to achieve and effective modeling tools and methods are needed to capture the effects of variability in such a way that it is understandable and

  10. Effect of land model ensemble versus coupled model ensemble on the simulation of precipitation climatology and variability

    Science.gov (United States)

    Wei, Jiangfeng; Dirmeyer, Paul A.; Yang, Zong-Liang; Chen, Haishan

    2017-10-01

    Through a series of model simulations with an atmospheric general circulation model coupled to three different land surface models, this study investigates the impacts of land model ensembles and coupled model ensemble on precipitation simulation. It is found that coupling an ensemble of land models to an atmospheric model has a very minor impact on the improvement of precipitation climatology and variability, but a simple ensemble average of the precipitation from three individually coupled land-atmosphere models produces better results, especially for precipitation variability. The generally weak impact of land processes on precipitation should be the main reason that the land model ensembles do not improve precipitation simulation. However, if there are big biases in the land surface model or land surface data set, correcting them could improve the simulated climate, especially for well-constrained regional climate simulations.

  11. A New Variable Selection Method Based on Mutual Information Maximization by Replacing Collinear Variables for Nonlinear Quantitative Structure-Property Relationship Models

    Energy Technology Data Exchange (ETDEWEB)

    Ghasemi, Jahan B.; Zolfonoun, Ehsan [Toosi University of Technology, Tehran (Iran, Islamic Republic of)

    2012-05-15

    Selection of the most informative molecular descriptors from the original data set is a key step for development of quantitative structure activity/property relationship models. Recently, mutual information (MI) has gained increasing attention in feature selection problems. This paper presents an effective mutual information-based feature selection approach, named mutual information maximization by replacing collinear variables (MIMRCV), for nonlinear quantitative structure-property relationship models. The proposed variable selection method was applied to three different QSPR datasets: soil degradation half-life of 47 organophosphorus pesticides, GC-MS retention times of 85 volatile organic compounds, and water-to-micellar cetyltrimethylammonium bromide partition coefficients of 62 organic compounds. The obtained results revealed that using MIMRCV as the feature selection method improves the predictive quality of the developed models compared to conventional MI-based variable selection algorithms.
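
    A rough sketch in the spirit of MIMRCV (not the authors' exact algorithm): rank descriptors by their mutual information with the property, then skip or replace candidates that are collinear with descriptors already chosen.

```python
# MI-ranked descriptor selection with a collinearity filter (illustrative).
import numpy as np
from sklearn.feature_selection import mutual_info_regression

def select_descriptors(X, y, k=5, corr_max=0.9):
    mi = mutual_info_regression(X, y, random_state=0)
    chosen = []
    for j in np.argsort(mi)[::-1]:                  # highest MI first
        ok = all(abs(np.corrcoef(X[:, j], X[:, c])[0, 1]) < corr_max
                 for c in chosen)
        if ok:
            chosen.append(j)                        # collinear candidates are skipped
        if len(chosen) == k:
            break
    return chosen

rng = np.random.default_rng(4)
X = rng.normal(size=(100, 20))
X[:, 1] = X[:, 0] + rng.normal(0.0, 0.01, 100)      # a collinear descriptor pair
y = X[:, 0] + np.sin(X[:, 2])
print(select_descriptors(X, y))                     # index 1 should be excluded
```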

  12. A New Variable Selection Method Based on Mutual Information Maximization by Replacing Collinear Variables for Nonlinear Quantitative Structure-Property Relationship Models

    International Nuclear Information System (INIS)

    Ghasemi, Jahan B.; Zolfonoun, Ehsan

    2012-01-01

    Selection of the most informative molecular descriptors from the original data set is a key step for development of quantitative structure activity/property relationship models. Recently, mutual information (MI) has gained increasing attention in feature selection problems. This paper presents an effective mutual information-based feature selection approach, named mutual information maximization by replacing collinear variables (MIMRCV), for nonlinear quantitative structure-property relationship models. The proposed variable selection method was applied to three different QSPR datasets: soil degradation half-life of 47 organophosphorus pesticides, GC-MS retention times of 85 volatile organic compounds, and water-to-micellar cetyltrimethylammonium bromide partition coefficients of 62 organic compounds. The obtained results revealed that using MIMRCV as the feature selection method improves the predictive quality of the developed models compared to conventional MI-based variable selection algorithms.

  13. Modelling the Spatial Isotope Variability of Precipitation in Syria

    Energy Technology Data Exchange (ETDEWEB)

    Kattan, Z.; Kattaa, B. [Department of Geology, Atomic Energy Commission of Syria (AECS), Damascus (Syrian Arab Republic)

    2013-07-15

    Attempts were made to model the spatial variability of environmental isotope ({sup 18}O, {sup 2}H and {sup 3}H) compositions of precipitation in Syria. Rainfall samples periodically collected on a monthly basis from 16 different stations were used for processing and demonstrating the spatial distributions of these isotopes, together with those of deuterium excess (d) values. Mathematically, the modelling process was based on applying simple polynomial models that take into consideration the effects of the major geographic factors (Lon.E., Lat.N., and altitude). The modelling results for the spatial distribution of stable isotopes ({sup 18}O and {sup 2}H) were generally good, as shown by the high correlation coefficients (R{sup 2} = 0.7-0.8) calculated between the observed and predicted values. In the case of the deuterium excess and tritium distributions, the results were only approximate (R{sup 2} = 0.5-0.6). Improving the simulation of spatial isotope variability probably requires the incorporation of other local meteorological factors, such as relative air humidity, precipitation amount and vapour pressure, which are supposed to play an important role in such an arid country. (author)
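
    A minimal sketch of the "simple polynomial model" idea: least-squares regression of an isotope composition on longitude, latitude and altitude. The station data and coefficients below are synthetic, and only first-order terms are included.

```python
# First-order polynomial model of d18O vs. geographic factors (synthetic).
import numpy as np

rng = np.random.default_rng(5)
lon = rng.uniform(35.5, 42.0, 16)                   # deg E, 16 stations
lat = rng.uniform(32.0, 37.0, 16)                   # deg N
alt = rng.uniform(0.0, 1500.0, 16)                  # m a.s.l.
d18o = -4.0 - 0.002 * alt - 0.3 * (lat - 32.0) + rng.normal(0.0, 0.3, 16)

A = np.column_stack([np.ones_like(lon), lon, lat, alt])  # append higher orders as needed
coef, *_ = np.linalg.lstsq(A, d18o, rcond=None)
pred = A @ coef
r2 = 1.0 - ((d18o - pred) ** 2).sum() / ((d18o - d18o.mean()) ** 2).sum()
print(coef, r2)                                     # fitted coefficients and R-squared
```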

  14. Circular Business Models: Defining a Concept and Framing an Emerging Research Field

    Directory of Open Access Journals (Sweden)

    Julia L. K. Nußholz

    2017-10-01

    To aid companies in transitioning towards a circular economy and adopting strategies such as reuse, repair, and remanufacturing, the concept of circular business models has been developed. Although the concept draws on contributions from various academic disciplines, and despite its increasingly frequent use, few scholars clearly define what a circular business model is. Understanding about what makes a business model circular is diverse, hampering the theoretical development and practical application of circular business models. This study aims to help frame the field of circular business model research, by clarifying the fundamentals of the concept from the perspectives of resource efficiency and business model innovation. Expanding on these findings, a review of how the concept is used in recent academic literature is provided. It shows that a coherent view is lacking on which resource efficiency strategies classify a business model as circular. This study clarifies which resource efficiency strategies can be deemed as relevant key strategies for circular business models, and suggests a new definition of the concept. With the definition grounded in analysis of the fundamentals in terms of resource efficiency and business models, the study contributes to theoretical advancement and effective implementation of circular business models.

  15. Simple model for crop photosynthesis in terms of weather variables ...

    African Journals Online (AJOL)

    A theoretical mathematical model for describing crop photosynthetic rate in terms of the weather variables and crop characteristics is proposed. The model utilizes a series of efficiency parameters, each of which reflects the fraction of possible photosynthetic rate permitted by the different weather elements or crop architecture.

  16. Model for expressing leaf photosynthesis in terms of weather variables

    African Journals Online (AJOL)

    A theoretical mathematical model for describing photosynthesis in individual leaves in terms of weather variables is proposed. The model utilizes a series of efficiency parameters, each of which reflects the fraction of potential photosynthetic rate permitted by the different environmental elements. These parameters are useful ...
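
    Neither truncated record gives the functional forms, but both describe the same multiplicative structure: a potential rate scaled by per-element efficiency factors in [0, 1]. The sketch below uses invented illustrative forms for those factors.

```python
# Multiplicative efficiency-parameter model of photosynthetic rate (illustrative).
import numpy as np

def photosynthesis(par, temp, vpd, p_max=30.0):
    """Gross rate (umol m-2 s-1); each factor is one weather element's efficiency."""
    f_light = 1.0 - np.exp(-par / 500.0)             # light response
    f_temp = np.exp(-((temp - 25.0) / 10.0) ** 2)    # optimum near 25 deg C
    f_water = np.clip(1.2 - 0.2 * vpd, 0.0, 1.0)     # vapour pressure deficit (kPa)
    return p_max * f_light * f_temp * f_water

print(photosynthesis(par=1200.0, temp=22.0, vpd=1.5))
```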

  17. A new model of wheezing severity in young children using the validated ISAAC wheezing module: A latent variable approach with validation in independent cohorts.

    Science.gov (United States)

    Brunwasser, Steven M; Gebretsadik, Tebeb; Gold, Diane R; Turi, Kedir N; Stone, Cosby A; Datta, Soma; Gern, James E; Hartert, Tina V

    2018-01-01

    The International Study of Asthma and Allergies in Children (ISAAC) Wheezing Module is commonly used to characterize pediatric asthma in epidemiological studies, including nearly all airway cohorts participating in the Environmental Influences on Child Health Outcomes (ECHO) consortium. However, there is no consensus model for operationalizing wheezing severity with this instrument in explanatory research studies. Severity is typically measured using coarsely-defined categorical variables, reducing power and potentially underestimating etiological associations. More precise measurement approaches could improve testing of etiological theories of wheezing illness. We evaluated a continuous latent variable model of pediatric wheezing severity based on four ISAAC Wheezing Module items. Analyses included subgroups of children from three independent cohorts whose parents reported past wheezing: infants ages 0-2 in the INSPIRE birth cohort study (Cohort 1; n = 657), 6-7-year-old North American children from Phase One of the ISAAC study (Cohort 2; n = 2,765), and 5-6-year-old children in the EHAAS birth cohort study (Cohort 3; n = 102). Models were estimated using structural equation modeling. In all cohorts, covariance patterns implied by the latent variable model were consistent with the observed data, as indicated by non-significant χ2 goodness of fit tests (no evidence of model misspecification). Cohort 1 analyses showed that the latent factor structure was stable across time points and child sexes. In both cohorts 1 and 3, the latent wheezing severity variable was prospectively associated with wheeze-related clinical outcomes, including physician asthma diagnosis, acute corticosteroid use, and wheeze-related outpatient medical visits when adjusting for confounders. We developed an easily applicable continuous latent variable model of pediatric wheezing severity based on items from the well-validated ISAAC Wheezing Module. This model prospectively associates with
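
    A minimal sketch of the single-factor measurement model using the semopy SEM package; the four item columns are hypothetical continuous stand-ins for the ISAAC wheezing items (which are categorical in the actual study).

```python
# One-factor latent variable model of wheezing severity (synthetic items).
import numpy as np
import pandas as pd
import semopy

rng = np.random.default_rng(6)
severity = rng.normal(size=500)                      # latent severity score
data = pd.DataFrame({
    f'item{i}': loading * severity + rng.normal(0.0, 0.6, 500)
    for i, loading in enumerate([0.8, 0.7, 0.9, 0.6], start=1)
})

desc = 'wheeze_severity =~ item1 + item2 + item3 + item4'
model = semopy.Model(desc)
model.fit(data)
print(model.inspect())                               # loadings and variances
```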

  18. User Defined Data in the New Analysis Model of the BaBar Experiment

    Energy Technology Data Exchange (ETDEWEB)

    De Nardo, G.

    2005-04-06

    The BaBar experiment has recently revised its Analysis Model. One of the key ingredients of BaBar's new Analysis Model is support for adding user defined data to the Event Store; such data can be the output of complex computations performed at an advanced stage of a physics analysis and are associated to analysis objects. In order to provide flexibility and extensibility with respect to object types, template generic programming has been adopted. In this way the model is non-intrusive with respect to the reconstruction and analysis objects it manages, requiring no changes to their interfaces and implementations. Technological details are hidden as much as possible from the user, providing a simple interface. In this paper we present some of the limitations of the old model and how they are addressed by the new Analysis Model.

  19. High-resolution regional climate model evaluation using variable-resolution CESM over California

    Science.gov (United States)

    Huang, X.; Rhoades, A.; Ullrich, P. A.; Zarzycki, C. M.

    2015-12-01

    Understanding the effect of climate change at regional scales remains a topic of intensive research. Though computational constraints remain a problem, high horizontal resolution is needed to represent topographic forcing, which is a significant driver of local climate variability. Although regional climate models (RCMs) have traditionally been used at these scales, variable-resolution global climate models (VRGCMs) have recently arisen as an alternative for studying regional weather and climate, allowing two-way interaction between these domains without the need for nudging. In this study, the recently developed variable-resolution option within the Community Earth System Model (CESM) is assessed for long-term regional climate modeling over California. Our variable-resolution simulations focus on relatively high resolutions for climate assessment, namely 28 km and 14 km regional resolution, which are much more typical for dynamically downscaled studies. For comparison with the more widely used RCM method, the Weather Research and Forecasting (WRF) model is used for simulations at 27 km and 9 km. All simulations use the AMIP (Atmospheric Model Intercomparison Project) protocols. The time period is from 1979-01-01 to 2005-12-31 (UTC), and year 1979 was discarded as spin-up time. The mean climatology across California's diverse climate zones, including temperature and precipitation, is analyzed and contrasted with the WRF model (as a traditional RCM), regional reanalysis, gridded observational datasets and uniform high-resolution CESM at 0.25 degree with the finite volume (FV) dynamical core. The results show that variable-resolution CESM is competitive in representing regional climatology on both annual and seasonal time scales. This assessment adds value to the use of VRGCMs for projecting climate change over the coming century and improves our understanding of both past and future regional climate related to fine

  20. Evaluating variability and uncertainty in radiological impact assessment using SYMBIOSE

    International Nuclear Information System (INIS)

    Simon-Cornu, M.; Beaugelin-Seiller, K.; Boyer, P.; Calmon, P.; Garcia-Sanchez, L.; Mourlon, C.; Nicoulaud, V.; Sy, M.; Gonze, M.A.

    2015-01-01

    SYMBIOSE is a modelling platform that accounts for variability and uncertainty in radiological impact assessments when simulating the environmental fate of radionuclides and assessing doses to human populations. The default database of SYMBIOSE is partly based on parameter values that are summarized within International Atomic Energy Agency (IAEA) documents. To characterize uncertainty in the transfer parameters, 331 Probability Distribution Functions (PDFs) were defined from the summary statistics provided within the IAEA documents (i.e. sample size, minimal and maximum values, arithmetic and geometric means, standard and geometric standard deviations) and are made available as spreadsheet files. The methods used to derive the PDFs from the summary statistics alone, without the complete data sets, are presented. Then, a simple case study illustrates the use of the database in a second-order Monte Carlo calculation, separating parametric uncertainty and inter-individual variability. - Highlights: • Parametric uncertainty in radioecology was derived from IAEA documents. • 331 Probability Distribution Functions were defined for transfer parameters. • Parametric uncertainty and inter-individual variability were propagated
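
    A simplified second-order Monte Carlo sketch of the approach: the outer loop samples parametric uncertainty (a transfer factor whose lognormal PDF might be built from a geometric mean and geometric standard deviation), the inner loop samples inter-individual variability. All numbers are invented.

```python
# Second-order Monte Carlo: uncertainty (outer) vs. variability (inner).
import numpy as np

rng = np.random.default_rng(7)
n_outer, n_inner = 200, 1000
p95_doses = []
for _ in range(n_outer):
    tf = rng.lognormal(np.log(0.02), np.log(2.0))          # uncertain transfer factor
    intake = rng.lognormal(np.log(0.5), 0.4, n_inner)      # variable intakes (kg/d)
    dose = tf * intake * 350.0                             # hypothetical Bq/kg source
    p95_doses.append(np.percentile(dose, 95))              # population 95th percentile

# spread of that percentile across outer draws reflects parametric uncertainty
print(np.percentile(p95_doses, [5, 50, 95]))
```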

  1. Shared Variable Oriented Parallel Precompiler for SPMD Model

    Institute of Scientific and Technical Information of China (English)

    1995-01-01

    At present, commercial parallel computer systems with distributed memory architecture are usually provided with parallel FORTRAN or parallel C compilers, which are just traditional sequential FORTRAN or C compilers expanded with communication statements. Programmers suffer from writing parallel programs with explicit communication statements. The Shared Variable Oriented Parallel Precompiler (SVOPP) proposed in this paper can automatically generate appropriate communication statements based on shared variables for the SPMD (Single Program Multiple Data) computation model, greatly easing parallel programming while retaining high communication efficiency. The core function of the parallel C precompiler has been successfully verified on a transputer-based parallel computer. Its prominent performance shows that SVOPP is probably a breakthrough in parallel programming technique.

  2. Efficient family-based model checking via variability abstractions

    DEFF Research Database (Denmark)

    Dimovski, Aleksandar; Al-Sibahi, Ahmad Salim; Brabrand, Claus

    2016-01-01

    Many software systems are variational: they can be configured to meet diverse sets of requirements. They can produce a (potentially huge) number of related systems, known as products or variants, by systematically reusing common parts. For variational models (variational systems or families of related systems), specialized family-based model checking algorithms allow efficient verification of multiple variants, simultaneously, in a single run. These algorithms, implemented in a tool Snip, scale much better than "the brute force" approach, where all individual systems are verified using ... with the abstract model checking of the concrete high-level variational model. This allows the use of Spin with all its accumulated optimizations for efficient verification of variational models without any knowledge about variability. We have implemented the transformations in a prototype tool, and we illustrate ...

  3. Continuous-variable protocol for oblivious transfer in the noisy-storage model

    DEFF Research Database (Denmark)

    Furrer, Fabian; Gehring, Tobias; Schaffner, Christian

    2018-01-01

    ... for oblivious transfer for optical continuous-variable systems, and prove its security in the noisy-storage model. This model allows us to establish security by sending more quantum signals than an attacker can reliably store during the protocol. The security proof is based on uncertainty relations, which we derive for continuous-variable systems and which differ from the ones used in quantum key distribution. We experimentally demonstrate in a proof-of-principle experiment the proposed oblivious transfer protocol for various channel losses by using entangled two-mode squeezed states measured with balanced ...

  4. Soil Cd, Cr, Cu, Ni, Pb and Zn sorption and retention models using SVM: Variable selection and competitive model.

    Science.gov (United States)

    González Costa, J J; Reigosa, M J; Matías, J M; Covelo, E F

    2017-09-01

    The aim of this study was to model the sorption and retention of Cd, Cr, Cu, Ni, Pb and Zn in soils. To that end, the sorption and retention of these metals were studied and the soil characterization was performed separately. Multiple stepwise regression was used to produce multivariate models with linear techniques and with support vector machines, all of which included 15 explanatory variables characterizing soils. When the R-squared values are plotted, two different groups are noticed. Cr, Cu and Pb sorption and retention show a higher R-squared, the most explanatory variables being humified organic matter, Al oxides and, in some cases, cation-exchange capacity (CEC). The other group of metals (Cd, Ni and Zn) shows a lower R-squared, and clays are the most explanatory variables, including the percentages of vermiculite and silt. In some cases, quartz, plagioclase or hematite percentages also show some explanatory capacity. Support Vector Machine (SVM) regression shows that the different models are not as regular as in multiple regression in terms of the number of variables, the regression for nickel adsorption being the one with the highest number of variables in its optimal model. On the other hand, there are cases where the most explanatory variables are the same for two metals, as happens with Cd and Cr adsorption; a similar adsorption mechanism is thus postulated. These patterns of the introduction of variables in the model allow us to create explainability sequences. Those most similar to the selectivity sequences obtained by Covelo (2005) are Mn oxides in multiple regression and cation-exchange capacity in SVM. Among all the variables, the only one that is explanatory for all the metals after applying the maximum parsimony principle is the percentage of sand in the retention process. In the competitive model arising from the aforementioned sequences, the most intense competitiveness for the adsorption and retention of different metals appears between
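
    A rough sketch of the SVM side of such a study, assuming synthetic soil data: support vector regression wrapped in forward variable selection over a pool of soil properties (scikit-learn tooling, not the authors' exact procedure).

```python
# SVR with forward selection of soil variables (synthetic example).
import numpy as np
from sklearn.feature_selection import SequentialFeatureSelector
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVR

rng = np.random.default_rng(8)
X = rng.normal(size=(60, 15))          # 15 soil properties (OM, oxides, CEC, clays...)
y = 2.0 * X[:, 0] + X[:, 3] ** 2 + rng.normal(0.0, 0.3, 60)   # e.g. Cu sorption

svr = make_pipeline(StandardScaler(), SVR(kernel='rbf', C=10.0))
selector = SequentialFeatureSelector(svr, n_features_to_select=4,
                                     direction='forward', cv=5)
selector.fit(X, y)
print(np.flatnonzero(selector.get_support()))   # indices of the selected variables
```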

  5. Variable-coefficient higher-order nonlinear Schroedinger model in optical fibers: Variable-coefficient bilinear form, Baecklund transformation, brightons and symbolic computation

    International Nuclear Information System (INIS)

    Tian Bo; Gao Yitian; Zhu Hongwu

    2007-01-01

    Symbolically investigated in this Letter is a variable-coefficient higher-order nonlinear Schroedinger (vcHNLS) model for ultrafast signal-routing, fiber laser systems and optical communication systems with distributed dispersion and nonlinearity management. Of physical and optical interest, with the bilinear method extended, the vcHNLS model is transformed into a variable-coefficient bilinear form, and then an auto-Baecklund transformation is constructed. Constraints on the coefficient functions are analyzed. Potentially observable with future optical-fiber experiments, variable-coefficient brightons are illustrated. Relevant properties and features are discussed as well. The Baecklund transformation and other results of this Letter will be of certain value to the studies on inhomogeneous fiber media, core of dispersion-managed brightons, fiber amplifiers, laser systems and optical communication links with distributed dispersion and nonlinearity management

  6. Two-step variable selection in quantile regression models

    Directory of Open Access Journals (Sweden)

    FAN Yali

    2015-06-01

    We propose a two-step variable selection procedure for high dimensional quantile regressions, in which the dimension of the covariates, p_n, is much larger than the sample size n. In the first step, we apply an ℓ1 penalty, and we demonstrate that the first-step penalized estimator with the LASSO penalty can reduce the model from ultra-high dimensional to a model whose size has the same order as that of the true model, and that the selected model can cover the true model. The second step excludes the remaining irrelevant covariates by applying the adaptive LASSO penalty to the reduced model obtained from the first step. Under some regularity conditions, we show that our procedure enjoys model selection consistency. We conduct a simulation study and a real data analysis to evaluate the finite sample performance of the proposed approach.
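
    A minimal sketch of the two-step idea for mean regression (the paper's setting is quantile regression, which sklearn's LASSO does not cover): an ℓ1 screening step followed by an adaptive LASSO implemented via column reweighting.

```python
# Two-step selection: LASSO screening, then adaptive LASSO on the survivors.
import numpy as np
from sklearn.linear_model import LassoCV

rng = np.random.default_rng(9)
n, p = 100, 500                                   # p >> n
X = rng.normal(size=(n, p))
beta = np.zeros(p)
beta[:3] = [2.0, -1.5, 1.0]                       # sparse true model
y = X @ beta + rng.normal(0.0, 0.5, n)

lasso = LassoCV(cv=5).fit(X, y)                   # step 1: screen the full model
keep = np.flatnonzero(lasso.coef_)

# step 2: scaling columns by |initial coef| is equivalent to the adaptive
# (weighted) L1 penalty with weights 1/|initial coef|
w = np.abs(lasso.coef_[keep])
ada = LassoCV(cv=5).fit(X[:, keep] * w, y)
print(keep[np.flatnonzero(ada.coef_)])            # final selected covariates
```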

  7. Interannual Tropical Rainfall Variability in General Circulation Model Simulations Associated with the Atmospheric Model Intercomparison Project.

    Science.gov (United States)

    Sperber, K. R.; Palmer, T. N.

    1996-11-01

    The interannual variability of rainfall over the Indian subcontinent, the African Sahel, and the Nordeste region of Brazil has been evaluated in 32 models for the period 1979-88 as part of the Atmospheric Model Intercomparison Project (AMIP). The interannual variations of Nordeste rainfall are the most readily captured, owing to the intimate link with Pacific and Atlantic sea surface temperatures. The precipitation variations over India and the Sahel are less well simulated. Additionally, an Indian monsoon wind shear index was calculated for each model. Evaluation of the interannual variability of a wind shear index over the summer monsoon region indicates that the models exhibit greater fidelity in capturing the large-scale dynamic fluctuations than the regional-scale rainfall variations. A rainfall/SST teleconnection quality control was used to objectively stratify model performance. Skill scores improved for those models that qualitatively simulated the observed rainfall/El Niño-Southern Oscillation SST correlation pattern. This subset of models also had a rainfall climatology that was in better agreement with observations, indicating a link between systematic model error and the ability to simulate interannual variations. A suite of six European Centre for Medium-Range Weather Forecasts (ECMWF) AMIP runs (differing only in their initial conditions) have also been examined. As observed, all-India rainfall was enhanced in 1988 relative to 1987 in each of these realizations. All-India rainfall variability during other years showed little or no predictability, possibly due to internal chaotic dynamics associated with intraseasonal monsoon fluctuations and/or unpredictable land surface process interactions. The interannual variations of Nordeste rainfall were best represented. The State University of New York at Albany/National Center for Atmospheric Research Genesis model was run in five initial condition realizations. In this model, the Nordeste rainfall

  8. Exploratory Long-Range Models to Estimate Summer Climate Variability over Southern Africa.

    Science.gov (United States)

    Jury, Mark R.; Mulenga, Henry M.; Mason, Simon J.

    1999-07-01

    Teleconnection predictors are explored using multivariate regression models in an effort to estimate southern African summer rainfall and climate impacts one season in advance. The preliminary statistical formulations include many variables influenced by the El Niño-Southern Oscillation (ENSO) such as tropical sea surface temperatures (SST) in the Indian and Atlantic Oceans. Atmospheric circulation responses to ENSO include the alternation of tropical zonal winds over Africa and changes in convective activity within oceanic monsoon troughs. Numerous hemispheric-scale datasets are employed to extract predictors and include global indexes (Southern Oscillation index and quasi-biennial oscillation), SST principal component scores for the global oceans, indexes of tropical convection (outgoing longwave radiation), air pressure, and surface and upper winds over the Indian and Atlantic Oceans. Climatic targets include subseasonal, area-averaged rainfall over South Africa and the Zambezi river basin, and South Africa's annual maize yield. Predictors and targets overlap in the years 1971-93, the defined training period. Each target time series is fitted by an optimum group of predictors from the preceding spring, in a linear multivariate formulation. To limit artificial skill, predictors are restricted to three, providing 17 degrees of freedom. Models with collinear predictors are screened out, and persistence of the target time series is considered. The late summer rainfall models achieve a mean r2 fit of 72%, contributed largely through ENSO modulation. Early summer rainfall cross validation correlations are lower (61%). A conceptual understanding of the climate dynamics and ocean-atmosphere coupling processes inherent in the exploratory models is outlined. Seasonal outlooks based on the exploratory models could help mitigate the impacts of southern Africa's fluctuating climate. It is believed that an advance warning of drought risk and seasonal rainfall prospects will

  9. Environmental versus demographic variability in stochastic predator–prey models

    International Nuclear Information System (INIS)

    Dobramysl, U; Täuber, U C

    2013-01-01

    In contrast to the neutral population cycles of the deterministic mean-field Lotka–Volterra rate equations, including spatial structure and stochastic noise in models for predator–prey interactions yields complex spatio-temporal structures associated with long-lived erratic population oscillations. Environmental variability in the form of quenched spatial randomness in the predation rates results in more localized activity patches. Our previous study showed that population fluctuations in rare favorable regions in turn cause a remarkable increase in the asymptotic densities of both predators and prey. Very intriguing features are found when variable interaction rates are affixed to individual particles rather than lattice sites. Stochastic dynamics with demographic variability in conjunction with inheritable predation efficiencies generate non-trivial time evolution for the predation rate distributions, yet with overall essentially neutral optimization. (paper)

  10. Assessing geotechnical centrifuge modelling in addressing variably saturated flow in soil and fractured rock.

    Science.gov (United States)

    Jones, Brendon R; Brouwers, Luke B; Van Tonder, Warren D; Dippenaar, Matthys A

    2017-05-01

    The vadose zone typically comprises soil underlain by fractured rock. Often, surface water and groundwater parameters are readily available, but variably saturated flow through soil and rock are oversimplified or estimated as input for hydrological models. In this paper, a series of geotechnical centrifuge experiments are conducted to contribute to the knowledge gaps in: (i) variably saturated flow and dispersion in soil and (ii) variably saturated flow in discrete vertical and horizontal fractures. Findings from the research show that the hydraulic gradient, and not the hydraulic conductivity, is scaled for seepage flow in the geotechnical centrifuge. Furthermore, geotechnical centrifuge modelling has been proven as a viable experimental tool for the modelling of hydrodynamic dispersion as well as the replication of similar flow mechanisms for unsaturated fracture flow, as previously observed in literature. Despite the imminent challenges of modelling variable saturation in the vadose zone, the geotechnical centrifuge offers a powerful experimental tool to physically model and observe variably saturated flow. This can be used to give valuable insight into mechanisms associated with solid-fluid interaction problems under these conditions. Findings from future research can be used to validate current numerical modelling techniques and address the subsequent influence on aquifer recharge and vulnerability, contaminant transport, waste disposal, dam construction, slope stability and seepage into subsurface excavations.

  11. Importance analysis for models with correlated variables and its sparse grid solution

    International Nuclear Information System (INIS)

    Li, Luyi; Lu, Zhenzhou

    2013-01-01

    For structural models involving correlated input variables, a novel interpretation for variance-based importance measures is proposed based on the contribution of the correlated input variables to the variance of the model output. After the novel interpretation of the variance-based importance measures is compared with the existing ones, two solutions of the variance-based importance measures of the correlated input variables are built on the sparse grid numerical integration (SGI): double-loop nested sparse grid integration (DSGI) method and single loop sparse grid integration (SSGI) method. The DSGI method solves the importance measure by decreasing the dimensionality of the input variables procedurally, while SSGI method performs importance analysis through extending the dimensionality of the inputs. Both of them can make full use of the advantages of the SGI, and are well tailored for different situations. By analyzing the results of several numerical and engineering examples, it is found that the novel proposed interpretation about the importance measures of the correlated input variables is reasonable, and the proposed methods for solving importance measures are efficient and accurate. -- Highlights: •The contribution of correlated variables to the variance of the output is analyzed. •A novel interpretation for variance-based indices of correlated variables is proposed. •Two solutions for variance-based importance measures of correlated variables are built

  12. Multiple Imputation of Predictor Variables Using Generalized Additive Models

    NARCIS (Netherlands)

    de Jong, Roel; van Buuren, Stef; Spiess, Martin

    2016-01-01

    The sensitivity of multiple imputation methods to deviations from their distributional assumptions is investigated using simulations, where the parameters of scientific interest are the coefficients of a linear regression model, and values in predictor variables are missing at random. The

  13. An agent-based model of cellular dynamics and circadian variability in human endotoxemia.

    Directory of Open Access Journals (Sweden)

    Tung T Nguyen

    As cellular variability and circadian rhythmicity play critical roles in immune and inflammatory responses, we present in this study an agent-based model of human endotoxemia to examine the interplay between circadian controls, cellular variability and stochastic dynamics of inflammatory cytokines. The model is qualitatively validated by its ability to reproduce circadian dynamics of inflammatory mediators and critical inflammatory responses after endotoxin administration in vivo. Novel computational concepts are proposed to characterize the cellular variability and synchronization of inflammatory cytokines in a population of heterogeneous leukocytes. Our results suggest that there is a decrease in cell-to-cell variability of inflammatory cytokines while their synchronization is increased after endotoxin challenge. Model parameters that are responsible for IκB production stimulated by NFκB activation and for the production of anti-inflammatory cytokines have large impacts on system behaviors. Additionally, examining time-dependent systemic responses revealed that the system is least vulnerable to endotoxin in the early morning and most vulnerable around midnight. Although much remains to be explored, proposed computational concepts and the model we have pioneered will provide important insights for future investigations and extensions, especially for single-cell studies to discover how cellular variability contributes to clinical implications.

  14. A new approach for modelling variability in residential construction projects

    Directory of Open Access Journals (Sweden)

    Mehrdad Arashpour

    2013-06-01

    The construction industry is plagued by long cycle times caused by variability in the supply chain. Variations or undesirable situations are the result of factors such as non-standard practices, work site accidents, inclement weather conditions and faults in design. This paper uses a new approach for modelling variability in construction by linking relative variability indicators to processes. Mass homebuilding sector was chosen as the scope of the analysis because data is readily available. Numerous simulation experiments were designed by varying size of capacity buffers in front of trade contractors, availability of trade contractors, and level of variability in homebuilding processes. The measurements were shown to lead to an accurate determination of relationships between these factors and production parameters. The variability indicator was found to dramatically affect the tangible performance measures such as home completion rates. This study provides for future analysis of the production homebuilding sector, which may lead to improvements in performance and a faster product delivery to homebuyers.

  16. Modelling Inter-relationships among water, governance, human development variables in developing countries with Bayesian networks.

    Science.gov (United States)

    Dondeynaz, C.; Lopez-Puga, J.; Carmona-Moreno, C.

    2012-04-01

    Improving Water and Sanitation Services (WSS), being a complex and interdisciplinary issue, requires collaboration and coordination across different sectors (environment, health, economic activities, governance, and international cooperation). This inter-dependency has been recognised with the adoption of the "Integrated Water Resources Management" principles, which push for the integration of the various dimensions involved in WSS delivery to ensure efficient and sustainable management. Understanding these interrelations is crucial for decision makers in the water sector, in particular in developing countries, where WSS still represent important leverage for livelihood improvement. In this framework, the Joint Research Centre of the European Commission has developed a coherent database (WatSan4Dev database) containing 29 indicators of environmental, socio-economic, governance and financial aid flows data, focusing on developing countries (Celine et al, 2011, under publication). The aim of this work is to model the WatSan4Dev dataset using probabilistic models to identify the key variables influencing, or being influenced by, water supply and sanitation access levels. Bayesian Network Models are suitable for mapping the conditional dependencies between variables and also allow ordering variables by their level of influence on the dependent variable. Separate models have been built for water supply and for sanitation because of their different behaviour. The models are validated against statistical criteria as well as against scientific knowledge and the literature. A two-step approach has been adopted to build the structure of the model; a Bayesian network is first built for each thematic cluster of variables (e.g., governance, agricultural pressure, or human development), keeping a detailed level for later interpretation. A global model is then built based on the significant indicators of each previously modelled cluster. The structure of the
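
    A toy sketch of the approach with the pgmpy library: a discrete Bayesian network linking two indicators to water access, fitted by maximum likelihood and queried by variable elimination. The structure, variables and states are invented for illustration.

```python
# Tiny discrete Bayesian network for water-access indicators (invented data).
import pandas as pd
from pgmpy.models import BayesianNetwork
from pgmpy.estimators import MaximumLikelihoodEstimator
from pgmpy.inference import VariableElimination

data = pd.DataFrame({
    'governance':   ['low', 'high', 'high', 'low', 'high', 'low'] * 50,
    'gdp':          ['low', 'high', 'low', 'low', 'high', 'high'] * 50,
    'water_access': ['low', 'high', 'high', 'low', 'high', 'low'] * 50,
})

model = BayesianNetwork([('governance', 'water_access'), ('gdp', 'water_access')])
model.fit(data, estimator=MaximumLikelihoodEstimator)

infer = VariableElimination(model)
print(infer.query(['water_access'], evidence={'governance': 'high'}))
```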

  17. Application of a user-friendly comprehensive circulatory model for estimation of hemodynamic and ventricular variables

    NARCIS (Netherlands)

    Ferrari, G.; Kozarski, M.; Gu, Y. J.; De Lazzari, C.; Di Molfetta, A.; Palko, K. J.; Zielinski, K.; Gorczynska, K.; Darowski, M.; Rakhorst, G.

    2008-01-01

    Purpose: Application of a comprehensive, user-friendly, digital computer circulatory model to estimate hemodynamic and ventricular variables. Methods: The closed-loop lumped parameter circulatory model represents the circulation at the level of large vessels. A variable elastance model reproduces

  18. 4. Valorizations of Theoretical Models of Giftedness and Talent in Defining of Artistic Talent

    OpenAIRE

    Anghel Ionica Ona

    2016-01-01

    Artistic talent has been defined in various contexts and registers a variety of meanings, more or less operational. From the perspective of pedagogical intervention, it is imperative understanding artistic talent trough the theoretical models of giftedness and talent. So, the aim of the study is to realize a review of the most popular of the theoretical models of giftedness and talent, with identification of the place of artistic talent and the new meanings that artistic talent has in each on...

  19. Optimization of artificial neural network models through genetic algorithms for surface ozone concentration forecasting.

    Science.gov (United States)

    Pires, J C M; Gonçalves, B; Azevedo, F G; Carneiro, A P; Rego, N; Assembleia, A J B; Lima, J F B; Silva, P A; Alves, C; Martins, F G

    2012-09-01

    This study proposes three methodologies to define artificial neural network models through genetic algorithms (GAs) to predict the next-day hourly average surface ozone (O(3)) concentrations. GAs were applied to define the activation function in the hidden layer and the number of hidden neurons. Two of the methodologies define threshold models, which assume that the behaviour of the dependent variable (O(3) concentrations) changes when it enters a different regime (two and four regimes were considered in this study). The change from one regime to another depends on a specific value (threshold value) of an explanatory variable (threshold variable), which is also defined by GAs. The predictor variables were the hourly average concentrations of carbon monoxide (CO), nitrogen oxide, nitrogen dioxide (NO(2)), and O(3) (recorded in the previous day at an urban site with traffic influence) and also meteorological data (hourly averages of temperature, solar radiation, relative humidity and wind speed). The study was performed for the period from May to August 2004. Several models were achieved and only the best model of each methodology was analysed. In threshold models, the variables selected by GAs to define the O(3) regimes were temperature, CO and NO(2) concentrations, due to their importance in O(3) chemistry in an urban atmosphere. In the prediction of O(3) concentrations, the threshold model that considers two regimes was the one that fitted the data most efficiently.
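
    A compact, deliberately minimal sketch of the first methodology: a toy genetic algorithm that evolves the hidden-layer size and activation function of a neural network, scoring each genome by cross-validated fit (synthetic data; not the paper's GA configuration).

```python
# Toy GA selecting (hidden neurons, activation) for an MLP (synthetic data).
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(10)
X = rng.normal(size=(300, 6))                     # CO, NO, NO2, O3, temp, ... proxies
y = X[:, 3] + 0.5 * X[:, 4] ** 2 + rng.normal(0.0, 0.2, 300)

ACTS = ['identity', 'logistic', 'tanh', 'relu']

def fitness(genome):
    n_hidden, act = genome
    net = MLPRegressor(hidden_layer_sizes=(n_hidden,), activation=ACTS[act],
                       max_iter=2000, random_state=0)
    return cross_val_score(net, X, y, cv=3, scoring='r2').mean()

pop = [(int(rng.integers(2, 20)), int(rng.integers(4))) for _ in range(8)]
for _ in range(5):                                # generations
    pop.sort(key=fitness, reverse=True)
    parents = pop[:4]                             # selection
    children = [(max(2, p[0] + int(rng.integers(-3, 4))), int(rng.integers(4)))
                for p in parents]                 # mutation
    pop = parents + children
print(max(pop, key=fitness))                      # best (n_hidden, activation_index)
```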

  20. Coupled variable selection for regression modeling of complex treatment patterns in a clinical cancer registry.

    Science.gov (United States)

    Schmidtmann, I; Elsäßer, A; Weinmann, A; Binder, H

    2014-12-30

    For determining a manageable set of covariates potentially influential with respect to a time-to-event endpoint, Cox proportional hazards models can be combined with variable selection techniques, such as stepwise forward selection or backward elimination based on p-values, or regularized regression techniques such as component-wise boosting. Cox regression models have also been adapted for dealing with more complex event patterns, for example, for competing risks settings with separate, cause-specific hazard models for each event type, or for determining the prognostic effect pattern of a variable over different landmark times, with one conditional survival model for each landmark. Motivated by a clinical cancer registry application, where complex event patterns have to be dealt with and variable selection is needed at the same time, we propose a general approach for linking variable selection between several Cox models. Specifically, we combine score statistics for each covariate across models by Fisher's method as a basis for variable selection. This principle is implemented for a stepwise forward selection approach as well as for a regularized regression technique. In an application to data from hepatocellular carcinoma patients, the coupled stepwise approach is seen to facilitate joint interpretation of the different cause-specific Cox models. In conditional survival models at landmark times, which address updates of prediction as time progresses and both treatment and other potential explanatory variables may change, the coupled regularized regression approach identifies potentially important, stably selected covariates together with their effect time pattern, despite having only a small number of events. These results highlight the promise of the proposed approach for coupling variable selection between Cox models, which is particularly relevant for modeling for clinical cancer registries with their complex event patterns. Copyright © 2014 John Wiley & Sons
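
    The coupling mechanism itself is compact: per-covariate evidence from the separate Cox models is combined with Fisher's method. A hedged sketch with invented p-values (in the paper, score statistics play this role):

```python
# Fisher's method combining per-covariate p-values across Cox models.
from scipy.stats import combine_pvalues

# rows: covariates; columns: p-values from three cause-specific models
pvals = {
    'age':        [0.01, 0.20, 0.03],
    'tumor_size': [0.04, 0.02, 0.01],
    'bilirubin':  [0.50, 0.60, 0.45],
}
for cov, p in pvals.items():
    stat, p_comb = combine_pvalues(p, method='fisher')  # -2*sum(log p) ~ chi2(2k)
    print(f'{cov}: combined p = {p_comb:.4f}')          # basis for joint selection
```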

  1. Remote sensing of the Canadian Arctic: Modelling biophysical variables

    Science.gov (United States)

    Liu, Nanfeng

    It is anticipated that Arctic vegetation will respond in a variety of ways to altered temperature and precipitation patterns expected with climate change, including changes in phenology, productivity, biomass, cover and net ecosystem exchange. Remote sensing provides data and data processing methodologies for monitoring and assessing Arctic vegetation over large areas. The goal of this research was to explore the potential of hyperspectral and high spatial resolution multispectral remote sensing data for modelling two important Arctic biophysical variables: Percent Vegetation Cover (PVC) and the fraction of Absorbed Photosynthetically Active Radiation (fAPAR). A series of field experiments were conducted to collect PVC and fAPAR at three Canadian Arctic sites: (1) Sabine Peninsula, Melville Island, NU; (2) Cape Bounty Arctic Watershed Observatory (CBAWO), Melville Island, NU; and (3) Apex River Watershed (ARW), Baffin Island, NU. Linear relationships between biophysical variables and Vegetation Indices (VIs) were examined at different spatial scales using field spectra (for the Sabine Peninsula site) and high spatial resolution satellite data (for the CBAWO and ARW sites). At the Sabine Peninsula site, hyperspectral VIs exhibited a better performance for modelling PVC than multispectral VIs due to their capacity for sampling fine spectral features. The optimal hyperspectral bands were located at important spectral features observed in Arctic vegetation spectra, including leaf pigment absorption in the red wavelengths and at the red-edge, leaf water absorption in the near infrared, and leaf cellulose and lignin absorption in the shortwave infrared. At the CBAWO and ARW sites, field PVC and fAPAR exhibited strong correlations (R2 > 0.70) with the NDVI (Normalized Difference Vegetation Index) derived from high-resolution WorldView-2 data. Similarly, high spatial resolution satellite-derived fAPAR was correlated to MODIS fAPAR (R2 = 0.68), with a systematic

  2. Error-in-variables models in calibration

    Science.gov (United States)

    Lira, I.; Grientschnig, D.

    2017-12-01

    In many calibration operations, the stimuli applied to the measuring system or instrument under test are derived from measurement standards whose values may be considered to be perfectly known. In that case, it is assumed that calibration uncertainty arises solely from inexact measurement of the responses, from imperfect control of the calibration process and from the possible inaccuracy of the calibration model. However, the premise that the stimuli are completely known is never strictly fulfilled and in some instances it may be grossly inadequate. Then, error-in-variables (EIV) regression models have to be employed. In metrology, these models have been approached mostly from the frequentist perspective. In contrast, not much guidance is available on their Bayesian analysis. In this paper, we first present a brief summary of the conventional statistical techniques that have been developed to deal with EIV models in calibration. We then proceed to discuss the alternative Bayesian framework under some simplifying assumptions. Through a detailed example about the calibration of an instrument for measuring flow rates, we provide advice on how the user of the calibration function should employ the latter framework for inferring the stimulus acting on the calibrated device when, in use, a certain response is measured.
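
    For a flavour of the frequentist EIV techniques the paper summarizes, here is a Deming-regression sketch, assuming a known ratio `delta` of the two error variances. It is a minimal illustration on simulated data, not the paper's Bayesian treatment.

```python
import numpy as np

def deming_fit(x, y, delta=1.0):
    """Deming (errors-in-variables) straight-line fit.

    delta is the assumed ratio var(error in y) / var(error in x);
    delta -> infinity recovers ordinary least squares on y.
    """
    x, y = np.asarray(x, float), np.asarray(y, float)
    sxx = np.var(x, ddof=1)
    syy = np.var(y, ddof=1)
    sxy = np.cov(x, y, ddof=1)[0, 1]
    slope = (syy - delta * sxx
             + np.sqrt((syy - delta * sxx) ** 2 + 4 * delta * sxy ** 2)
             ) / (2 * sxy)
    intercept = np.mean(y) - slope * np.mean(x)
    return intercept, slope

# Hypothetical calibration: the stimuli (standards) are themselves noisy.
rng = np.random.default_rng(1)
x_true = np.linspace(0, 10, 25)
x_obs = x_true + rng.normal(0, 0.3, x_true.size)   # imperfect standards
y_obs = 2.0 + 0.5 * x_true + rng.normal(0, 0.3, x_true.size)
print(deming_fit(x_obs, y_obs, delta=1.0))         # approx (2.0, 0.5)
```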

  3. Internal and External Validation of a multivariable Model to Define Hospital-Acquired Pneumonia After Esophagectomy

    NARCIS (Netherlands)

    Weijs, Teus J; Seesing, Maarten F J; van Rossum, Peter S N; Koëter, Marijn; van der Sluis, Pieter C; Luyer, Misha D P; Ruurda, Jelle P; Nieuwenhuijzen, Grard A P; van Hillegersberg, Richard

    BACKGROUND: Pneumonia is an important complication following esophagectomy; however, a wide range of pneumonia incidence is reported. The lack of one generally accepted definition prevents valid inter-study comparisons. We aimed to simplify and validate an existing scoring model to define pneumonia

  4. Non-linear frequency response of non-isothermal adsorption controlled by micropore diffusion with variable diffusivity

    Directory of Open Access Journals (Sweden)

    MENKA PETKOVSKA

    2000-12-01

    The concept of higher order frequency response functions (FRFs) is used for the analysis of non-linear adsorption kinetics on a particle scale, for the case of non-isothermal micropore diffusion with variable diffusivity. Six series of FRFs are defined for the general non-isothermal case. A non-linear mathematical model is postulated and the first and second order FRFs derived and simulated. A variable diffusivity significantly influences the shapes of the second order FRFs relating the sorbate concentration in the solid phase and the gas pressure, but they still keep their characteristics, which can be used to discriminate this mechanism from other kinetic mechanisms. It is also shown that the first and second order particle FRFs offer sufficient information for an easy and fast estimation of all model parameters, including those defining the system non-linearity.

  5. Oscillating shells: A model for a variable cosmic object

    OpenAIRE

    Nunez, Dario

    1997-01-01

    A model for a possible variable cosmic object is presented. The model consists of a massive shell surrounding a compact object. The gravitational and self-gravitational forces tend to collapse the shell, but the internal tangential stresses oppose the collapse. The combined action of the two types of forces is studied and several cases are presented. In particular, we investigate the spherically symmetric case in which the shell oscillates radially around a central compact object.

  6. Variable impact on mortality of AIDS-defining events diagnosed during combination antiretroviral therapy

    DEFF Research Database (Denmark)

    Mocroft, Amanda; Sterne, Jonathan A C; Egger, Matthias

    2009-01-01

    BACKGROUND: The extent to which mortality differs following individual acquired immunodeficiency syndrome (AIDS)-defining events (ADEs) has not been assessed among patients initiating combination antiretroviral therapy. METHODS: We analyzed data from 31,620 patients with no prior ADEs who started...... studies, and patient management....

  7. Quantifying measurement uncertainty and spatial variability in the context of model evaluation

    Science.gov (United States)

    Choukulkar, A.; Brewer, A.; Pichugina, Y. L.; Bonin, T.; Banta, R. M.; Sandberg, S.; Weickmann, A. M.; Djalalova, I.; McCaffrey, K.; Bianco, L.; Wilczak, J. M.; Newman, J. F.; Draxl, C.; Lundquist, J. K.; Wharton, S.; Olson, J.; Kenyon, J.; Marquis, M.

    2017-12-01

    In an effort to improve wind forecasts for the wind energy sector, the Department of Energy and the NOAA funded the second Wind Forecast Improvement Project (WFIP2). As part of the WFIP2 field campaign, a large suite of in-situ and remote sensing instrumentation was deployed to the Columbia River Gorge in Oregon and Washington from October 2015 - March 2017. The array of instrumentation deployed included 915-MHz wind-profiling radars, sodars, wind-profiling lidars, and scanning lidars. The role of these instruments was to provide wind measurements at high spatial and temporal resolution for model evaluation and improvement of model physics. To properly determine model errors, the uncertainties in instrument-model comparisons need to be quantified accurately. These uncertainties arise from several factors such as measurement uncertainty, spatial variability, and interpolation of model output to instrument locations, to name a few. In this presentation, we will introduce a formalism to quantify measurement uncertainty and spatial variability. The accuracy of this formalism will be tested using existing datasets such as the eXperimental Planetary boundary layer Instrumentation Assessment (XPIA) campaign. Finally, the uncertainties in wind measurement and the spatial variability estimates from the WFIP2 field campaign will be discussed to understand the challenges involved in model evaluation.

  8. Degree of multicollinearity and variables involved in linear dependence in additive-dominant models

    Directory of Open Access Journals (Sweden)

    Juliana Petrini

    2012-12-01

    The objective of this work was to assess the degree of multicollinearity and to identify the variables involved in linear dependence relations in additive-dominant models. Data of birth weight (n=141,567), yearling weight (n=58,124), and scrotal circumference (n=20,371) of Montana Tropical composite cattle were used. Diagnosis of multicollinearity was based on the variance inflation factor (VIF) and on the evaluation of the condition indexes and eigenvalues from the correlation matrix among explanatory variables. The first model studied (RM) included the fixed effect of dam age class at calving and the covariates associated to the direct and maternal additive and non-additive effects. The second model (R) included all the effects of the RM model except the maternal additive effects. Multicollinearity was detected in both models for all traits considered, with VIF values of 1.03 - 70.20 for RM and 1.03 - 60.70 for R. Collinearity increased with the increase of variables in the model and the decrease in the number of observations, and it was classified as weak, with condition index values between 10.00 and 26.77. In general, the variables associated with additive and non-additive effects were involved in multicollinearity, partially due to the natural connection between these covariables as fractions of the biological types in breed composition.
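
    The two diagnostics used here are easy to reproduce: VIFs are the diagonal of the inverse correlation matrix of standardized covariates, and condition indexes follow from its eigenvalues. A minimal sketch on simulated covariates (hypothetical, not the cattle data):

```python
import numpy as np

def collinearity_diagnostics(X):
    """VIFs and condition indexes from the correlation matrix of X."""
    R = np.corrcoef(X, rowvar=False)
    vif = np.diag(np.linalg.inv(R))        # VIF_j = 1 / (1 - R2_j)
    eigvals = np.linalg.eigvalsh(R)[::-1]  # eigenvalues, descending
    cond_index = np.sqrt(eigvals[0] / eigvals)
    return vif, cond_index

# Hypothetical design with two nearly collinear covariates:
rng = np.random.default_rng(0)
x1 = rng.normal(size=500)
x2 = x1 + rng.normal(scale=0.1, size=500)  # strongly correlated with x1
x3 = rng.normal(size=500)
vif, ci = collinearity_diagnostics(np.column_stack([x1, x2, x3]))
print(vif)  # VIFs well above 10 flag x1 and x2
print(ci)   # condition indexes of 10-30 indicate weak collinearity
```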

  9. Development and evaluation of a stochastic daily rainfall model with long-term variability

    Science.gov (United States)

    Kamal Chowdhury, A. F. M.; Lockart, Natalie; Willgoose, Garry; Kuczera, George; Kiem, Anthony S.; Parana Manage, Nadeeka

    2017-12-01

    The primary objective of this study is to develop a stochastic rainfall generation model that can match not only the short resolution (daily) variability but also the longer resolution (monthly to multiyear) variability of observed rainfall. This study has developed a Markov chain (MC) model, which uses a two-state MC process with two parameters (wet-to-wet and dry-to-dry transition probabilities) to simulate rainfall occurrence and a gamma distribution with two parameters (mean and standard deviation of wet day rainfall) to simulate wet day rainfall depths. Starting with the traditional MC-gamma model with deterministic parameters, this study has developed and assessed four other variants of the MC-gamma model with different parameterisations. The key finding is that if the parameters of the gamma distribution are randomly sampled each year from fitted distributions rather than fixed parameters with time, the variability of rainfall depths at both short and longer temporal resolutions can be preserved, while the variability of wet periods (i.e. number of wet days and mean length of wet spell) can be preserved by decadally varied MC parameters. This is a straightforward enhancement to the traditional simplest MC model and is both objective and parsimonious.
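
    The baseline MC-gamma generator described here can be written directly from the four parameters named in the abstract. A minimal sketch with hypothetical parameter values; the paper's best variant would, in addition, resample the gamma parameters each year and vary the transition probabilities by decade.

```python
import numpy as np

rng = np.random.default_rng(42)

def simulate_daily_rainfall(n_days, p_ww, p_dd, mean_wet, sd_wet):
    """Two-state Markov chain occurrence + gamma depths on wet days.

    p_ww: P(wet | yesterday wet); p_dd: P(dry | yesterday dry).
    The gamma distribution is parameterized from the wet-day mean
    and standard deviation: shape = (mean/sd)^2, scale = sd^2/mean.
    """
    shape = (mean_wet / sd_wet) ** 2
    scale = sd_wet ** 2 / mean_wet
    rain = np.zeros(n_days)
    wet = False
    for t in range(n_days):
        wet = rng.random() < (p_ww if wet else 1.0 - p_dd)
        if wet:
            rain[t] = rng.gamma(shape, scale)
    return rain

series = simulate_daily_rainfall(3650, p_ww=0.55, p_dd=0.85,
                                 mean_wet=8.0, sd_wet=10.0)
print(series.mean(), (series > 0).mean())  # mean depth, wet-day fraction
```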

  10. Geochemical Modeling Of F Area Seepage Basin Composition And Variability

    International Nuclear Information System (INIS)

    Millings, M.; Denham, M.; Looney, B.

    2012-01-01

    From the 1950s through 1989, the F Area Seepage Basins at the Savannah River Site (SRS) received low level radioactive wastes resulting from processing nuclear materials. Discharges of process wastes to the F Area Seepage Basins followed by subsequent mixing processes within the basins and eventual infiltration into the subsurface resulted in contamination of the underlying vadose zone and downgradient groundwater. For simulating contaminant behavior and subsurface transport, a quantitative understanding of the interrelated discharge-mixing-infiltration system along with the resulting chemistry of fluids entering the subsurface is needed. An example of this need emerged as the F Area Seepage Basins was selected as a key case study demonstration site for the Advanced Simulation Capability for Environmental Management (ASCEM) Program. This modeling evaluation explored the importance of the wide variability in bulk wastewater chemistry as it propagated through the basins. The results are intended to generally improve and refine the conceptualization of infiltration of chemical wastes from seepage basins receiving variable waste streams and to specifically support the ASCEM case study model for the F Area Seepage Basins. Specific goals of this work included: (1) develop a technically-based 'charge-balanced' nominal source term chemistry for water infiltrating into the subsurface during basin operations, (2) estimate the nature of short term and long term variability in infiltrating water to support scenario development for uncertainty quantification (i.e., UQ analysis), (3) identify key geochemical factors that control overall basin water chemistry and the projected variability/stability, and (4) link wastewater chemistry to the subsurface based on monitoring well data. Results from this study provide data and understanding that can be used in further modeling efforts of the F Area groundwater plume. As identified in this study, key geochemical factors affecting basin

  11. Local-scale models reveal ecological niche variability in amphibian and reptile communities from two contrasting biogeographic regions

    Directory of Open Access Journals (Sweden)

    Alberto Muñoz

    2016-10-01

    Ecological Niche Models (ENMs) are widely used to describe how environmental factors influence species distribution. Modelling at a local scale, compared to a large scale within a high environmental gradient, can improve our understanding of ecological species niches. The main goal of this study is to assess and compare the contribution of environmental variables to amphibian and reptile ENMs in two Spanish national parks located in contrasting biogeographic regions, i.e., the Mediterranean and the Atlantic area. The ENMs were built with maximum entropy modelling using 11 environmental variables in each territory. The contributions of these variables to the models were analysed and classified using various statistical procedures (Mann–Whitney U tests, Principal Components Analysis and General Linear Models). Distance to the hydrological network was consistently the most relevant variable for both parks and taxonomic classes. Topographic variables (i.e., slope and altitude) were the second most predictive variables, followed by climatic variables. Differences in variable contribution were observed between parks and taxonomic classes. Variables related to water availability had the larger contribution to the models in the Mediterranean park, while topography variables were decisive in the Atlantic park. Specific response curves to environmental variables were in accordance with the biogeographic affinity of species (Mediterranean and non-Mediterranean species) and taxonomy (amphibians and reptiles). Interestingly, these results were observed for species located in both parks, particularly those situated at their range limits. Our findings show that ecological niche models built at local scale reveal differences in habitat preferences within a wide environmental gradient. Therefore, modelling at local scales rather than assuming large-scale models could be preferable for the establishment of conservation strategies for herptile species in natural

  12. Local-scale models reveal ecological niche variability in amphibian and reptile communities from two contrasting biogeographic regions

    Science.gov (United States)

    Santos, Xavier; Felicísimo, Ángel M.

    2016-01-01

    Ecological Niche Models (ENMs) are widely used to describe how environmental factors influence species distribution. Modelling at a local scale, compared to a large scale within a high environmental gradient, can improve our understanding of ecological species niches. The main goal of this study is to assess and compare the contribution of environmental variables to amphibian and reptile ENMs in two Spanish national parks located in contrasting biogeographic regions, i.e., the Mediterranean and the Atlantic area. The ENMs were built with maximum entropy modelling using 11 environmental variables in each territory. The contributions of these variables to the models were analysed and classified using various statistical procedures (Mann–Whitney U tests, Principal Components Analysis and General Linear Models). Distance to the hydrological network was consistently the most relevant variable for both parks and taxonomic classes. Topographic variables (i.e., slope and altitude) were the second most predictive variables, followed by climatic variables. Differences in variable contribution were observed between parks and taxonomic classes. Variables related to water availability had the larger contribution to the models in the Mediterranean park, while topography variables were decisive in the Atlantic park. Specific response curves to environmental variables were in accordance with the biogeographic affinity of species (Mediterranean and non-Mediterranean species) and taxonomy (amphibians and reptiles). Interestingly, these results were observed for species located in both parks, particularly those situated at their range limits. Our findings show that ecological niche models built at local scale reveal differences in habitat preferences within a wide environmental gradient. Therefore, modelling at local scales rather than assuming large-scale models could be preferable for the establishment of conservation strategies for herptile species in natural parks. PMID

  13. Quantifying intrinsic and extrinsic variability in stochastic gene expression models.

    Science.gov (United States)

    Singh, Abhyudai; Soltani, Mohammad

    2013-01-01

    Genetically identical cell populations exhibit considerable intercellular variation in the level of a given protein or mRNA. Both intrinsic and extrinsic sources of noise drive this variability in gene expression. More specifically, extrinsic noise is the expression variability that arises from cell-to-cell differences in cell-specific factors such as enzyme levels, cell size and cell cycle stage. In contrast, intrinsic noise is the expression variability that is not accounted for by extrinsic noise, and typically arises from the inherent stochastic nature of biochemical processes. Two-color reporter experiments are employed to decompose expression variability into its intrinsic and extrinsic noise components. Analytical formulas for intrinsic and extrinsic noise are derived for a class of stochastic gene expression models, where variations in cell-specific factors cause fluctuations in model parameters, in particular, transcription and/or translation rate fluctuations. Assuming mRNA production occurs in random bursts, transcription rate is represented by either the burst frequency (how often the bursts occur) or the burst size (number of mRNAs produced in each burst). Our analysis shows that fluctuations in the transcription burst frequency enhance extrinsic noise but do not affect the intrinsic noise. On the contrary, fluctuations in the transcription burst size or mRNA translation rate dramatically increase both intrinsic and extrinsic noise components. Interestingly, simultaneous fluctuations in transcription and translation rates arising from randomness in ATP abundance can decrease intrinsic noise measured in a two-color reporter assay. Finally, we discuss how these formulas can be combined with single-cell gene expression data from two-color reporter experiments for estimating model parameters.
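
    The standard two-color decomposition (the Elowitz-style estimators) is compact enough to sketch directly; the simulated reporter data below are hypothetical, built so that a shared factor creates extrinsic noise and independent bursts create intrinsic noise.

```python
import numpy as np

def noise_decomposition(r, g):
    """Decompose expression noise from two-color reporter data.

    r, g: levels of two identical reporters measured in each cell.
    Extrinsic noise^2 = normalized covariance between reporters;
    intrinsic noise^2 = normalized mean squared reporter difference.
    """
    r, g = np.asarray(r, float), np.asarray(g, float)
    mr, mg = r.mean(), g.mean()
    eta_ext2 = (np.mean(r * g) - mr * mg) / (mr * mg)
    eta_int2 = np.mean((r - g) ** 2) / (2 * mr * mg)
    return eta_int2, eta_ext2

# Hypothetical cells: a shared (extrinsic) factor scales both reporters,
# plus independent (intrinsic) bursty fluctuations in each reporter.
rng = np.random.default_rng(3)
extrinsic = rng.lognormal(0.0, 0.3, 10000)   # e.g. cell size, enzymes
r = extrinsic * rng.gamma(5.0, 20.0, 10000)
g = extrinsic * rng.gamma(5.0, 20.0, 10000)
print(noise_decomposition(r, g))
```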

  14. Stochastic modeling of the Fermi/LAT γ-ray blazar variability

    Energy Technology Data Exchange (ETDEWEB)

    Sobolewska, M. A.; Siemiginowska, A. [Harvard-Smithsonian Center for Astrophysics, 60 Garden Street, Cambridge, MA 02138 (United States); Kelly, B. C. [Department of Physics, Broida Hall, University of California, Santa Barbara, CA 93107 (United States); Nalewajko, K., E-mail: malgosia@camk.edu.pl [JILA, University of Colorado and National Institute of Standards and Technology, 440 UCB, Boulder, CO 80309 (United States)

    2014-05-10

    We study the γ-ray variability of 13 blazars observed with the Fermi/Large Area Telescope (LAT). These blazars have the most complete light curves collected during the first four years of the Fermi sky survey. We model them with the Ornstein-Uhlenbeck (OU) process or a mixture of the OU processes. The OU process has power spectral density (PSD) proportional to 1/f^α with α changing at a characteristic timescale, τ₀, from 0 (τ >> τ₀) to 2 (τ << τ₀). The PSD of the mixed OU process has two characteristic timescales and an additional intermediate region with 0 < α < 2. We show that the OU model provides a good description of the Fermi/LAT light curves of three blazars in our sample. For the first time, we constrain a characteristic γ-ray timescale of variability in two BL Lac sources, 3C 66A and PKS 2155-304 (τ₀ ≅ 25 days and τ₀ ≅ 43 days, respectively, in the observer's frame), which are longer than the soft X-ray timescales detected in blazars and Seyfert galaxies. We find that the mixed OU process approximates the light curves of the remaining 10 blazars better than the OU process. We derive limits on their long and short characteristic timescales, and infer that their Fermi/LAT PSDs resemble power-law functions. We constrain the PSD slopes for all but one source in the sample. We find hints for sub-hour Fermi/LAT variability in four flat spectrum radio quasars. We discuss the implications of our results for theoretical models of blazar variability.
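
    The OU building block can be simulated exactly, which makes the PSD break behaviour easy to verify numerically. A minimal sketch with hypothetical parameters (a 25-day timescale echoing the 3C 66A result); a "mixed OU" light curve would be a weighted sum of two such processes with different τ₀.

```python
import numpy as np

def simulate_ou(n, dt, tau, mu=0.0, sigma=1.0, rng=None):
    """Exact sampling of the OU process dx = -(x - mu)/tau dt + sigma dW.

    Its PSD is flat at timescales >> tau (alpha = 0) and falls as
    1/f^2 at timescales << tau (alpha = 2), with the break at tau.
    """
    rng = rng or np.random.default_rng()
    a = np.exp(-dt / tau)
    s_stat = sigma * np.sqrt(tau / 2.0)   # stationary standard deviation
    x = np.empty(n)
    x[0] = mu + s_stat * rng.normal()
    for t in range(1, n):
        x[t] = mu + a * (x[t - 1] - mu) \
               + s_stat * np.sqrt(1 - a ** 2) * rng.normal()
    return x

# Hypothetical four-year light curve in log-flux, sampled daily:
lc = simulate_ou(n=1460, dt=1.0, tau=25.0, mu=-6.0, sigma=0.1)
print(lc.mean(), lc.std())
```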

  15. A MODEL FOR (QUASI-)PERIODIC MULTIWAVELENGTH PHOTOMETRIC VARIABILITY IN YOUNG STELLAR OBJECTS

    Energy Technology Data Exchange (ETDEWEB)

    Kesseli, Aurora Y. [Boston University, 725 Commonwealth Ave, Boston, MA 02215 (United States); Petkova, Maya A.; Wood, Kenneth; Gregory, Scott G. [SUPA, School of Physics and Astronomy, University of St Andrews, North Haugh, St Andrews, Fife, KY16 9AD (United Kingdom); Whitney, Barbara A. [Department of Astronomy, University of Wisconsin-Madison, 475 N. Charter St, Madison, WI 53706 (United States); Hillenbrand, L. A. [Astronomy Department, California Institute of Technology, Pasadena, CA 91125 (United States); Stauffer, J. R.; Morales-Calderon, M.; Rebull, L. [Spitzer Science Center, California Institute of Technology, CA 91125 (United States); Alencar, S. H. P., E-mail: aurorak@bu.com [Departamento de Física—ICEx—UFMG, Av. Antônio Carlos, 6627, 30270-901, Belo Horizonte, MG (Brazil)

    2016-09-01

    We present radiation transfer models of rotating young stellar objects (YSOs) with hot spots in their atmospheres, inner disk warps, and other three-dimensional effects in the nearby circumstellar environment. Our models are based on the geometry expected from magneto-accretion theory, where material moving inward in the disk flows along magnetic field lines to the star and creates stellar hot spots upon impact. Due to rotation of the star and magnetosphere, the disk is variably illuminated. We compare our model light curves to data from the Spitzer YSOVAR project to determine if these processes can explain the variability observed at optical and mid-infrared wavelengths in young stars. We focus on those variables exhibiting “dipper” behavior that may be periodic, quasi-periodic, or aperiodic. We find that the stellar hot-spot size and temperature affects the optical and near-infrared light curves, while the shape and vertical extent of the inner disk warp affects the mid-IR light curve variations. Clumpy disk distributions with non-uniform fractal density structure produce more stochastic light curves. We conclude that magneto-accretion theory is consistent with certain aspects of the multiwavelength photometric variability exhibited by low-mass YSOs. More detailed modeling of individual sources can be used to better determine the stellar hot-spot and inner disk geometries of particular sources.

  16. Micro-macro multilevel latent class models with multiple discrete individual-level variables

    NARCIS (Netherlands)

    Bennink, M.; Croon, M.A.; Kroon, B.; Vermunt, J.K.

    2016-01-01

    An existing micro-macro method for a single individual-level variable is extended to the multivariate situation by presenting two multilevel latent class models in which multiple discrete individual-level variables are used to explain a group-level outcome. As in the univariate case, the

  17. Theoretical investigations of the new Cokriging method for variable-fidelity surrogate modeling

    DEFF Research Database (Denmark)

    Zimmermann, Ralf; Bertram, Anna

    2018-01-01

    Cokriging is a variable-fidelity surrogate modeling technique which emulates a target process based on the spatial correlation of sampled data of different levels of fidelity. In this work, we address two theoretical questions associated with the so-called new Cokriging method for variable fidelity...

  18. Loss given default models incorporating macroeconomic variables for credit cards

    OpenAIRE

    Crook, J.; Bellotti, T.

    2012-01-01

    Based on UK data for major retail credit cards, we build several models of Loss Given Default based on account level data, including Tobit, a decision tree model, and Beta and fractional logit transformations. We find that Ordinary Least Squares models with macroeconomic variables perform best for forecasting Loss Given Default at the account and portfolio levels on independent hold-out data sets. The inclusion of macroeconomic conditions in the model is important, since it provides a means to m...

  19. Quantifying variability in earthquake rupture models using multidimensional scaling: application to the 2011 Tohoku earthquake

    KAUST Repository

    Razafindrakoto, Hoby

    2015-04-22

    Finite-fault earthquake source inversion is an ill-posed inverse problem leading to non-unique solutions. In addition, various fault parametrizations and input data may have been used by different researchers for the same earthquake. Such variability leads to large intra-event variability in the inferred rupture models. One way to understand this problem is to develop robust metrics to quantify model variability. We propose a Multi Dimensional Scaling (MDS) approach to compare rupture models quantitatively. We consider normalized squared and grey-scale metrics that reflect the variability in the location, intensity and geometry of the source parameters. We test the approach on two-dimensional random fields generated using a von Kármán autocorrelation function and varying its spectral parameters. The spread of points in the MDS solution indicates different levels of model variability. We observe that the normalized squared metric is insensitive to variability of spectral parameters, whereas the grey-scale metric is sensitive to small-scale changes in geometry. From this benchmark, we formulate a similarity scale to rank the rupture models. As case studies, we examine inverted models from the Source Inversion Validation (SIV) exercise and published models of the 2011 Mw 9.0 Tohoku earthquake, allowing us to test our approach for a case with a known reference model and one with an unknown true solution. The normalized squared and grey-scale metrics are respectively sensitive to the overall intensity and the extension of the three classes of slip (very large, large, and low). Additionally, we observe that a three-dimensional MDS configuration is preferable for models with large variability. We also find that the models for the Tohoku earthquake derived from tsunami data and their corresponding predictions cluster with a systematic deviation from other models. We demonstrate the stability of the MDS point-cloud using a number of realizations and jackknife tests, for
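
    The essence of the approach, pairwise model distances embedded by multidimensional scaling, can be sketched with scikit-learn. The random "slip models" and the exact form of the normalized squared metric below are hypothetical stand-ins for the paper's metrics.

```python
import numpy as np
from sklearn.manifold import MDS

# Hypothetical slip models: each one a 2-D grid of slip values.
rng = np.random.default_rng(7)
models = [rng.gamma(2.0, 1.0, size=(20, 30)) for _ in range(8)]

def norm_sq_distance(a, b):
    """Mean squared difference between unit-normalized slip images."""
    a = a / np.linalg.norm(a)
    b = b / np.linalg.norm(b)
    return np.mean((a - b) ** 2)

n = len(models)
D = np.zeros((n, n))
for i in range(n):
    for j in range(i + 1, n):
        D[i, j] = D[j, i] = norm_sq_distance(models[i], models[j])

# Embed the models in 2-D; clusters reveal families of similar ruptures.
mds = MDS(n_components=2, dissimilarity="precomputed", random_state=0)
coords = mds.fit_transform(D)
print(coords)
```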

  20. Quantifying variability in earthquake rupture models using multidimensional scaling: application to the 2011 Tohoku earthquake

    KAUST Repository

    Razafindrakoto, Hoby; Mai, Paul Martin; Genton, Marc G.; Zhang, Ling; Thingbaijam, Kiran Kumar

    2015-01-01

    Finite-fault earthquake source inversion is an ill-posed inverse problem leading to non-unique solutions. In addition, various fault parametrizations and input data may have been used by different researchers for the same earthquake. Such variability leads to large intra-event variability in the inferred rupture models. One way to understand this problem is to develop robust metrics to quantify model variability. We propose a Multi Dimensional Scaling (MDS) approach to compare rupture models quantitatively. We consider normalized squared and grey-scale metrics that reflect the variability in the location, intensity and geometry of the source parameters. We test the approach on two-dimensional random fields generated using a von Kármán autocorrelation function and varying its spectral parameters. The spread of points in the MDS solution indicates different levels of model variability. We observe that the normalized squared metric is insensitive to variability of spectral parameters, whereas the grey-scale metric is sensitive to small-scale changes in geometry. From this benchmark, we formulate a similarity scale to rank the rupture models. As case studies, we examine inverted models from the Source Inversion Validation (SIV) exercise and published models of the 2011 Mw 9.0 Tohoku earthquake, allowing us to test our approach for a case with a known reference model and one with an unknown true solution. The normalized squared and grey-scale metrics are respectively sensitive to the overall intensity and the extension of the three classes of slip (very large, large, and low). Additionally, we observe that a three-dimensional MDS configuration is preferable for models with large variability. We also find that the models for the Tohoku earthquake derived from tsunami data and their corresponding predictions cluster with a systematic deviation from other models. We demonstrate the stability of the MDS point-cloud using a number of realizations and jackknife tests, for

  1. Generalized Density-Corrected Model for Gas Diffusivity in Variably Saturated Soils

    DEFF Research Database (Denmark)

    Chamindu, Deepagoda; Møldrup, Per; Schjønning, Per

    2011-01-01

    models. The GDC model was further extended to describe two-region (bimodal) soils and could describe and predict Dp/Do well for both different soil aggregate size fractions and variably compacted volcanic ash soils. A possible use of the new GDC model is engineering applications such as the design...... of highly compacted landfill site caps....

  2. (Super Variable Costing-Throughput Costing)

    OpenAIRE

    Çakıcı, Cemal

    2006-01-01

    (Super Variable Costing-Throughput Costing) The aim of this study is to explain the super-variable costing method, which is a new subject in cost and management accounting, and to show how it works in practice. In short, super-variable costing can be defined as a costing method that uses only direct material costs in calculating product costs and treats all other costs (direct labor and overhead) as period costs or operating costs. By using the super-variable costing method, product costs ar...

  3. Effects of environmental variables on invasive amphibian activity: Using model selection on quantiles for counts

    Science.gov (United States)

    Muller, Benjamin J.; Cade, Brian S.; Schwarzkoph, Lin

    2018-01-01

    Many different factors influence animal activity. Often, the value of an environmental variable may influence significantly the upper or lower tails of the activity distribution. For describing relationships with heterogeneous boundaries, quantile regressions predict a quantile of the conditional distribution of the dependent variable. A quantile count model extends linear quantile regression methods to discrete response variables, and is useful if activity is quantified by trapping, where there may be many tied (equal) values in the activity distribution, over a small range of discrete values. Additionally, different environmental variables in combination may have synergistic or antagonistic effects on activity, so examining their effects together, in a modeling framework, is a useful approach. Thus, model selection on quantile counts can be used to determine the relative importance of different variables in determining activity, across the entire distribution of capture results. We conducted model selection on quantile count models to describe the factors affecting activity (numbers of captures) of cane toads (Rhinella marina) in response to several environmental variables (humidity, temperature, rainfall, wind speed, and moon luminosity) over eleven months of trapping. Environmental effects on activity are understudied in this pest animal. In the dry season, model selection on quantile count models suggested that rainfall positively affected activity, especially near the lower tails of the activity distribution. In the wet season, wind speed limited activity near the maximum of the distribution, while minimum activity increased with minimum temperature. This statistical methodology allowed us to explore, in depth, how environmental factors influenced activity across the entire distribution, and is applicable to any survey or trapping regime, in which environmental variables affect activity.
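
    A quantile count regression in the spirit described here can be sketched via the common jittering device (adding uniform noise so the discrete counts become continuous) followed by ordinary linear quantile regression from statsmodels. All data and coefficients below are hypothetical.

```python
import numpy as np
import statsmodels.api as sm

# Hypothetical nightly trap counts driven by rainfall and temperature.
rng = np.random.default_rng(11)
n = 300
rain = rng.exponential(5.0, n)
tmin = rng.normal(22.0, 3.0, n)
counts = rng.poisson(np.exp(0.5 + 0.05 * rain + 0.04 * (tmin - 22)))

# Jittering: add U(0,1) noise to the counts, then fit standard
# linear quantile regressions at several quantiles.
z = counts + rng.uniform(0.0, 1.0, n)
X = sm.add_constant(np.column_stack([rain, tmin]))
for q in (0.1, 0.5, 0.9):
    res = sm.QuantReg(z, X).fit(q=q)
    print(q, np.round(res.params, 3))
# In practice the jittering is repeated and estimates averaged, and
# model selection is then carried out separately at each quantile.
```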

  4. Integrating Ecosystem Carbon Dynamics into State-and-Transition Simulation Models of Land Use/Land Cover Change

    Science.gov (United States)

    Sleeter, B. M.; Daniel, C.; Frid, L.; Fortin, M. J.

    2016-12-01

    State-and-transition simulation models (STSMs) provide a general approach for incorporating uncertainty into forecasts of landscape change. Using a Monte Carlo approach, STSMs generate spatially-explicit projections of the state of a landscape based upon probabilistic transitions defined between states. While STSMs are based on the basic principles of Markov chains, they have additional properties that make them applicable to a wide range of questions and types of landscapes. A current limitation of STSMs is that they are only able to track the fate of discrete state variables, such as land use/land cover (LULC) classes. There are some landscape modelling questions, however, for which continuous state variables - for example carbon biomass - are also required. Here we present a new approach for integrating continuous state variables into spatially-explicit STSMs. Specifically we allow any number of continuous state variables to be defined for each spatial cell in our simulations; the value of each continuous variable is then simulated forward in discrete time as a stochastic process based upon defined rates of change between variables. These rates can be defined as a function of the realized states and transitions of each cell in the STSM, thus providing a connection between the continuous variables and the dynamics of the landscape. We demonstrate this new approach by (1) developing a simple IPCC Tier 3 compliant model of ecosystem carbon biomass, where the continuous state variables are defined as terrestrial carbon biomass pools and the rates of change as carbon fluxes between pools, and (2) integrating this carbon model with an existing LULC change model for the state of Hawaii, USA.
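
    A toy version of the proposed coupling, a discrete Markov state per cell plus a continuous carbon stock updated by fluxes tied to the realized transitions, might look like the sketch below. The states, rates, and single-pool structure are hypothetical and far simpler than an IPCC Tier 3 model.

```python
import numpy as np

rng = np.random.default_rng(5)

# Discrete states: 0 = forest, 1 = agriculture; annual transitions.
P = np.array([[0.99, 0.01],
              [0.02, 0.98]])

n_cells, n_years = 1000, 100
state = np.zeros(n_cells, dtype=int)
carbon = np.full(n_cells, 50.0)   # continuous state variable, tC/ha

for year in range(n_years):
    new_state = np.array([rng.choice(2, p=P[s]) for s in state])
    # Carbon flux tied to the realized transition: clearing releases
    # most of the stock; remaining forest grows toward a capacity.
    cleared = (state == 0) & (new_state == 1)
    carbon[cleared] *= 0.1
    growing = new_state == 0
    carbon[growing] += 2.0 * (1 - carbon[growing] / 120.0)
    state = new_state

print(state.mean(), carbon.mean())  # landscape composition, mean stock
```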

  5. Changes in atmospheric variability in a glacial climate and the impacts on proxy data: a model intercomparison

    Directory of Open Access Journals (Sweden)

    F. S. R. Pausata

    2009-09-01

    Using four different climate models, we investigate sea level pressure variability in the extratropical North Atlantic in the preindustrial climate (1750 AD) and at the Last Glacial Maximum (LGM, 21 kyrs before present) in order to understand how changes in atmospheric circulation can affect signals recorded in climate proxies.

    In general, the models exhibit a significant reduction in interannual variance of sea level pressure at the LGM compared to pre-industrial simulations and this reduction is concentrated in winter. For the preindustrial climate, all models feature a similar leading mode of sea level pressure variability that resembles the leading mode of variability in the instrumental record: the North Atlantic Oscillation (NAO). In contrast, the leading mode of sea level pressure variability at the LGM is model dependent, but in each model different from that in the preindustrial climate. In each model, the leading (NAO-like) mode of variability explains a smaller fraction of the variance and also less absolute variance at the LGM than in the preindustrial climate.

    The models show that the relationship between atmospheric variability and surface climate (temperature and precipitation) variability changes in different climates. Results are model-specific, but indicate that proxy signals at the LGM may be misinterpreted if changes in the spatial pattern and seasonality of surface climate variability are not taken into account.

  6. Modelling and control of variable speed wind turbines for power system studies

    DEFF Research Database (Denmark)

    Michalke, Gabriele; Hansen, Anca Daniela

    2010-01-01

    Modern wind turbines are predominantly variable speed wind turbines with power electronic interface. Emphasis in this paper is therefore on the modelling and control issues of these wind turbine concepts and especially on their impact on the power system. The models and control are developed ... and implemented in the power system simulation tool DIgSILENT. Important issues like the fault ride-through and grid support capabilities of these wind turbine concepts are addressed. The paper reveals that advanced control of variable speed wind turbines can improve power system stability. Finally, it will be shown in the paper that wind parks consisting of variable speed wind turbines can help nearby connected fixed speed wind turbines to ride-through grid faults. Copyright © 2009 John Wiley & Sons, Ltd.

  7. Developing Baltic cod recruitment models II : Incorporation of environmental variability and species interaction

    DEFF Research Database (Denmark)

    Köster, Fritz; Hinrichsen, H.H.; St. John, Michael

    2001-01-01

    We investigate whether a process-oriented approach based on the results of field, laboratory, and modelling studies can be used to develop a stock-environment-recruitment model for Central Baltic cod (Gadus morhua). Based on exploratory statistical analysis, significant variables influencing survival of early life stages and varying systematically among spawning sites were incorporated into stock-recruitment models, first for major cod spawning sites and then combined for the entire Central Baltic. Variables identified included potential egg production by the spawning stock, abiotic conditions ... cod in these areas, suggesting that key biotic and abiotic processes can be successfully incorporated into recruitment models.

  8. Variable slip wind generator modeling for real-time simulation

    Energy Technology Data Exchange (ETDEWEB)

    Gagnon, R.; Brochu, J.; Turmel, G. [Hydro-Quebec, Varennes, PQ (Canada). IREQ

    2006-07-01

    A model of a wind turbine using a variable slip wound-rotor induction machine was presented. The model was created as part of a library of generic wind generator models intended for wind integration studies. The stator winding of the wind generator was connected directly to the grid and the rotor was driven by the turbine through a drive train. The variable rotor resistance was synthesized by an external resistor in parallel with a diode rectifier. A forced-commutated power electronic device (IGBT) was connected to the wound rotor by slip rings and brushes. Simulations were conducted in a Matlab/Simulink environment using SimPowerSystems blocks to model power systems elements and Simulink blocks to model the turbine, control system and drive train. Detailed descriptions of the turbine, the drive train and the control system were provided. The model's implementation in the simulator was also described. A case study demonstrating the real-time simulation of a wind generator connected at the distribution level of a power system was presented. Results of the case study were then compared with results obtained from the SimPowerSystems off-line simulation. Results showed good agreement between the waveforms, demonstrating the conformity of the real-time and the off-line simulations. The capability of Hypersim for real-time simulation of wind turbines with power electronic converters in a distribution network was demonstrated. It was concluded that hardware-in-the-loop (HIL) simulation of wind turbine controllers for wind integration studies in power systems is now feasible. 5 refs., 1 tab., 6 figs.

  9. A QSAR Study of Environmental Estrogens Based on a Novel Variable Selection Method

    Directory of Open Access Journals (Sweden)

    Aiqian Zhang

    2012-05-01

    A large number of descriptors were employed to characterize the molecular structure of 53 natural, synthetic, and environmental chemicals which are suspected of disrupting endocrine functions by mimicking or antagonizing natural hormones and may thus pose a serious threat to the health of humans and wildlife. In this work, a robust quantitative structure-activity relationship (QSAR) model with a novel variable selection method has been proposed for the effective estrogens. The variable selection method is based on variable interaction (VSMVI) with leave-multiple-out cross validation (LMOCV) to select the best subset. During variable selection, model construction and assessment, the Organization for Economic Co-operation and Development (OECD) principles for the regulation of QSAR acceptability were fully considered, such as using an unambiguous multiple-linear regression (MLR) algorithm to build the model, using several validation methods to assess the performance of the model, defining the applicability domain, and analyzing the outliers with the results of molecular docking. The performance of the QSAR model indicates that the VSMVI is an effective, feasible and practical tool for rapid screening of the best subset from large sets of molecular descriptors.

  10. Estimating structural equation models with non-normal variables by using transformations

    NARCIS (Netherlands)

    Montfort, van K.; Mooijaart, A.; Meijerink, F.

    2009-01-01

    We discuss structural equation models for non-normal variables. In this situation the maximum likelihood and the generalized least-squares estimates of the model parameters can give incorrect estimates of the standard errors and the associated goodness-of-fit chi-squared statistics. If the sample

  11. Explicit estimating equations for semiparametric generalized linear latent variable models

    KAUST Repository

    Ma, Yanyuan

    2010-07-05

    We study generalized linear latent variable models without requiring a distributional assumption of the latent variables. Using a geometric approach, we derive consistent semiparametric estimators. We demonstrate that these models have a property which is similar to that of a sufficient complete statistic, which enables us to simplify the estimating procedure and explicitly to formulate the semiparametric estimating equations. We further show that the explicit estimators have the usual root n consistency and asymptotic normality. We explain the computational implementation of our method and illustrate the numerical performance of the estimators in finite sample situations via extensive simulation studies. The advantage of our estimators over the existing likelihood approach is also shown via numerical comparison. We employ the method to analyse a real data example from economics. © 2010 Royal Statistical Society.

  12. Modeling and fabrication of an RF MEMS variable capacitor with a fractal geometry

    KAUST Repository

    Elshurafa, Amro M.

    2013-08-16

    In this paper, we model, fabricate, and measure an electrostatically actuated MEMS variable capacitor that utilizes a fractal geometry and serpentine-like suspension arms. Explicitly, a variable capacitor that possesses a top suspended plate with a specific fractal geometry and also possesses a bottom fixed plate complementary in shape to the top plate has been fabricated in the PolyMUMPS process. An important benefit that was achieved from using the fractal geometry in designing the MEMS variable capacitor is increasing the tuning range of the variable capacitor beyond the typical ratio of 1.5. The modeling was carried out using the commercially available finite element software COMSOL to predict both the tuning range and pull-in voltage. Measurement results show that the tuning range is 2.5 at a maximum actuation voltage of 10V.
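
    The "typical ratio of 1.5" mentioned here comes from the pull-in instability of a simple parallel-plate varactor, which the fractal design is meant to overcome: the movable plate snaps down once it has travelled a third of the gap. A back-of-the-envelope sketch with hypothetical plate dimensions (not the fabricated PolyMUMPS device):

```python
import numpy as np

eps0 = 8.854e-12           # vacuum permittivity, F/m
A = (200e-6) ** 2          # hypothetical 200 um x 200 um plate area
g0 = 2e-6                  # hypothetical 2 um initial gap
k = 1.0                    # hypothetical suspension stiffness, N/m

# Pull-in occurs when the plate has moved g0/3, so capacitance can only
# grow from eps0*A/g0 to eps0*A/(2*g0/3), a factor of exactly 1.5.
V_pull_in = np.sqrt(8 * k * g0 ** 3 / (27 * eps0 * A))
C_min = eps0 * A / g0
C_max = eps0 * A / (g0 * 2 / 3)
print(f"pull-in ~ {V_pull_in:.2f} V, "
      f"tuning ratio = {C_max / C_min:.2f}")
```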

  13. Can climate variability information constrain a hydrological model for an ungauged Costa Rican catchment?

    Science.gov (United States)

    Quesada-Montano, Beatriz; Westerberg, Ida K.; Fuentes-Andino, Diana; Hidalgo-Leon, Hugo; Halldin, Sven

    2017-04-01

    Long-term hydrological data are key to understanding catchment behaviour and for decision making within water management and planning. Given the lack of observed data in many regions worldwide, hydrological models are an alternative for reproducing historical streamflow series. Additional types of information - to locally observed discharge - can be used to constrain model parameter uncertainty for ungauged catchments. Climate variability exerts a strong influence on streamflow variability on long and short time scales, in particular in the Central-American region. We therefore explored the use of climate variability knowledge to constrain the simulated discharge uncertainty of a conceptual hydrological model applied to a Costa Rican catchment, assumed to be ungauged. To reduce model uncertainty we first rejected parameter relationships that disagreed with our understanding of the system. We then assessed how well climate-based constraints applied at long-term, inter-annual and intra-annual time scales could constrain model uncertainty. Finally, we compared the climate-based constraints to a constraint on low-flow statistics based on information obtained from global maps. We evaluated our method in terms of the ability of the model to reproduce the observed hydrograph and the active catchment processes in terms of two efficiency measures, a statistical consistency measure, a spread measure and 17 hydrological signatures. We found that climate variability knowledge was useful for reducing model uncertainty, in particular, unrealistic representation of deep groundwater processes. The constraints based on global maps of low-flow statistics provided more constraining information than those based on climate variability, but the latter rejected slow rainfall-runoff representations that the low flow statistics did not reject. The use of such knowledge, together with information on low-flow statistics and constraints on parameter relationships showed to be useful to

  14. Analysis and modeling of wafer-level process variability in 28 nm FD-SOI using split C-V measurements

    Science.gov (United States)

    Pradeep, Krishna; Poiroux, Thierry; Scheer, Patrick; Juge, André; Gouget, Gilles; Ghibaudo, Gérard

    2018-07-01

    This work details the analysis of wafer level global process variability in 28 nm FD-SOI using split C-V measurements. The proposed approach initially evaluates the native on wafer process variability using efficient extraction methods on split C-V measurements. The on-wafer threshold voltage (VT) variability is first studied and modeled using a simple analytical model. Then, a statistical model based on the Leti-UTSOI compact model is proposed to describe the total C-V variability in different bias conditions. This statistical model is finally used to study the contribution of each process parameter to the total C-V variability.

  15. Using genetic algorithms to calibrate the user-defined parameters of IIST model for SBLOCA analysis

    International Nuclear Information System (INIS)

    Tsai, Chiung-Wen; Shih, Chunkuan; Wang, Jong-Rong

    2014-01-01

    Highlights: • The genetic algorithm is proposed to search for the user-defined parameters of important correlations. • The TRACE IIST model was employed as a case study to demonstrate the capability of GAs. • The multi-objective optimization strategy was incorporated to evaluate multiple objective functions simultaneously. - Abstract: The thermal–hydraulic system codes, i.e., TRACE, have been designed to predict, investigate, and simulate nuclear reactor transients and accidents. Implementing relevant correlations, these codes are able to represent important phenomena such as two-phase flow, critical flow, and countercurrent flow. Furthermore, the thermal–hydraulic system codes permit users to modify the coefficients corresponding to the correlations, providing a certain degree of freedom to calibrate the numerical results, i.e., peak cladding temperature. These coefficients are known as user-defined parameters (UDPs). Practically, defining a series of UDPs is complex and relies heavily on expert opinion and engineering experience. This study proposes an alternative approach, genetic algorithms (GAs), which provide rigorous procedures and mitigate human judgment and mistakes, to calibrate the UDPs of important correlations for a 2% small break loss of coolant accident (SBLOCA). The TRACE IIST model was employed as a case study to demonstrate the capability of GAs. The UDPs were evolved by GAs to reduce the deviations between TRACE results and IIST experimental data
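
    The calibration loop can be illustrated with a minimal real-coded GA that fits two stand-in "UDP" coefficients of a toy exponential model to synthetic data. Everything here (the model, the bounds, the GA settings) is hypothetical, not the TRACE/IIST setup.

```python
import numpy as np

rng = np.random.default_rng(9)

def fitness(udp, t, data):
    """Negative squared deviation between a toy model and 'experiment'."""
    a, b = udp                       # two hypothetical UDP coefficients
    pred = a * np.exp(-b * t)        # stand-in for a code run
    return -np.sum((pred - data) ** 2)

t = np.linspace(0, 10, 50)
data = 2.0 * np.exp(-0.3 * t) + rng.normal(0, 0.05, t.size)

# Minimal real-coded GA: tournament selection, averaging crossover,
# Gaussian mutation, with individuals clipped to the parameter bounds.
pop = rng.uniform([0.1, 0.01], [5.0, 1.0], size=(40, 2))
for gen in range(100):
    f = np.array([fitness(p, t, data) for p in pop])
    idx = rng.integers(0, 40, size=(40, 2))
    winners = np.where(f[idx[:, 0]] > f[idx[:, 1]], idx[:, 0], idx[:, 1])
    parents = pop[winners]
    children = 0.5 * (parents + parents[rng.permutation(40)])
    children += rng.normal(0, 0.05, children.shape)
    pop = np.clip(children, [0.1, 0.01], [5.0, 1.0])

best = pop[np.argmax([fitness(p, t, data) for p in pop])]
print(best)   # should approach (2.0, 0.3)
```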

  16. Correlation Analysis of Water Demand and Predictive Variables for Short-Term Forecasting Models

    Directory of Open Access Journals (Sweden)

    B. M. Brentan

    2017-01-01

    Operational and economic aspects of water distribution make water demand forecasting paramount for water distribution systems (WDSs) management. However, water demand introduces high levels of uncertainty in WDS hydraulic models. As a result, there is growing interest in developing accurate methodologies for water demand forecasting. Several mathematical models can serve this purpose. One crucial aspect is the use of suitable predictive variables. The most commonly used predictive variables involve weather and social aspects. To improve knowledge of the interrelation between water demand and various predictive variables, this study applies three algorithms, namely, classical Principal Component Analysis (PCA) and powerful machine learning algorithms such as Self-Organizing Maps (SOMs) and Random Forest (RF). We show that these latter algorithms help corroborate the results found by PCA, while they are able to unveil features hidden from PCA, owing to their ability to cope with nonlinearities. This paper presents a correlation study of three district metered areas (DMAs) from Franca, a Brazilian city, exploring weather and social variables to improve the knowledge of residential demand for water. For the three DMAs, temperature, relative humidity, and hour of the day appear to be the most important predictive variables for building an accurate regression model.
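
    The random-forest step, ranking weather and calendar predictors by importance, can be sketched with scikit-learn on synthetic demand data; all variables and coefficients below are hypothetical.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

# Hypothetical hourly records: weather and calendar predictors vs demand.
rng = np.random.default_rng(2)
n = 2000
temp = rng.normal(25, 5, n)
humidity = rng.uniform(20, 90, n)
hour = rng.integers(0, 24, n)
wind = rng.exponential(2, n)
demand = (0.8 * temp - 0.1 * humidity
          + 5 * np.sin(2 * np.pi * hour / 24) + rng.normal(0, 2, n))

X = np.column_stack([temp, humidity, hour, wind])
rf = RandomForestRegressor(n_estimators=200, random_state=0).fit(X, demand)
for name, imp in zip(["temp", "humidity", "hour", "wind"],
                     rf.feature_importances_):
    print(f"{name:9s} {imp:.3f}")  # temp, hour, humidity should dominate
```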

  17. A latent class distance association model for cross-classified data with a categorical response variable.

    Science.gov (United States)

    Vera, José Fernando; de Rooij, Mark; Heiser, Willem J

    2014-11-01

    In this paper we propose a latent class distance association model for clustering in the predictor space of large contingency tables with a categorical response variable. The rows of such a table are characterized as profiles of a set of explanatory variables, while the columns represent a single outcome variable. In many cases such tables are sparse, with many zero entries, which makes traditional models problematic. By clustering the row profiles into a few specific classes and representing these together with the categories of the response variable in a low-dimensional Euclidean space using a distance association model, a parsimonious prediction model can be obtained. A generalized EM algorithm is proposed to estimate the model parameters and the adjusted Bayesian information criterion statistic is employed to test the number of mixture components and the dimensionality of the representation. An empirical example highlighting the advantages of the new approach and comparing it with traditional approaches is presented. © 2014 The British Psychological Society.

  18. Role of updraft velocity in temporal variability of global cloud hydrometeor number

    Science.gov (United States)

    Sullivan, Sylvia C.; Lee, Dongmin; Oreopoulos, Lazaros; Nenes, Athanasios

    2016-05-01

    Understanding how dynamical and aerosol inputs affect the temporal variability of hydrometeor formation in climate models will help to explain sources of model diversity in cloud forcing, to provide robust comparisons with data, and, ultimately, to reduce the uncertainty in estimates of the aerosol indirect effect. This variability attribution can be done at various spatial and temporal resolutions with metrics derived from online adjoint sensitivities of droplet and crystal number to relevant inputs. Such metrics are defined and calculated from simulations using the NASA Goddard Earth Observing System Model, Version 5 (GEOS-5) and the National Center for Atmospheric Research Community Atmosphere Model Version 5.1 (CAM5.1). Input updraft velocity fluctuations can explain as much as 48% of temporal variability in output ice crystal number and 61% in droplet number in GEOS-5 and up to 89% of temporal variability in output ice crystal number in CAM5.1. In both models, this vertical velocity attribution depends strongly on altitude. Despite its importance for hydrometeor formation, simulated vertical velocity distributions are rarely evaluated against observations due to the sparsity of relevant data. Coordinated effort by the atmospheric community to develop more consistent, observationally based updraft treatments will help to close this knowledge gap.

  19. Expressiveness and definability in circumscription

    Directory of Open Access Journals (Sweden)

    Francicleber Martins Ferreira

    2011-06-01

    We investigate expressiveness and definability issues with respect to minimal models, particularly in the scope of Circumscription. First, we give a proof of the failure of the Löwenheim-Skolem Theorem for Circumscription. Then we show that, if the class of P; Z-minimal models of a first-order sentence is Δ-elementary, then it is elementary. That is, whenever the circumscription of a first-order sentence is equivalent to a first-order theory, then it is equivalent to a finitely axiomatizable one. This means that classes of models of circumscribed theories are either elementary or not Δ-elementary. Finally, using the previous result, we prove that, whenever a relation Pi is defined in the class of P; Z-minimal models of a first-order sentence Φ and whenever such class of P; Z-minimal models is Δ-elementary, then there is an explicit definition ψ for Pi such that the class of P; Z-minimal models of Φ is the class of models of Φ ∧ ψ. In other words, the circumscription of P in Φ with Z varied can be replaced by Φ plus this explicit definition ψ for Pi.

  20. Spatio-temporal Variability of Albedo and its Impact on Glacier Melt Modelling

    Science.gov (United States)

    Kinnard, C.; Mendoza, C.; Abermann, J.; Petlicki, M.; MacDonell, S.; Urrutia, R.

    2017-12-01

    Albedo is an important variable for the surface energy balance of glaciers, yet its representation within distributed glacier mass-balance models is often greatly simplified. Here we study the spatio-temporal evolution of albedo on Glacier Universidad, central Chile (34°S, 70°W), using time-lapse terrestrial photography, and investigate its effect on the shortwave radiation balance and modelled melt rates. A 12 megapixel digital single-lens reflex camera was set up overlooking the glacier and programmed to take three daily images of the glacier during a two-year period (2012-2014). One image was chosen for each day with no cloud shading on the glacier. The RAW images were projected onto a 10 m resolution digital elevation model (DEM), using the IMGRAFT software (Messerli and Grinsted, 2015). A six-parameter camera model was calibrated using a single image and a set of 17 ground control points (GCPs), yielding a georeferencing accuracy of accounting for possible camera movement over time. The reflectance values from the projected image were corrected for topographic and atmospheric influences using a parametric solar irradiation model, following a modified algorithm based on Corripio (2004), and then converted to albedo using reference albedo measurements from an on-glacier automatic weather station (AWS). The image-based albedo was found to compare well with independent albedo observations from a second AWS in the glacier accumulation area. Analysis of the albedo maps showed that the albedo is more spatially variable than the incoming solar radiation, making albedo a more important factor of energy balance spatial variability. The incorporation of albedo maps within an enhanced temperature index melt model revealed that the spatio-temporal variability of albedo is an important factor for the calculation of glacier-wide meltwater fluxes.

  1. Forecasting short-run crude oil price using high- and low-inventory variables

    International Nuclear Information System (INIS)

    Ye, Michael; Zyren, John; Shore, Joanne

    2006-01-01

    Since inventories have a lower bound or a minimum operating level, economic literature suggests a nonlinear relationship between inventory level and commodity prices. This was found to be the case in the short-run crude oil market. In order to explore this inventory-price relationship, two nonlinear inventory variables are defined and derived from the monthly normal level and relative level of OECD crude oil inventories from post 1991 Gulf War to October 2003: one for the low inventory state and another for the high inventory state of the crude oil market. Incorporation of low- and high-inventory variables in a single equation model to forecast short-run WTI crude oil prices enhances the model fit and forecast ability

  2. Separation of variables in anisotropic models and non-skew-symmetric elliptic r-matrix

    Science.gov (United States)

    Skrypnyk, Taras

    2017-05-01

    We solve the problem of separation of variables for the classical integrable Hamiltonian systems possessing Lax matrices satisfying linear Poisson brackets with the non-skew-symmetric, non-dynamical elliptic so(3)⊗so(3)-valued classical r-matrix. Using the corresponding Lax matrices, we present a general form of the "separating functions" B(u) and A(u) that generate the coordinates and the momenta of separation for the associated models. We consider several examples and perform the separation of variables for the classical anisotropic Euler top, the Steklov-Lyapunov model of the motion of an anisotropic rigid body in a liquid, the two-spin generalized Gaudin model, and a "spin" generalization of the Steklov-Lyapunov model.

  3. A New Integrated Weighted Model in SNOW-V10: Verification of Categorical Variables

    Science.gov (United States)

    Huang, Laura X.; Isaac, George A.; Sheng, Grant

    2014-01-01

    This paper presents verification results for nowcasts of seven categorical variables from an integrated weighted model (INTW) and the underlying numerical weather prediction (NWP) models. Nowcasting, or short-range forecasting (0-6 h), over complex terrain with sufficient accuracy is highly desirable but a very challenging task. A weighting, evaluation, bias correction and integration system (WEBIS) for generating nowcasts by integrating NWP forecasts and high-frequency observations was used during the Vancouver 2010 Olympic and Paralympic Winter Games as part of the Science of Nowcasting Olympic Weather for Vancouver 2010 (SNOW-V10) project. Forecast data from the Canadian high-resolution deterministic NWP system with three nested grids (at 15-, 2.5- and 1-km horizontal grid spacing) were selected as background gridded data for generating the integrated nowcasts. The seven forecast variables of temperature, relative humidity, wind speed, wind gust, visibility, ceiling and precipitation rate are treated as categorical variables for verifying the integrated weighted forecasts. Verification of forecasts from INTW and the NWP models at 15 sites shows that the integrated weighted model produces more accurate forecasts for the seven selected variables, regardless of location. This is based on multi-categorical Heidke skill scores for the test period 12 February to 21 March 2010.
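
    The multi-category Heidke skill score used for this verification can be computed from a K x K contingency table of forecast versus observed categories; a minimal sketch of the standard formula follows (the example table is made up):

        import numpy as np

        def heidke_skill_score(table):
            """Heidke skill score from table[i, j] = cases forecast in
            category i and observed in category j."""
            n = table.sum()
            pc = np.trace(table) / n                       # proportion correct
            # Proportion correct expected by chance, from the marginals.
            pe = np.sum((table.sum(axis=1) / n) * (table.sum(axis=0) / n))
            return (pc - pe) / (1.0 - pe)

        # Hypothetical 3-category verification table:
        table = np.array([[50, 10, 5],
                          [8, 40, 12],
                          [3, 9, 30]])
        print(heidke_skill_score(table))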

  4. Hierarchical Bayesian models to assess between- and within-batch variability of pathogen contamination in food.

    Science.gov (United States)

    Commeau, Natalie; Cornu, Marie; Albert, Isabelle; Denis, Jean-Baptiste; Parent, Eric

    2012-03-01

    Assessing within-batch and between-batch variability is of major interest for risk assessors and risk managers in the context of microbiological contamination of food. For example, the ratio between the within-batch variability and the between-batch variability has a large impact on the results of a sampling plan. Here, we designed hierarchical Bayesian models to represent such variability. Compatible priors were built mathematically to obtain sound model comparisons. A numeric criterion is proposed to assess the contamination structure, comparing the ability of the models to replicate grouped data at the batch level using a posterior predictive loss approach. The models were applied to two case studies: contamination by Listeria monocytogenes of pork breast used to produce diced bacon, and contamination by the same microorganism of cold-smoked salmon at the end of the process. In the first case study, a contamination structure clearly exists and is located at the batch level; that is, between-batch variability is relatively strong. In the second, a structure also exists but is less marked. © 2012 Society for Risk Analysis.
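
    A minimal sketch of such a hierarchical model, written in PyMC with normal within- and between-batch components on log-concentrations; the priors, dimensions, and data below are illustrative assumptions, not the authors' specification:

        import numpy as np
        import pymc as pm

        # Hypothetical data: log10 concentrations for 10 batches x 5 samples.
        rng = np.random.default_rng(0)
        batch_idx = np.repeat(np.arange(10), 5)
        y = rng.normal(2.0, 0.5, 10)[batch_idx] + rng.normal(0, 0.3, 50)

        with pm.Model():
            mu = pm.Normal("mu", 0.0, 10.0)                      # overall mean
            sigma_between = pm.HalfNormal("sigma_between", 1.0)  # batch-level SD
            sigma_within = pm.HalfNormal("sigma_within", 1.0)    # sample-level SD
            batch_mean = pm.Normal("batch_mean", mu, sigma_between, shape=10)
            pm.Normal("obs", batch_mean[batch_idx], sigma_within, observed=y)
            idata = pm.sample(1000, tune=1000)

        # The posterior ratio sigma_between / sigma_within summarizes how much
        # of the contamination variability sits at the batch level.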

  5. Variable sound speed in interacting dark energy models

    Science.gov (United States)

    Linton, Mark S.; Pourtsidou, Alkistis; Crittenden, Robert; Maartens, Roy

    2018-04-01

    We consider a self-consistent and physical approach to interacting dark energy models described by a Lagrangian, and identify a new class of models with variable dark energy sound speed. We show that if the interaction between dark energy in the form of quintessence and cold dark matter is purely momentum exchange this generally leads to a dark energy sound speed that deviates from unity. Choosing a specific sub-case, we study its phenomenology by investigating the effects of the interaction on the cosmic microwave background and linear matter power spectrum. We also perform a global fitting of cosmological parameters using CMB data, and compare our findings to ΛCDM.

  6. Global modeling of land water and energy balances. Part III: Interannual variability

    Science.gov (United States)

    Shmakin, A.B.; Milly, P.C.D.; Dunne, K.A.

    2002-01-01

    The Land Dynamics (LaD) model is tested by comparison with observations of interannual variations in discharge from 44 large river basins for which relatively accurate time series of monthly precipitation (a primary model input) have recently been computed. When results are pooled across all basins, the model explains 67% of the interannual variance of annual runoff ratio anomalies (i.e., anomalies of annual discharge volume, normalized by long-term mean precipitation volume). The new estimates of basin precipitation appear to offer an improvement over those from a state-of-the-art analysis of global precipitation (the Climate Prediction Center Merged Analysis of Precipitation, CMAP), judging from comparisons of parallel model runs and of analyses of precipitation-discharge correlations. When the new precipitation estimates are used, the performance of the LaD model is comparable to, but not significantly better than, that of a simple, semiempirical water-balance relation that uses only annual totals of surface net radiation and precipitation. This implies that the LaD simulations of interannual runoff variability do not benefit substantially from information on geographical variability of land parameters or seasonal structure of interannual variability of precipitation. The aforementioned analyses necessitated the development of a method for downscaling of long-term monthly precipitation data to the relatively short timescales necessary for running the model. The method merges the long-term data with a reference dataset of 1-yr duration, having high temporal resolution. The success of the method, for the model and data considered here, was demonstrated in a series of model-model comparisons and in the comparisons of modeled and observed interannual variations of basin discharge.
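
    The downscaling idea, rescaling a high-temporal-resolution reference year so that each month reproduces the long-term monthly total, can be sketched as follows (function and variable names are ours):

        import numpy as np

        def downscale_month(monthly_total, ref_daily):
            """Disaggregate a monthly precipitation total to daily values,
            borrowing the within-month structure of a reference year."""
            ref_sum = ref_daily.sum()
            if ref_sum == 0:
                # Reference month is dry: spread the total uniformly.
                return np.full_like(ref_daily, monthly_total / len(ref_daily))
            return ref_daily * (monthly_total / ref_sum)

        # Example: a 90 mm monthly total shaped by a hypothetical reference month.
        ref = np.array([0, 0, 5, 12, 0, 3, 20, 0, 0, 8] + [0] * 20, dtype=float)
        daily = downscale_month(90.0, ref)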

  7. Modelling Seasonal GWR of Daily PM2.5 with Proper Auxiliary Variables for the Yangtze River Delta

    Directory of Open Access Journals (Sweden)

    Man Jiang

    2017-04-01

    Full Text Available Over the past decades, regional haze episodes have frequently occurred in eastern China, especially in the Yangtze River Delta (YRD). Satellite-derived Aerosol Optical Depth (AOD) has been used to retrieve the spatial coverage of PM2.5 concentrations. To improve the retrieval accuracy of the daily AOD-PM2.5 model, various auxiliary variables like meteorological or geographical factors have been adopted into the Geographically Weighted Regression (GWR) model. However, these variables are often selected arbitrarily, without deep consideration of their potentially varying temporal or spatial contributions to model performance. In this manuscript, we put forward an automatic procedure to select proper auxiliary variables from meteorological and geographical factors and obtain their optimal combinations to construct four seasonal GWR models. We employ two different schemes to comprehensively test the performance of our proposed GWR models: (1) comparison with other regular GWR models by varying the number of auxiliary variables; and (2) comparison with observed ground-level PM2.5 concentrations. The results show that our GWR models of “AOD + 3” with three common meteorological variables generally perform better than all the other GWR models involved. Our models also show powerful prediction capabilities for PM2.5 concentrations with only slight overfitting. The determination coefficients R2 of our seasonal models are 0.8259 in spring, 0.7818 in summer, 0.8407 in autumn, and 0.7689 in winter. Also, the seasonal models in summer and autumn behave better than those in spring and winter. The comparison between seasonal and yearly models further validates the specific seasonal pattern of auxiliary variables of the GWR model in the YRD. We also stress the importance of key variables and propose a selection process in the AOD-PM2.5 model. Our work validates the significance of proper auxiliary variables in modelling the AOD-PM2.5 relationships.
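
    At its core, a GWR fit is a locally weighted least-squares regression at each prediction site; a minimal sketch with a Gaussian distance kernel follows (the kernel choice, bandwidth, and variable names are assumptions, not the authors' configuration):

        import numpy as np

        def local_gwr_coefficients(coords, X, y, site, bandwidth):
            """Local regression coefficients at `site` for a Gaussian-kernel GWR."""
            d = np.linalg.norm(coords - site, axis=1)        # distances to site
            w = np.exp(-0.5 * (d / bandwidth) ** 2)          # Gaussian weights
            Xd = np.column_stack([np.ones(len(X)), X])       # add intercept
            W = np.diag(w)
            beta = np.linalg.solve(Xd.T @ W @ Xd, Xd.T @ W @ y)
            return beta
        # beta[0] is the local intercept; beta[1:] are local slopes, e.g. for
        # AOD and the selected auxiliary meteorological variables (assumed).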

  8. Measuring behaviours for escaping from house fires: use of latent variable models to summarise multiple behaviours.

    Science.gov (United States)

    Ploubidis, G B; Edwards, P; Kendrick, D

    2015-12-15

    This paper reports the development and testing of a construct measuring parental fire safety behaviours for planning escape from a house fire. We used latent variable modelling of data on parent-reported fire safety behaviours and plans for escaping from a house fire, and multivariable logistic regression to quantify the association between the groups defined by the latent variable modelling and parental report of having a plan for escaping from a house fire. Data come from 1112 participants in a cluster randomised controlled trial set in children's centres in 4 study centres in the UK. A two-class model provided the best fit to the data, combining responses to five fire safety planning behaviours. The first group ('more behaviours for escaping from a house fire') comprised 86% of participants, who were most likely to have a torch, be aware of how their smoke alarm sounds, have external door and window keys accessible, and keep exits clear. The second group ('fewer behaviours for escaping from a house fire') comprised 14% of participants, who were less likely to report these five behaviours. After adjusting for potential confounders, participants allocated to the 'more behaviours for escaping from a house fire' group were 2.5 times more likely to report having an escape plan (OR 2.48; 95% CI 1.59-3.86) than those in the 'fewer behaviours for escaping from a house fire' group. Multiple fire safety behaviour questions can be combined into a single binary summary measure of fire safety behaviours for escaping from a house fire. Our findings will be useful to future studies wishing to use a single measure of fire safety planning behaviour as a measure of outcome or exposure. NCT 01452191. Date of registration 13/10/2011.

  9. Modeling Short-Range Soil Variability and its Potential Use in Variable-Rate Treatment of Experimental Plots

    Directory of Open Access Journals (Sweden)

    A Moameni

    2011-02-01

    Full Text Available In Iran, the experimental plots under fertilizer trials are managed in such a way that the whole plot area uniformly receives agricultural inputs. This could lead to biased research results and hence undermine the efforts made by researchers. This research was conducted at a selected site belonging to the Gonbad Agricultural Research Station, located in the semiarid region of northeastern Iran. The aim was to characterize the short-range spatial variability of the inherent and management-dependent soil properties and to determine whether this variation is large and can be managed at practical scales. The soils were sampled using a grid with 55 m spacing. In total, 100 composite soil samples were collected from the topsoil (0-30 cm) and analyzed for calcium carbonate equivalent, organic carbon, clay, available phosphorus, available potassium, iron, copper, zinc and manganese. Descriptive statistics were applied to check data trends. Geostatistical analysis was applied for variography, model fitting and contour mapping. Sampling at 55 m made it possible to split the area of the selected experimental plot into relatively uniform areas that allow application of agricultural inputs at variable rates. Keywords: Short-range soil variability, Within-field soil variability, Interpolation, Precision agriculture, Geostatistics

  10. Modeling variably saturated multispecies reactive groundwater solute transport with MODFLOW-UZF and RT3D

    Science.gov (United States)

    Bailey, Ryan T.; Morway, Eric D.; Niswonger, Richard G.; Gates, Timothy K.

    2013-01-01

    A numerical model was developed that is capable of simulating multispecies reactive solute transport in variably saturated porous media. This model consists of a modified version of the reactive transport model RT3D (Reactive Transport in 3 Dimensions) that is linked to the Unsaturated-Zone Flow (UZF1) package and MODFLOW. Referred to as UZF-RT3D, the model is tested against published analytical benchmarks as well as other published contaminant transport models, including HYDRUS-1D, VS2DT, and SUTRA, and the coupled flow and transport modeling system of CATHY and TRAN3D. Comparisons in one-dimensional, two-dimensional, and three-dimensional variably saturated systems are explored. While several test cases are included to verify the correct implementation of variably saturated transport in UZF-RT3D, other cases are included to demonstrate the usefulness of the code in terms of model run-time and handling the reaction kinetics of multiple interacting species in variably saturated subsurface systems. As UZF1 relies on a kinematic-wave approximation for unsaturated flow that neglects the diffusive terms in Richards equation, UZF-RT3D can be used for large-scale aquifer systems for which the UZF1 formulation is reasonable, that is, capillary-pressure gradients can be neglected and soil parameters can be treated as homogeneous. Decreased model run-time and the ability to include site-specific chemical species and chemical reactions make UZF-RT3D an attractive model for efficient simulation of multispecies reactive transport in variably saturated large-scale subsurface systems.

  11. Simplicity constraints: A 3D toy model for loop quantum gravity

    Science.gov (United States)

    Charles, Christoph

    2018-05-01

    In loop quantum gravity, tremendous progress has been made using the Ashtekar-Barbero variables. These variables, defined in a gauge fixing of the theory, correspond to a parametrization of the solutions of the so-called simplicity constraints. Their geometrical interpretation is however unsatisfactory as they do not constitute a space-time connection. It would be possible to resolve this point by using a full Lorentz connection or, equivalently, by using the self-dual Ashtekar variables. This leads however to simplicity constraints or reality conditions which are notoriously difficult to implement in the quantum theory. We explore in this paper the possibility of using completely degenerate actions to impose such constraints at the quantum level in the context of canonical quantization. To do so, we define a simpler model, in 3D, with similar constraints by extending the phase space to include an independent vielbein. We define the classical model and show that a precise quantum theory by gauge unfixing can be defined out of it, completely equivalent to the standard 3D Euclidean quantum gravity. We discuss possible future explorations around this model as it could help as a stepping stone to define full-fledged covariant loop quantum gravity.

  12. Abstract: Inference and Interval Estimation for Indirect Effects With Latent Variable Models.

    Science.gov (United States)

    Falk, Carl F; Biesanz, Jeremy C

    2011-11-30

    Models specifying indirect effects (or mediation) and structural equation modeling are both popular in the social sciences. Yet relatively little research has compared methods that test for indirect effects among latent variables and provided precise estimates of the effectiveness of different methods. This simulation study provides an extensive comparison of methods for constructing confidence intervals and for making inferences about indirect effects with latent variables. We compared the percentile (PC) bootstrap, bias-corrected (BC) bootstrap, bias-corrected accelerated (BCa) bootstrap, likelihood-based confidence intervals (Neale & Miller, 1997), partial posterior predictive (Biesanz, Falk, and Savalei, 2010), and joint significance tests based on Wald tests or likelihood ratio tests. All models included three reflective latent variables representing the independent, dependent, and mediating variables. The design included the following fully crossed conditions: (a) sample size: 100, 200, and 500; (b) number of indicators per latent variable: 3 versus 5; (c) reliability per set of indicators: .7 versus .9; (d) and 16 different path combinations for the indirect effect (α = 0, .14, .39, or .59; and β = 0, .14, .39, or .59). Simulations were performed using a WestGrid cluster of 1680 3.06 GHz Intel Xeon processors running R and OpenMx. Results based on 1,000 replications per cell and 2,000 resamples per bootstrap method indicated that the BC and BCa bootstrap methods have inflated Type I error rates. Likelihood-based confidence intervals and the PC bootstrap emerged as methods that adequately control Type I error and have good coverage rates.
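
    For reference, the percentile bootstrap of an indirect effect a*b can be sketched with observed (non-latent) variables, a simplification of the latent-variable setting studied here:

        import numpy as np

        def percentile_ci_indirect(x, m, y, n_boot=2000, alpha=0.05, seed=1):
            """Percentile bootstrap CI for a*b in the chain x -> m -> y."""
            rng = np.random.default_rng(seed)
            n, est = len(x), []
            for _ in range(n_boot):
                i = rng.integers(0, n, n)                   # resample cases
                a = np.polyfit(x[i], m[i], 1)[0]            # slope of m on x
                X = np.column_stack([np.ones(n), m[i], x[i]])
                b = np.linalg.lstsq(X, y[i], rcond=None)[0][1]  # m slope given x
                est.append(a * b)
            return tuple(np.percentile(est, [100 * alpha / 2,
                                             100 * (1 - alpha / 2)]))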

  13. Use of variability modes to evaluate AR4 climate models over the Euro-Atlantic region

    Energy Technology Data Exchange (ETDEWEB)

    Casado, M.J.; Pastor, M.A. [Agencia Estatal de Meteorologia (AEMET), Madrid (Spain)

    2012-01-15

    This paper analyzes the ability of the multi-model simulations from the Fourth Assessment Report of the Intergovernmental Panel on Climate Change (IPCC) to simulate the main leading modes of variability over the Euro-Atlantic region in winter: the North-Atlantic Oscillation (NAO), the Scandinavian mode (SCAND), the East/Atlantic Oscillation (EA) and the East Atlantic/Western Russia mode (EA/WR). These modes of variability have been evaluated both spatially, by analyzing the intensity and location of their anomaly centres, as well as temporally, by focusing on the probability density functions and e-folding time scales. The choice of variability modes as a tool for climate model assessment can be justified by the fact that modes of variability determine local climatic conditions and their likely change may have important implications for future climate changes. It is found that all the models considered are able to simulate reasonably well these four variability modes, the SCAND being the mode which is best spatially simulated. From a temporal point of view the NAO and SCAND modes are the best simulated. UKMO-HadGEM1 and CGCM3.1(T63) are the models best at reproducing spatial characteristics, whereas CCSM3 and CGCM3.1(T63) are the best ones with regard to the temporal features. GISS-AOM is the model showing the worst performance, in terms of both spatial and temporal features. These results may bring new insight into the selection and use of specific models to simulate Euro-Atlantic climate, with some models being clearly more successful in simulating patterns of temporal and spatial variability than others. (orig.)
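
    Variability modes such as the NAO are conventionally extracted as the leading EOFs of an anomaly field; a bare-bones SVD sketch follows (preprocessing such as latitude weighting is omitted):

        import numpy as np

        def leading_modes(anomalies, k=4):
            """Leading EOFs of a (time x space) anomaly matrix."""
            a = anomalies - anomalies.mean(axis=0)    # remove time mean
            u, s, vt = np.linalg.svd(a, full_matrices=False)
            eofs = vt[:k]                             # spatial patterns
            pcs = u[:, :k] * s[:k]                    # time series of each mode
            explained = s[:k] ** 2 / np.sum(s ** 2)   # explained variance
            return eofs, pcs, explained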

  14. Flight Dynamic Model Exchange using XML

    Science.gov (United States)

    Jackson, E. Bruce; Hildreth, Bruce L.

    2002-01-01

    The AIAA Modeling and Simulation Technical Committee has worked for several years to develop a standard by which the information needed to develop physics-based models of aircraft can be specified. The purpose of this standard is to provide a well-defined set of information, definitions, data tables and axis systems so that cooperating organizations can transfer a model from one simulation facility to another with maximum efficiency. This paper proposes using an application of the eXtensible Markup Language (XML) to implement the AIAA simulation standard. The motivation and justification for using a standard such as XML is discussed. Necessary data elements to be supported are outlined. An example of an aerodynamic model as an XML file is given. This example includes definition of independent and dependent variables for function tables, definition of key variables used to define the model, and axis systems used. The final steps necessary for implementation of the standard are presented. Software to take an XML-defined model and import/export it to/from a given simulation facility is discussed, but not demonstrated. That would be the next step in final implementation of standards for physics-based aircraft dynamic models.
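
    A toy illustration of building such a function table in XML from Python follows; the tag and attribute names are hypothetical, not the AIAA standard's actual schema:

        import xml.etree.ElementTree as ET

        # Hypothetical schema: one table mapping angle of attack to lift coefficient.
        model = ET.Element("aerodynamicModel", name="exampleLiftModel",
                           axisSystem="body")
        ET.SubElement(model, "independentVariable", name="angleOfAttack", units="deg")
        ET.SubElement(model, "dependentVariable", name="liftCoefficient", units="nd")

        table = ET.SubElement(model, "functionTable", output="liftCoefficient")
        for alpha, cl in [(0.0, 0.2), (4.0, 0.6), (8.0, 1.0)]:
            pt = ET.SubElement(table, "dataPoint")
            pt.set("angleOfAttack", str(alpha))
            pt.set("value", str(cl))

        print(ET.tostring(model, encoding="unicode"))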

  15. Nonlinear dynamic modeling of a simple flexible rotor system subjected to time-variable base motions

    Science.gov (United States)

    Chen, Liqiang; Wang, Jianjun; Han, Qinkai; Chu, Fulei

    2017-09-01

    Rotor systems carried in transportation systems or under seismic excitations are considered to have a moving base. To study the dynamic behavior of flexible rotor systems subjected to time-variable base motions, a general model is developed based on the finite element method and Lagrange's equation. Two groups of Euler angles are defined to describe the rotation of the rotor with respect to the base and that of the base with respect to the ground. It is found that the base rotations cause nonlinearities in the model. To verify the proposed model, a novel test rig which can simulate base angular movement is designed. Dynamic experiments on a flexible rotor-bearing system with base angular motions are carried out. Based upon these, numerical simulations are conducted to further study the dynamic response of the flexible rotor under harmonic angular base motions. The effects of base angular amplitude, rotating speed and base frequency on response behaviors are discussed by means of FFT, waterfall and frequency response curves and orbits of the rotor. The FFT and waterfall plots of the disk horizontal and vertical vibrations are marked with multiples of the base frequency and sum and difference tones of the rotating frequency and the base frequency. Their amplitudes increase remarkably when they meet the whirling frequencies of the rotor system.

  16. Definably compact groups definable in real closed fields. I

    OpenAIRE

    Barriga, Eliana

    2017-01-01

    We study definably compact definably connected groups definable in a sufficiently saturated real closed field $R$. We introduce the notion of group-generic point for $\bigvee$-definable groups and show the existence of group-generic points for definably compact groups definable in a sufficiently saturated o-minimal expansion of a real closed field. We use this notion along with some properties of generic sets to prove that for every definably compact definably connected group $G$ definable in...

  17. Latent variable models an introduction to factor, path, and structural equation analysis

    CERN Document Server

    Loehlin, John C

    2004-01-01

    This fourth edition introduces multiple-latent variable models by utilizing path diagrams to explain the underlying relationships in the models. The book is intended for advanced students and researchers in the areas of social, educational, clinical, and industrial research.

  18. Beyond a climate-centric view of plant distribution: edaphic variables add value to distribution models.

    Science.gov (United States)

    Beauregard, Frieda; de Blois, Sylvie

    2014-01-01

    Both climatic and edaphic conditions determine plant distribution; however, many species distribution models do not include edaphic variables, especially over large geographical extents. Using an exceptional database of vegetation plots (n = 4839) covering an extent of ∼55,000 km2, we tested whether the inclusion of fine-scale edaphic variables would improve model predictions of plant distribution compared to models using only climate predictors. We also tested how well these edaphic variables could predict distribution on their own, to evaluate the assumption that at large extents, distribution is governed largely by climate. We also hypothesized that the relative contribution of edaphic and climatic data would vary among species depending on their growth forms and biogeographical attributes within the study area. We modelled 128 native plant species from diverse taxa using four statistical model types and three sets of abiotic predictors: climate, edaphic, and edaphic-climate. Model predictive accuracy and variable importance were compared among these models and for species' characteristics describing growth form, range boundaries within the study area, and prevalence. For many species both the climate-only and edaphic-only models performed well; however, the edaphic-climate models generally performed best. The three sets of predictors differed in the spatial information provided about habitat suitability, with climate models able to distinguish range edges, but edaphic models able to better distinguish within-range variation. Model predictive accuracy was generally lower for species without a range boundary within the study area and for common species, but these effects were buffered by including both edaphic and climatic predictors. The relative importance of edaphic and climatic variables varied with growth forms, with trees being more related to climate whereas lower growth forms were more related to edaphic conditions. Our study identifies the potential of edaphic variables to add value to plant distribution models.

  19. Short-term to seasonal variability in factors driving primary productivity in a shallow estuary: Implications for modeling production

    Science.gov (United States)

    Canion, Andy; MacIntyre, Hugh L.; Phipps, Scott

    2013-10-01

    The inputs of primary productivity models may be highly variable on short timescales (hourly to daily) in turbid estuaries, but modeling of productivity in these environments is often implemented with data collected over longer timescales. Daily, seasonal, and spatial variability in primary productivity model parameters: chlorophyll a concentration (Chla), the downwelling light attenuation coefficient (kd), and the photosynthesis-irradiance response parameters (PmChl, αChl) was characterized in Weeks Bay, a nitrogen-impacted shallow estuary in the northern Gulf of Mexico. Variability in primary productivity model parameters in response to environmental forcing, nutrients, and microalgal taxonomic marker pigments was analysed in monthly and short-term datasets. Microalgal biomass (as Chla) was strongly related to total phosphorus concentration on seasonal scales. Hourly data support wind-driven resuspension as a major source of short-term variability in Chla and light attenuation (kd). The empirical relationship between areal primary productivity and a combined variable of biomass and light attenuation showed that variability in the photosynthesis-irradiance response contributed little to the overall variability in primary productivity, and Chla alone could account for 53-86% of the variability in primary productivity. Efforts to model productivity in similar shallow systems with highly variable microalgal biomass may benefit the most by investing resources in improving spatial and temporal resolution of chlorophyll a measurements before increasing the complexity of models used in productivity modeling.

  20. Genuine tripartite entangled states with a local hidden-variable model

    International Nuclear Information System (INIS)

    Toth, Geza; Acin, Antonio

    2006-01-01

    We present a family of three-qubit quantum states with a basic local hidden-variable model. Any von Neumann measurement can be described by a local model for these states. We show that some of these states are genuinely tripartite entangled and also distillable. The generalization to larger dimensions or a higher number of parties is also discussed. As a by-product, we present symmetric extensions of two-qubit Werner states.

  1. A predictability study of Lorenz's 28-variable model as a dynamical system

    Science.gov (United States)

    Krishnamurthy, V.

    1993-01-01

    The dynamics of error growth in a two-layer nonlinear quasi-geostrophic model has been studied to gain an understanding of the mathematical theory of atmospheric predictability. The growth of random errors of varying initial magnitudes has been studied, and the relation between this classical approach and the concepts of the nonlinear dynamical systems theory has been explored. The local and global growths of random errors have been expressed partly in terms of the properties of an error ellipsoid and the Liapunov exponents determined by linear error dynamics. The local growth of small errors is initially governed by several modes of the evolving error ellipsoid but soon becomes dominated by the longest axis. The average global growth of small errors is exponential with a growth rate consistent with the largest Liapunov exponent. The duration of the exponential growth phase depends on the initial magnitude of the errors. The subsequent large errors undergo a nonlinear growth with a steadily decreasing growth rate and attain saturation that defines the limit of predictability. The degree of chaos and the largest Liapunov exponent show considerable variation with change in the forcing, which implies that the time variation in the external forcing can introduce variable character to the predictability.
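
    The exponential-growth phase and its saturation can be illustrated on a small chaotic system; the sketch below uses the Lorenz-63 equations as a stand-in for the 28-variable model, with a crude Euler integrator:

        import numpy as np

        def lorenz63(s, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
            x, y, z = s
            return np.array([sigma * (y - x), x * (rho - z) - y, x * y - beta * z])

        def separation(x0, eps=1e-8, dt=0.01, steps=3000):
            """Distance between a control and a slightly perturbed trajectory."""
            a, b = x0.copy(), x0 + np.array([eps, 0.0, 0.0])
            sep = []
            for _ in range(steps):
                a = a + dt * lorenz63(a)     # forward Euler, for simplicity
                b = b + dt * lorenz63(b)
                sep.append(np.linalg.norm(a - b))
            return np.array(sep)

        # The early slope of log(separation) approximates the largest Liapunov
        # exponent; the curve later saturates at the predictability limit.
        sep = separation(np.array([1.0, 1.0, 1.0]))
        lam = np.polyfit(np.arange(500) * 0.01, np.log(sep[:500]), 1)[0]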

  2. Future Scenarios for Software-Defined Metro and Access Networks and Software-Defined Photonics

    Directory of Open Access Journals (Sweden)

    Tommaso Muciaccia

    2017-01-01

    Full Text Available In recent years, architectures, devices, and components in telecommunication networks have been challenged by evolutionary and revolutionary factors which are drastically changing traffic features. Most of these changes imply the need for greater reconfigurability and programmability not only in data centers and core networks, but also in the metro-access segment. In a wide variety of contexts, this necessity has been addressed by the proposed introduction of the innovative paradigm of software-defined networks (SDNs). Several solutions inspired by the SDN model have recently been proposed also for metro and access networks, where the adoption of a new generation of software-defined reconfigurable integrated photonic devices is highly desirable. In this paper, we review possible future application scenarios for software-defined metro and access networks and software-defined photonics (SDP), on the basis of analytics, statistics, and surveys. This work describes the reasons underpinning the presented radical change of paradigm and summarizes the most significant solutions proposed in the literature, with a specific emphasis on physical-layer reconfigurable networks and a focus on both architectures and devices.

  3. Reduction of the Random Variables of the Turbulent Wind Field

    DEFF Research Database (Denmark)

    Sichani, Mahdi Teimouri; Nielsen, Søren R.K.

    2012-01-01

    Applicability of the Probability Density Evolution Method (PDEM) for realizing the evolution of the probability density for wind turbines has rather strict bounds on the basic number of random variables involved in the model. The efficiency of most of the Advanced Monte Carlo (AMC) methods, i.e. Importance Sampling (IS) or Subset Simulation (SS), deteriorates on problems with many random variables. The problem with PDEM is that a multidimensional integral has to be carried out over the space defined by the random variables of the system. The numerical procedure requires discretization of the integral domain; this becomes increasingly difficult as the dimensions of the integral domain increase. On the other hand, the efficiency of the AMC methods is closely dependent on the design points of the problem. The presence of many random variables may increase the number of design points, hence affecting the efficiency of these methods.

  4. Modelling food-web mediated effects of hydrological variability and environmental flows.

    Science.gov (United States)

    Robson, Barbara J; Lester, Rebecca E; Baldwin, Darren S; Bond, Nicholas R; Drouart, Romain; Rolls, Robert J; Ryder, Darren S; Thompson, Ross M

    2017-11-01

    Environmental flows are designed to enhance aquatic ecosystems through a variety of mechanisms; however, to date most attention has been paid to the effects on habitat quality and life-history triggers, especially for fish and vegetation. The effects of environmental flows on food webs have so far received little attention, despite food-web thinking being fundamental to understanding of river ecosystems. Understanding environmental flows in a food-web context can help scientists and policy-makers better understand and manage outcomes of flow alteration and restoration. In this paper, we consider mechanisms by which flow variability can influence and alter food webs, and place these within a conceptual and numerical modelling framework. We also review the strengths and weaknesses of various approaches to modelling the effects of hydrological management on food webs. Although classic bioenergetic models such as Ecopath with Ecosim capture many of the key features required, other approaches, such as biogeochemical ecosystem modelling, end-to-end modelling, population dynamic models, individual-based models, graph theory models, and stock assessment models are also relevant. In many cases, a combination of approaches will be useful. We identify current challenges and new directions in modelling food-web responses to hydrological variability and environmental flow management. These include better integration of food-web and hydraulic models, taking physiologically-based approaches to food quality effects, and better representation of variations in space and time that may create ecosystem control points. Crown Copyright © 2017. Published by Elsevier Ltd. All rights reserved.

  5. Childhood malnutrition in Egypt using geoadditive Gaussian and latent variable models.

    Science.gov (United States)

    Khatab, Khaled

    2010-04-01

    Major progress has been made over the last 30 years in reducing the prevalence of malnutrition amongst children less than 5 years of age in developing countries. However, approximately 27% of children under the age of 5 in these countries are still malnourished. This work focuses on the childhood malnutrition in one of the biggest developing countries, Egypt. This study examined the association between bio-demographic and socioeconomic determinants and the malnutrition problem in children less than 5 years of age using the 2003 Demographic and Health survey data for Egypt. In the first step, we use separate geoadditive Gaussian models with the continuous response variables stunting (height-for-age), underweight (weight-for-age), and wasting (weight-for-height) as indicators of nutritional status in our case study. In a second step, based on the results of the first step, we apply the geoadditive Gaussian latent variable model for continuous indicators in which the 3 measurements of the malnutrition status of children are assumed as indicators for the latent variable "nutritional status".

  6. Exploring structural variability in X-ray crystallographic models using protein local optimization by torsion-angle sampling

    International Nuclear Information System (INIS)

    Knight, Jennifer L.; Zhou, Zhiyong; Gallicchio, Emilio; Himmel, Daniel M.; Friesner, Richard A.; Arnold, Eddy; Levy, Ronald M.

    2008-01-01

    Torsion-angle sampling, as implemented in the Protein Local Optimization Program (PLOP), is used to generate multiple structurally variable single-conformer models which are in good agreement with X-ray data. An ensemble-refinement approach to differentiate between positional uncertainty and conformational heterogeneity is proposed. Modeling structural variability is critical for understanding protein function and for modeling reliable targets for in silico docking experiments. Because of the time-intensive nature of manual X-ray crystallographic refinement, automated refinement methods that thoroughly explore conformational space are essential for the systematic construction of structurally variable models. Using five proteins spanning resolutions of 1.0–2.8 Å, it is demonstrated how torsion-angle sampling of backbone and side-chain libraries, with filtering against both the chemical energy (using a modern effective potential) and the electron density, coupled with minimization of a reciprocal-space X-ray target function, can generate multiple structurally variable models which fit the X-ray data well. Torsion-angle sampling as implemented in the Protein Local Optimization Program (PLOP) has been used in this work. Models with the lowest Rfree values are obtained when electrostatic and implicit solvation terms are included in the effective potential. HIV-1 protease, calmodulin and SUMO-conjugating enzyme illustrate how variability in the ensemble of structures captures structural variability that is observed across multiple crystal structures and is linked to functional flexibility at hinge regions and binding interfaces. An ensemble-refinement procedure is proposed to differentiate between variability that is a consequence of physical conformational heterogeneity and that which reflects uncertainty in the atomic coordinates.

  7. Variable recruitment fluidic artificial muscles: modeling and experiments

    International Nuclear Information System (INIS)

    Bryant, Matthew; Meller, Michael A; Garcia, Ephrahim

    2014-01-01

    We investigate taking advantage of the lightweight, compliant nature of fluidic artificial muscles to create variable recruitment actuators in the form of artificial muscle bundles. Several actuator elements at different diameter scales are packaged to act as a single actuator device. The actuator elements of the bundle can be connected to the fluidic control circuit so that different groups of actuator elements, much like individual muscle fibers, can be activated independently depending on the required force output and motion. This novel actuation concept allows us to save energy by effectively impedance matching the active size of the actuators on the fly based on the instantaneous required load. This design also allows a single bundled actuator to operate in substantially different force regimes, which could be valuable for robots that need to perform a wide variety of tasks and interact safely with humans. This paper proposes, models and analyzes the actuation efficiency of this actuator concept. The analysis shows that variable recruitment operation can create an actuator that reduces throttling valve losses to operate more efficiently over a broader range of its force–strain operating space. We also present preliminary results of the design, fabrication and experimental characterization of three such bioinspired variable recruitment actuator prototypes. (paper)

  8. Using Random Forests to Select Optimal Input Variables for Short-Term Wind Speed Forecasting Models

    Directory of Open Access Journals (Sweden)

    Hui Wang

    2017-10-01

    Full Text Available Achieving relatively high-accuracy short-term wind speed forecasts is a precondition for the construction and grid-connected operation of wind power forecasting systems for wind farms. Currently, most research is focused on the structure of forecasting models and does not consider the selection of input variables, which can have significant impacts on forecasting performance. This paper presents an input variable selection method for wind speed forecasting models. The candidate input variables for various lead periods are selected and random forests (RF) is employed to evaluate the importance of all variables as features. The feature subset with the best evaluation performance is selected as the optimal feature set. Then, a kernel-based extreme learning machine is constructed to evaluate the performance of input variable selection based on RF. The results of the case study show that by removing uncorrelated and redundant features, RF effectively extracts the most strongly correlated set of features from the candidate input variables. By finding the optimal feature combination to represent the original information, RF simplifies the structure of the wind speed forecasting model, shortens the training time required, and substantially improves the model’s accuracy and generalization ability, demonstrating that the input variables selected by RF are effective.
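
    A minimal sketch of RF-based input selection with scikit-learn on synthetic data; the candidate features and the subset size stand in for the paper's lagged wind-speed and meteorological variables:

        import numpy as np
        from sklearn.ensemble import RandomForestRegressor

        rng = np.random.default_rng(0)
        X = rng.normal(size=(1000, 12))      # 12 hypothetical candidate inputs
        y = 0.8 * X[:, 0] + 0.5 * X[:, 3] + rng.normal(scale=0.3, size=1000)

        rf = RandomForestRegressor(n_estimators=500, random_state=0)
        rf.fit(X, y)

        # Rank candidate inputs by importance and keep the strongest subset;
        # the subset size would be chosen by validation error in practice.
        order = np.argsort(rf.feature_importances_)[::-1]
        selected = order[:4]
        print("selected feature indices:", selected)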

  9. A calibration hierarchy for risk models was defined: from utopia to empirical data.

    Science.gov (United States)

    Van Calster, Ben; Nieboer, Daan; Vergouwe, Yvonne; De Cock, Bavo; Pencina, Michael J; Steyerberg, Ewout W

    2016-06-01

    Calibrated risk models are vital for valid decision support. We define four levels of calibration and describe the implications for model development and external validation of predictions. We present results based on simulated data sets. A common definition of calibration is "having an event rate of R% among patients with a predicted risk of R%," which we refer to as "moderate calibration." Weaker forms of calibration only require the average predicted risk (mean calibration) or the average prediction effects (weak calibration) to be correct. "Strong calibration" requires that the event rate equals the predicted risk for every covariate pattern. This implies that the model is fully correct for the validation setting. We argue that this is unrealistic: the model type may be incorrect, the linear predictor is only asymptotically unbiased, and all nonlinear and interaction effects would have to be correctly modeled. In addition, we prove that moderate calibration guarantees nonharmful decision making. Finally, results indicate that a flexible assessment of calibration in small validation data sets is problematic. Strong calibration is desirable for individualized decision support but unrealistic and counterproductive, as it stimulates the development of overly complex models. Model development and external validation should focus on moderate calibration. Copyright © 2016 Elsevier Inc. All rights reserved.
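
    Mean and weak calibration can be checked from the calibration intercept and slope of the predicted risks; a sketch using statsmodels follows (ideal values are intercept 0 and slope 1):

        import numpy as np
        import statsmodels.api as sm

        def calibration_intercept_slope(y, p):
            """Calibration intercept (slope fixed at 1 via an offset) and
            calibration slope for binary outcomes y and predicted risks p,
            with p strictly inside (0, 1)."""
            lp = np.log(p / (1 - p))                         # logit of predictions
            slope_fit = sm.GLM(y, sm.add_constant(lp),
                               family=sm.families.Binomial()).fit()
            intercept_fit = sm.GLM(y, np.ones((len(y), 1)),
                                   family=sm.families.Binomial(),
                                   offset=lp).fit()
            return intercept_fit.params[0], slope_fit.params[1]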

  10. A Variable Stiffness Analysis Model for Large Complex Thin-Walled Guide Rail

    Directory of Open Access Journals (Sweden)

    Wang Xiaolong

    2016-01-01

    Full Text Available Large complex thin-walled guide rails have complicated structures and nonuniform, low rigidity. Traditional cutting simulations are time-consuming due to the huge computation involved, especially for large workpieces. To solve these problems, a more efficient variable stiffness analysis model is proposed, which can obtain quantitative stiffness values over the machining surface. By applying simulated cutting forces at sampling points using the finite element analysis software ABAQUS, the single-direction variable stiffness rule can be obtained. A variable stiffness matrix is then proposed by analyzing the multi-directional coupled variable stiffness rules. Combined with the cutting force values in the three directions, the reasonableness of existing process parameters can be verified and optimized cutting parameters can be designed.

  11. VITAMIN A DEFICIENCY IN BRAZILIAN CHILDREN AND ASSOCIATED VARIABLES.

    Science.gov (United States)

    Lima, Daniela Braga; Damiani, Lucas Petri; Fujimori, Elizabeth

    2018-03-29

    To analyze the variables associated with vitamin A deficiency (VAD) in Brazilian children aged 6 to 59 months, considering a hierarchical model of determination. This is part of the National Survey on Demography and Health of Women and Children, held in 2006. Data analysis included 3,417 children aged from six to 59 months with retinol data. Vitamin A deficiency was defined by low serum retinol concentration. Poisson regression analyses were performed, with significance level set at 5%, using a hierarchical model of determination that considered three conglomerates of variables: those linked to the structural processes of the community (socioeconomic-demographic variables); to the immediate environment of the child (maternal variables, safety and food consumption); and to individual features (biological characteristics of the child). Data were expressed as prevalence ratios (PR). After adjustment for confounding variables, the following remained associated with VAD: living in the Southeast [PR=1.59; 95%CI 1.19-2.17] and Northeast [PR=1.56; 95%CI 1.16-2.15]; in an urban area [PR=1.31; 95%CI 1.02-1.72]; and mother aged ≥36 years [PR=2.28; 95%CI 1.37-3.98]. The consumption of meat at least once in the last seven days was a protective factor [PR=0.24; 95%CI 0.13-0.42]. The main variables associated with VAD in the country are related to structural processes of society and to the immediate, but not individual, environment of the child.

  12. Robust Model Predictive Control of a Nonlinear System with Known Scheduling Variable and Uncertain Gain

    DEFF Research Database (Denmark)

    Mirzaei, Mahmood; Poulsen, Niels Kjølstad; Niemann, Hans Henrik

    2012-01-01

    Robust model predictive control (RMPC) of a class of nonlinear systems is considered in this paper. We use a Linear Parameter Varying (LPV) model of the nonlinear system. By taking advantage of having future values of the scheduling variable, we simplify state prediction. Because of the special structure of the problem, uncertainty is only in the B matrix (gain) of the state-space model. Therefore, by taking advantage of this structure, we formulate a tractable minimax optimization problem to solve the robust model predictive control problem. A wind turbine is chosen as the case study and we choose wind speed as the scheduling variable. Wind speed is measurable ahead of the turbine; therefore the scheduling variable is known for the entire prediction horizon.

  13. The use of vector bootstrapping to improve variable selection precision in Lasso models

    NARCIS (Netherlands)

    Laurin, C.; Boomsma, D.I.; Lubke, G.H.

    2016-01-01

    The Lasso is a shrinkage regression method that is widely used for variable selection in statistical genetics. Commonly, K-fold cross-validation is used to fit a Lasso model. This is sometimes followed by using bootstrap confidence intervals to improve precision in the resulting variable selections.
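
    For intuition, a plain case-resampling bootstrap of Lasso selections is sketched below; note this is not the vector (score) bootstrap studied in the paper, only a simpler stability check in the same spirit:

        import numpy as np
        from sklearn.linear_model import LassoCV

        def selection_frequencies(X, y, n_boot=200, seed=0):
            """How often each variable is selected by a cross-validated
            Lasso across bootstrap resamples of the cases."""
            rng = np.random.default_rng(seed)
            counts = np.zeros(X.shape[1])
            for _ in range(n_boot):
                i = rng.integers(0, len(y), len(y))
                counts += LassoCV(cv=5).fit(X[i], y[i]).coef_ != 0
            return counts / n_boot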

  14. A diffusion decision model analysis of evidence variability in the lexical decision task.

    Science.gov (United States)

    Tillman, Gabriel; Osth, Adam F; van Ravenzwaaij, Don; Heathcote, Andrew

    2017-12-01

    The lexical-decision task is among the most commonly used paradigms in psycholinguistics. In both the signal-detection theory and Diffusion Decision Model (DDM; Ratcliff, Gomez, & McKoon, Psychological Review, 111, 159-182, 2004) frameworks, lexical decisions are based on a continuous source of word-likeness evidence for both words and non-words. The Retrieving Effectively from Memory model of Lexical-Decision (REM-LD; Wagenmakers et al., Cognitive Psychology, 48(3), 332-367, 2004) provides a comprehensive explanation of lexical-decision data and makes the prediction that word-likeness evidence is more variable for words than non-words and that higher frequency words are more variable than lower frequency words. To test these predictions, we analyzed five lexical-decision data sets with the DDM. For all data sets, drift-rate variability changed across word-frequency and non-word conditions. For the most part, REM-LD's predictions about the ordering of evidence variability across stimuli in the lexical-decision task were confirmed.

  15. Kinetic Modeling of Corn Fermentation with S. cerevisiae Using a Variable Temperature Strategy

    Directory of Open Access Journals (Sweden)

    Augusto C. M. Souza

    2018-04-01

    Full Text Available While fermentation is usually done at a fixed temperature, in this study, the effect of having a controlled variable temperature was analyzed. A nonlinear system was used to model batch ethanol fermentation, using corn as substrate and the yeast Saccharomyces cerevisiae, at five different fixed and controlled variable temperatures. The lower temperatures presented higher ethanol yields but took a longer time to reach equilibrium. Higher temperatures had higher initial growth rates, but the decay of yeast cells was faster compared to the lower temperatures. However, in the controlled variable temperature model, the temperature decreased with time from an initial value of 40 °C. When analyzing a time window of 60 h, the ethanol production increased 20% compared to the batch with the highest temperature; however, the yield was still 12% lower compared to the 20 °C batch. When the 24 h simulation was analyzed, the controlled model had a higher ethanol concentration compared to both fixed temperature batches.

  16. Kinetic Modeling of Corn Fermentation with S. cerevisiae Using a Variable Temperature Strategy.

    Science.gov (United States)

    Souza, Augusto C M; Mousaviraad, Mohammad; Mapoka, Kenneth O M; Rosentrater, Kurt A

    2018-04-24

    While fermentation is usually done at a fixed temperature, in this study, the effect of having a controlled variable temperature was analyzed. A nonlinear system was used to model batch ethanol fermentation, using corn as substrate and the yeast Saccharomyces cerevisiae, at five different fixed and controlled variable temperatures. The lower temperatures presented higher ethanol yields but took a longer time to reach equilibrium. Higher temperatures had higher initial growth rates, but the decay of yeast cells was faster compared to the lower temperatures. However, in the controlled variable temperature model, the temperature decreased with time from an initial value of 40 °C. When analyzing a time window of 60 h, the ethanol production increased 20% compared to the batch with the highest temperature; however, the yield was still 12% lower compared to the 20 °C batch. When the 24 h simulation was analyzed, the controlled model had a higher ethanol concentration compared to both fixed temperature batches.

  17. Defining Tiger Parenting in Chinese Americans.

    Science.gov (United States)

    Kim, Su Yeong

    2013-09-01

    "Tiger" parenting, as described by Amy Chua [2011], has instigated scholarly discourse on this phenomenon and its possible effects on families. Our eight-year longitudinal study, published in the Asian American Journal of Psychology [Kim, Wang, Orozco-Lapray, Shen, & Murtuza, 2013b], demonstrates that tiger parenting is not a common parenting profile in a sample of 444 Chinese American families. Tiger parenting also does not relate to superior academic performance in children. In fact, the best developmental outcomes were found among children of supportive parents. We examine the complexities around defining tiger parenting by reviewing classical literature on parenting styles and scholarship on Asian American parenting, along with Amy Chua's own description of her parenting method, to develop, define, and categorize variability in parenting in a sample of Chinese American families. We also provide evidence that supportive parenting is important for the optimal development of Chinese American adolescents.

  18. An accurate fatigue damage model for welded joints subjected to variable amplitude loading

    Science.gov (United States)

    Aeran, A.; Siriwardane, S. C.; Mikkelsen, O.; Langen, I.

    2017-12-01

    Researchers in the past have proposed several fatigue damage models to overcome the shortcomings of the commonly used Miner's rule. However, requirements for material parameters or S-N curve modifications restrict their practical application. Also, applications of most of these models under variable amplitude loading conditions have not been found. To overcome these restrictions, a new fatigue damage model is proposed in this paper. The proposed model can be applied by practicing engineers using only the S-N curve given in the standard codes of practice. The model is verified against experimentally derived damage evolution curves for C45 and 16Mn steels and gives better agreement compared to previous models. The model-predicted fatigue lives are also in better correlation with experimental results compared to previous models, as shown in earlier published work by the authors. The proposed model is applied to welded joints subjected to variable amplitude loadings in this paper. The model gives around 8% shorter fatigue lives compared to the Miner's rule given in Eurocode. This shows the importance of applying accurate fatigue damage models to welded joints.

  19. FinFET centric variability-aware compact model extraction and generation technology supporting DTCO

    OpenAIRE

    Wang, Xingsheng; Cheng, Binjie; Reid, David; Pender, Andrew; Asenov, Plamen; Millar, Campbell; Asenov, Asen

    2015-01-01

    In this paper, we present a FinFET-focused variability-aware compact model (CM) extraction and generation technology supporting design-technology co-optimization. The 14-nm CMOS technology generation silicon-on-insulator FinFETs are used as testbed transistors to illustrate our approach. The TCAD simulations include long-range process-induced variability using a design-of-experiments approach and short-range purely statistical variability (mismatch). The CM extraction supports a hierarchical...

  20. Two-Layer Variable Infiltration Capacity Land Surface Representation for General Circulation Models

    Science.gov (United States)

    Xu, L.

    1994-01-01

    A simple two-layer variable infiltration capacity (VIC-2L) land surface model suitable for incorporation in general circulation models (GCMs) is described. The model consists of a two-layer characterization of the soil within a GCM grid cell, and uses an aerodynamic representation of latent and sensible heat fluxes at the land surface. The effects of GCM spatial subgrid variability of soil moisture and a hydrologically realistic runoff mechanism are represented in the soil layers. The model was tested using long-term hydrologic and climatological data for Kings Creek, Kansas to estimate and validate the hydrological parameters. Surface flux data from three First International Satellite Land Surface Climatology Project Field Experiment (FIFE) intensive field campaigns in the summer and fall of 1987 in central Kansas, and from the Anglo-Brazilian Amazonian Climate Observation Study (ABRACOS) in Brazil, were used to validate the model-simulated surface energy fluxes and surface temperature.

  1. MODELING OF RELATIONSHIP BETWEEN GROUNDWATER FLOW AND OTHER METEOROLOGICAL VARIABLES USING FUZZY LOGIC

    Directory of Open Access Journals (Sweden)

    Şaban YURTÇU

    2006-02-01

    Full Text Available In this study, the effect of rainfall, flow and evaporation as independent variables on the change of groundwater levels as the dependent variable was modeled using fuzzy logic (FL). In the study, a total of 396 values taken from six observation stations belonging to the Afyon lower basin of Akarçay between 1977 and 1989 were used. Using the monthly average values of the stations, the change of groundwater level was modeled by FL. It is observed that the results obtained from FL and the observations are compatible with each other. This shows that FL modeling can be used to estimate groundwater levels from the appropriate meteorological values.

  2. An oilspill trajectory analysis model with a variable wind deflection angle

    Science.gov (United States)

    Samuels, W.B.; Huang, N.E.; Amstutz, D.E.

    1982-01-01

    The oilspill trajectory movement algorithm consists of a vector sum of the surface drift component due to wind and the surface current component. In the U.S. Geological Survey oilspill trajectory analysis model, the surface drift component is assumed to be 3.5% of the wind speed and is rotated 20 degrees clockwise to account for Coriolis effects in the Northern Hemisphere. Field and laboratory data suggest, however, that the deflection angle of the surface drift current can be highly variable. An empirical formula, based on field observations and theoretical arguments relating wind speed to deflection angle, was used to calculate a new deflection angle at each time step in the model. Comparisons of oilspill contact probabilities to coastal areas calculated for constant and variable deflection angles showed that the model is insensitive to this changing angle at low wind speeds. At high wind speeds, some statistically significant differences in contact probabilities did appear. © 1982.
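
    One integration step of such a trajectory model can be sketched as follows; the deflection-angle formula is a hypothetical stand-in, since the paper's empirical expression is not reproduced here:

        import numpy as np

        def deflection_angle(wind_speed):
            # Hypothetical: the angle shrinks as wind speed grows.
            return 25.0 * np.exp(-0.04 * wind_speed)

        def drift_step(pos, wind, current, dt):
            """Advance a spill: 3.5% of the wind, rotated clockwise by the
            (wind-speed-dependent) deflection angle, plus the surface current."""
            theta = -np.radians(deflection_angle(np.linalg.norm(wind)))
            rot = np.array([[np.cos(theta), -np.sin(theta)],
                            [np.sin(theta),  np.cos(theta)]])
            return pos + dt * (0.035 * rot @ wind + current)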

  3. Developing a stochastic parameterization to incorporate plant trait variability into ecohydrologic modeling

    Science.gov (United States)

    Liu, S.; Ng, G. H. C.

    2017-12-01

    The global plant database has revealed that plant traits can vary more within a plant functional type (PFT) than among different PFTs, indicating that the current paradigm in ecohydrological models of specifying fixed parameters based solely on plant functional type (PFT) could potentially bias simulations. Although some recent modeling studies have attempted to incorporate this observed plant trait variability, many failed to consider uncertainties due to sparse global observations, or they omitted spatial and/or temporal variability in the traits. Here we present a stochastic parameterization for prognostic vegetation simulations that are stochastic in time and space in order to represent plant trait plasticity, the process by which trait differences arise. We have developed the new PFT parameterization within the Community Land Model 4.5 (CLM 4.5) and tested the method for a desert shrubland watershed in the Mojave Desert, where fixed parameterizations cannot represent acclimation to desert conditions. Spatiotemporally correlated plant trait parameters were first generated based on TRY statistics and were then used to implement ensemble runs for the study area. The new PFT parameterization was then further conditioned on field measurements of soil moisture and remotely sensed observations of leaf-area-index to constrain uncertainties in the sparse global database. Our preliminary results show that incorporating data-conditioned, variable PFT parameterizations strongly affects simulated soil moisture and water fluxes, compared with default simulations. The results also provide new insights about correlations among plant trait parameters and between traits and environmental conditions in the desert shrubland watershed. Our proposed stochastic PFT parameterization method for ecohydrological models has great potential in advancing our understanding of how terrestrial ecosystems are predicted to adapt to variable environmental conditions.
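
    A minimal sketch of the kind of spatially correlated parameter sampling described above: a trait field is drawn from a multivariate normal with an exponential correlation model via Cholesky factorization. The grid, decorrelation length and trait statistics are hypothetical stand-ins, not values from TRY or the CLM 4.5 setup.

```python
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical 1-D transect of 50 grid cells spaced 100 m apart
x = np.arange(50) * 100.0
dist = np.abs(x[:, None] - x[None, :])

# Exponential spatial correlation with a 1 km decorrelation length
corr = np.exp(-dist / 1000.0)

# Hypothetical trait statistics (e.g., specific leaf area)
mean_sla, sd_sla = 10.0, 2.0
cov = sd_sla**2 * corr

# One ensemble member: a spatially correlated trait field via Cholesky sampling
L = np.linalg.cholesky(cov + 1e-10 * np.eye(len(x)))
sla_field = mean_sla + L @ rng.standard_normal(len(x))
print(sla_field[:5])
```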

  4. College quality and hourly wages: evidence from the self-revelation model, sibling models and instrumental variables.

    Science.gov (United States)

    Borgen, Nicolai T

    2014-11-01

    This paper addresses the recent discussion on confounding in the returns to college quality literature using the Norwegian case. The main advantage of studying Norway is the quality of the data. Norwegian administrative data provide information on college applications, family relations and a rich set of control variables for all Norwegian citizens applying to college between 1997 and 2004 (N = 141,319) and their succeeding wages between 2003 and 2010 (676,079 person-year observations). With these data, this paper uses a subset of the models that have rendered mixed findings in the literature in order to investigate to what extent confounding biases the returns to college quality. I compare estimates obtained using standard regression models to estimates obtained using the self-revelation model of Dale and Krueger (2002), a sibling fixed effects model and the instrumental variable model used by Long (2008). Using these methods, I consistently find returns to college quality that increase over the course of students' work careers, with positive returns appearing only later in those careers. I conclude that the standard regression estimate provides a reasonable estimate of the returns to college quality. Copyright © 2014 Elsevier Inc. All rights reserved.

  5. A method for defining value in healthcare using cancer care as a model.

    Science.gov (United States)

    Feeley, Thomas W; Fly, Helen Shafer; Albright, Heidi; Walters, Ronald; Burke, Thomas W

    2010-01-01

    Value-based healthcare delivery is being discussed in a variety of healthcare forums. This concept is of great importance in the reform of the US healthcare delivery system. Defining and applying the principles of value-based competition in healthcare delivery models will permit future evaluation of various delivery applications. However, there are relatively few examples of how to apply these principles to an existing care delivery system. In this article, we describe an approach for assessing the value created when treating cancer patients in a multidisciplinary care setting within a comprehensive cancer center. We describe the analysis of a multidisciplinary care center that treats head and neck cancers, and we examine how well this center fits Porter and Teisberg's (2006) concept of value-based competition, based on an analysis of its results. Using the relationship between outcomes and costs as the definition of value, we developed a methodology to analyze proposed outcomes for a population of patients treated using a multidisciplinary approach, and we matched those outcomes to the costs of the care provided. We present this work as a model for defining value for a subset of patients undergoing active treatment. The method can be applied not only to head and neck treatments, but to other modalities as well. Public reporting of this type of data for a variety of conditions can lead to improved competition in the healthcare marketplace and, as a result, improve outcomes and decrease health expenditures.

  6. BN-FLEMOps pluvial - A probabilistic multi-variable loss estimation model for pluvial floods

    Science.gov (United States)

    Roezer, V.; Kreibich, H.; Schroeter, K.; Doss-Gollin, J.; Lall, U.; Merz, B.

    2017-12-01

    Pluvial flood events, such as in Copenhagen (Denmark) in 2011, Beijing (China) in 2012 or Houston (USA) in 2016, have caused severe losses to urban dwellings in recent years. These floods are caused by storm events with high rainfall rates well above the design levels of urban drainage systems, which lead to inundation of streets and buildings. A projected increase in frequency and intensity of heavy rainfall events in many areas and ongoing urbanization may increase pluvial flood losses in the future. For an efficient risk assessment and adaptation to pluvial floods, a quantification of the flood risk is needed. Few loss models have been developed particularly for pluvial floods. These models usually use simple water-level- or rainfall-loss functions and come with very high uncertainties. To account for these uncertainties and improve the loss estimation, we present a probabilistic multi-variable loss estimation model for pluvial floods based on empirical data. The model was developed in a two-step process using a machine learning approach and a comprehensive database comprising 783 records of direct building and content damage of private households. The data were gathered through surveys after four different pluvial flood events in Germany between 2005 and 2014. In a first step, linear and non-linear machine learning algorithms, such as tree-based and penalized regression models, were used to identify the most important loss-influencing factors among a set of 55 candidate variables. These variables comprise hydrological and hydraulic aspects, early warning, precaution, building characteristics and the socio-economic status of the household. In a second step, the most important loss-influencing variables were used to derive a probabilistic multi-variable pluvial flood loss estimation model based on Bayesian Networks. Two different networks were tested: a score-based network learned from the data and a network based on expert knowledge. Loss predictions are made
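
    The first, screening step can be sketched on synthetic data as below, with Lasso and random-forest importance standing in for the penalized and tree-based algorithms named above. Apart from the record and candidate-variable counts, the data-generating process is invented for illustration.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.linear_model import LassoCV
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)

# Synthetic stand-in for the survey data: 783 records, 55 candidate predictors,
# of which only the first three actually drive the loss
X = rng.normal(size=(783, 55))
loss = 2.0 * X[:, 0] - 1.5 * X[:, 1] + 0.5 * X[:, 2] + rng.normal(scale=0.5, size=783)

# Penalized regression: predictors with non-zero Lasso coefficients survive
lasso = LassoCV(cv=5).fit(StandardScaler().fit_transform(X), loss)
print("Lasso keeps:", np.flatnonzero(lasso.coef_ != 0))

# Tree-based screening: rank predictors by random-forest importance
forest = RandomForestRegressor(n_estimators=200, random_state=0).fit(X, loss)
print("Forest top 5:", np.argsort(forest.feature_importances_)[::-1][:5])
```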

  7. Variable thickness transient ground-water flow model. Volume 1. Formulation

    International Nuclear Information System (INIS)

    Reisenauer, A.E.

    1979-12-01

    Mathematical formulation for the variable thickness transient (VTT) model of an aquifer system is presented. The basic assumptions are described. Specific data requirements for the physical parameters are discussed. The boundary definitions and solution techniques of the numerical formulation of the system of equations are presented

  8. GY SAMPLING THEORY AND GEOSTATISTICS: ALTERNATE MODELS OF VARIABILITY IN CONTINUOUS MEDIA

    Science.gov (United States)

    In the sampling theory developed by Pierre Gy, sample variability is modeled as the sum of a set of seven discrete error components. The variogram used in geostatistics provides an alternate model in which several of Gy's error components are combined in a continuous mode...

  9. Relations between water physico-chemistry and benthic algal communities in a northern Canadian watershed: defining reference conditions using multiple descriptors of community structure.

    Science.gov (United States)

    Thomas, Kathryn E; Hall, Roland I; Scrimgeour, Garry J

    2015-09-01

    Defining reference conditions is central to identifying environmental effects of anthropogenic activities. Using a watershed approach, we quantified reference conditions for benthic algal communities and their relations to physico-chemical conditions in rivers in the South Nahanni River watershed, NWT, Canada, in 2008 and 2009. We also compared the ability of three descriptors that vary in terms of analytical costs to define algal community structure based on relative abundances of (i) all algal taxa, (ii) only diatom taxa, and (iii) photosynthetic pigments. Ordination analyses showed that variance in algal community structure was strongly related to gradients in environmental variables describing water physico-chemistry, stream habitats, and sub-watershed structure. Water physico-chemistry and local watershed-scale descriptors differed significantly between algal communities from sites in the Selwyn Mountain ecoregion compared to sites in the Nahanni-Hyland ecoregions. Distinct differences in algal community types between ecoregions were apparent irrespective of whether algal community structure was defined using all algal taxa, diatom taxa, or photosynthetic pigments. Two algal community types were highly predictable using environmental variables, a core consideration in the development of Reference Condition Approach (RCA) models. These results suggest that assessments of environmental impacts could be completed using RCA models for each ecoregion. We suggest that use of algal pigments, a high-throughput analysis, is a promising alternative compared to more labor-intensive and costly taxonomic approaches for defining algal community structure.

  10. Beyond a climate-centric view of plant distribution: edaphic variables add value to distribution models.

    Directory of Open Access Journals (Sweden)

    Frieda Beauregard

    Full Text Available Both climatic and edaphic conditions determine plant distribution; however, many species distribution models do not include edaphic variables, especially over large geographical extents. Using an exceptional database of vegetation plots (n = 4839) covering an extent of ∼55,000 km², we tested whether the inclusion of fine-scale edaphic variables would improve model predictions of plant distribution compared to models using only climate predictors. We also tested how well these edaphic variables could predict distribution on their own, to evaluate the assumption that at large extents, distribution is governed largely by climate. We also hypothesized that the relative contribution of edaphic and climatic data would vary among species depending on their growth forms and biogeographical attributes within the study area. We modelled 128 native plant species from diverse taxa using four statistical model types and three sets of abiotic predictors: climate, edaphic, and edaphic-climate. Model predictive accuracy and variable importance were compared among these models and for species' characteristics describing growth form, range boundaries within the study area, and prevalence. For many species both the climate-only and edaphic-only models performed well; however, the edaphic-climate models generally performed best. The three sets of predictors differed in the spatial information provided about habitat suitability, with climate models able to distinguish range edges, but edaphic models able to better distinguish within-range variation. Model predictive accuracy was generally lower for species without a range boundary within the study area and for common species, but these effects were buffered by including both edaphic and climatic predictors. The relative importance of edaphic and climatic variables varied with growth forms, with trees being more related to climate whereas lower growth forms were more related to edaphic conditions. Our study

  11. Studies and research concerning BNFP. Identification and simplified modeling of economically important radwaste variables

    International Nuclear Information System (INIS)

    Ebel, P.E.; Godfrey, W.L.; Henry, J.L.; Postles, R.L.

    1983-09-01

    An extensive computer model describing the mass balance and economic characteristics of radioactive waste disposal systems was exercised in a series of runs designed using linear statistical methods. The most economically important variables were identified, their behavior characterized, and a simplified computer model prepared which runs on desk-top minicomputers. This simplified model allows the investigation of the effects of the seven most significant variables in each of four waste areas: Liquid Waste Storage, Liquid Waste Solidification, General Process Trash Handling, and Hulls Handling. 8 references, 1 figure, 12 tables

  12. Constrained variability of modeled T:ET ratio across biomes

    Science.gov (United States)

    Fatichi, Simone; Pappas, Christoforos

    2017-07-01

    A large variability (35-90%) in the ratio of transpiration to total evapotranspiration (referred to here as T:ET) across biomes or even at the global scale has been documented by a number of studies carried out with different methodologies. Previous empirical results also suggest that T:ET does not covary with mean precipitation and has a positive dependence on leaf area index (LAI). Here we use a mechanistic ecohydrological model, with a refined process-based description of evaporation from the soil surface, to investigate the variability of T:ET across biomes. Numerical results reveal a more constrained range and higher mean of T:ET (70 ± 9%, mean ± standard deviation) when compared to observation-based estimates. T:ET is confirmed to be independent of mean precipitation, while it is found to be correlated with LAI seasonally but uncorrelated across multiple sites. Larger LAI increases evaporation from interception but diminishes ground evaporation, with the two effects largely compensating each other. These results offer mechanistic, model-based evidence for the ongoing research about the patterns of T:ET and the factors influencing its magnitude across biomes.

  13. What Makes Hydrologic Models Differ? Using SUMMA to Systematically Explore Model Uncertainty and Error

    Science.gov (United States)

    Bennett, A.; Nijssen, B.; Chegwidden, O.; Wood, A.; Clark, M. P.

    2017-12-01

    Model intercomparison experiments have been conducted to quantify the variability introduced during the model development process, but have had limited success in identifying the sources of this model variability. The Structure for Unifying Multiple Modeling Alternatives (SUMMA) has been developed as a framework which defines a general set of conservation equations for mass and energy as well as a common core of numerical solvers along with the ability to set options for choosing between different spatial discretizations and flux parameterizations. SUMMA can be thought of as a framework for implementing meta-models which allows for the investigation of the impacts of decisions made during the model development process. Through this flexibility we develop a hierarchy of definitions which allows for models to be compared to one another. This vocabulary allows us to define the notion of weak equivalence between model instantiations. Through this weak equivalence we develop the concept of model mimicry, which can be used to investigate the introduction of uncertainty and error during the modeling process as well as provide a framework for identifying modeling decisions which may complement or negate one another. We instantiate SUMMA instances that mimic the behaviors of the Variable Infiltration Capacity (VIC) model and the Precipitation Runoff Modeling System (PRMS) by choosing modeling decisions which are implemented in each model. We compare runs from these models and their corresponding mimics across the Columbia River Basin located in the Pacific Northwest of the United States and Canada. From these comparisons, we are able to determine the extent to which model implementation has an effect on the results, as well as determine the changes in sensitivity of parameters due to these implementation differences. By examining these changes in results and sensitivities we can attempt to postulate changes in the modeling decisions which may provide better estimation of

  14. PATH ANALYSIS WITH LOGISTIC REGRESSION MODELS : EFFECT ANALYSIS OF FULLY RECURSIVE CAUSAL SYSTEMS OF CATEGORICAL VARIABLES

    OpenAIRE

    Nobuoki, Eshima; Minoru, Tabata; Geng, Zhi; Department of Medical Information Analysis, Faculty of Medicine, Oita Medical University; Department of Applied Mathematics, Faculty of Engineering, Kobe University; Department of Probability and Statistics, Peking University

    2001-01-01

    This paper discusses path analysis of categorical variables with logistic regression models. The total, direct and indirect effects in fully recursive causal systems are considered by using model parameters. These effects can be explained in terms of log odds ratios, uncertainty differences, and an inner product of explanatory variables and a response variable. A study on food choice of alligators as a numerical example is reanalysed to illustrate the present approach.

  15. On the intra-seasonal variability within the extratropics in the ECHAM3 general circulation model

    International Nuclear Information System (INIS)

    May, W.

    1994-01-01

    First we consider the GCM's capability to reproduce the midlatitude variability on intra-seasonal time scales by a comparison with observational data (ECMWF analyses). Secondly we assess the possible influence of Sea Surface Temperatures on the intra-seasonal variability by comparing estimates obtained from different simulations performed with ECHAM3 with varying and fixed SST as boundary forcing. The intra-seasonal variability as simulated by ECHAM3 is underestimated over most of the Northern Hemisphere. While the contributions of the high-frequency transient fluctuations are reasonably well captured by the model, ECHAM3 fails to reproduce the observed level of low-frequency intra-seasonal variability. This is mainly due to the underestimation of the variability caused by the ultra-long planetary waves in the Northern Hemisphere midlatitudes by the model. In the Southern Hemisphere midlatitudes, on the other hand, the intra-seasonal variability as simulated by ECHAM3 is generally underestimated in the area north of about 50 southern latitude, but overestimated at higher latitudes. This is the case for the contributions of the high-frequency and the low-frequency transient fluctuations as well. Further, the model indicates a strong tendency for zonal symmetry, in particular with respect to the high-frequency transient fluctuations. While the two sets of simulations with varying and fixed Sea Surface Temperatures as boundary forcing reveal only small regional differences in the Southern Hemisphere, there is a strong response to be found in the Northern Hemisphere. The contributions of the high-frequency transient fluctuations to the intra-seasonal variability are generally stronger in the simulations with fixed SST. Further, the Pacific storm track is shifted slightly poleward in this set of simulations. For the low-frequency intra-seasonal variability the model gives a strong, but regional response to the interannual variations of the SST. (orig.)

  16. Study of solar radiation prediction and modeling of relationships between solar radiation and meteorological variables

    International Nuclear Information System (INIS)

    Sun, Huaiwei; Zhao, Na; Zeng, Xiaofan; Yan, Dong

    2015-01-01

    Highlights: • We investigate relationships between solar radiation and meteorological variables. • A strong relationship exists between solar radiation and sunshine duration. • Daily global radiation can be estimated accurately with ARMAX–GARCH models. • MGARCH model was applied to investigate time-varying relationships. - Abstract: The traditional approaches that employ the correlations between solar radiation and other measured meteorological variables are commonly utilized in studies. It is important to investigate the time-varying relationships between meteorological variables and solar radiation to determine which variables have the strongest correlations with solar radiation. In this study, the nonlinear autoregressive moving average with exogenous variable–generalized autoregressive conditional heteroscedasticity (ARMAX–GARCH) and multivariate GARCH (MGARCH) time-series approaches were applied to investigate the associations between solar radiation and several meteorological variables. For these investigations, the long-term daily global solar radiation series measured at three stations from January 1, 2004 until December 31, 2007 were used in this study. Stronger relationships were observed to exist between global solar radiation and sunshine duration than between solar radiation and temperature difference. The results show that 82–88% of the temporal variations of the global solar radiation were captured by the sunshine-duration-based ARMAX–GARCH models and 55–68% of daily variations were captured by the temperature-difference-based ARMAX–GARCH models. The advantages of the ARMAX–GARCH models were also confirmed by comparison with Auto-Regressive Moving Average (ARMA) and neural network (ANN) models in the estimation of daily global solar radiation. The strong heteroscedastic persistency of the global solar radiation series was revealed by the AutoRegressive Conditional Heteroscedasticity (ARCH) and Generalized Auto
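
    A minimal sketch of an ARMAX-GARCH-style fit, assuming the Python arch package and synthetic stand-in series; the coefficients and noise levels are invented, and the study's actual station data and model orders may differ.

```python
import numpy as np
import pandas as pd
from arch import arch_model

rng = np.random.default_rng(1)

# Synthetic daily stand-ins for sunshine duration (h) and global radiation
n = 1461                                        # four years of daily data
sunshine = pd.Series(np.clip(rng.normal(6.0, 3.0, n), 0.0, 14.0), name="sunshine")
radiation = 2.0 + 1.8 * sunshine + rng.normal(scale=1.0, size=n)

# ARX mean (one radiation lag plus sunshine as the exogenous regressor)
# with GARCH(1,1) errors, i.e., an ARMAX-GARCH-style specification
model = arch_model(radiation, x=sunshine, mean="ARX", lags=1,
                   vol="GARCH", p=1, q=1)
result = model.fit(disp="off")
print(result.params)
```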

  17. Confidence Intervals for a Semiparametric Approach to Modeling Nonlinear Relations among Latent Variables

    Science.gov (United States)

    Pek, Jolynn; Losardo, Diane; Bauer, Daniel J.

    2011-01-01

    Compared to parametric models, nonparametric and semiparametric approaches to modeling nonlinearity between latent variables have the advantage of recovering global relationships of unknown functional form. Bauer (2005) proposed an indirect application of finite mixtures of structural equation models where latent components are estimated in the…

  18. A new approach to hazardous materials transportation risk analysis: decision modeling to identify critical variables.

    Science.gov (United States)

    Clark, Renee M; Besterfield-Sacre, Mary E

    2009-03-01

    We take a novel approach to analyzing hazardous materials transportation risk in this research. Previous studies analyzed this risk from an operations research (OR) or quantitative risk assessment (QRA) perspective by minimizing or calculating risk along a transport route. Further, even though the majority of incidents occur when containers are unloaded, the research has not focused on transportation-related activities, including container loading and unloading. In this work, we developed a decision model of a hazardous materials release during unloading using actual data and an exploratory data modeling approach. Previous studies have had a theoretical perspective in terms of identifying and advancing the key variables related to this risk, and there has not been a focus on probability and statistics-based approaches for doing this. Our decision model empirically identifies the critical variables using an exploratory methodology for a large, highly categorical database involving latent class analysis (LCA), loglinear modeling, and Bayesian networking. Our model identified the most influential variables and countermeasures for two consequences of a hazmat incident, dollar loss and release quantity, and is one of the first models to do this. The most influential variables were found to be related to the failure of the container. In addition to analyzing hazmat risk, our methodology can be used to develop data-driven models for strategic decision making in other domains involving risk.

  19. The Matrix model, a driven state variables approach to non-equilibrium thermodynamics

    NARCIS (Netherlands)

    Jongschaap, R.J.J.

    2001-01-01

    One of the new approaches in non-equilibrium thermodynamics is the so-called matrix model of Jongschaap. In this paper some features of this model are discussed. We indicate the differences with the more common approach based upon internal variables and the more sophisticated Hamiltonian and GENERIC

  20. Modelling and prediction of pig iron variables in the blast furnace

    Energy Technology Data Exchange (ETDEWEB)

    Saxen, H.; Laaksonen, M.; Waller, M. [Aabo Akademi, Turku (Finland). Heat Engineering Lab.

    1996-12-31

    The blast furnace, where pig iron for steelmaking is produced, is an extremely complicated process, with heat and mass transfer and chemical reactions between several phases. Very few direct measurements on the internal state are available in the operation of the process. A main problem in on-line analysis and modelling is that the state of the furnace may undergo spontaneous changes, which alter the dynamic behaviour of the process. Moreover, large internal disturbances frequently occur, which affect the product quality. The work in this research project focuses on a central problem in the control of the blast furnace process, i.e., short-term prediction of pig iron variables. The problem is of considerable importance for fuel economy, product quality, and for an optimal decision making in integrated steel plants. The operation of the blast furnace aims at producing a product (hot metal) with variables maintained on a stable level (close to their setpoints) without waste of expensive fuel (metallurgical coke). The hot metal temperature and composition affect the downstream (steelmaking) processes, so fluctuations in the pig iron quality must be "corrected" in the steel plant. The goal is to develop a system which predicts the evolution of the hot metal variables (temperature, chemical composition) during the next few taps, and that can be used for decision-making in the operation of the blast furnace. Because of the complicated behaviour of the process, it is considered important to include both deterministic and stochastic components in the modelling: Mathematical models, which on the basis of measurements describe the physical state of the process, and statistical (black-box) models will be combined in the system. Moreover, different models will be applied in different domains in order to capture structural changes in the dynamics of the process. SULA 2 Research Programme; 17 refs.

  2. An introduction to latent variable growth curve modeling concepts, issues, and application

    CERN Document Server

    Duncan, Terry E; Strycker, Lisa A

    2013-01-01

    This book provides a comprehensive introduction to latent variable growth curve modeling (LGM) for analyzing repeated measures. It presents the statistical basis for LGM and its various methodological extensions, including a number of practical examples of its use. It is designed to take advantage of the reader's familiarity with analysis of variance and structural equation modeling (SEM) in introducing LGM techniques. Sample data, syntax, input and output, are provided for EQS, Amos, LISREL, and Mplus on the book's CD. Throughout the book, the authors present a variety of LGM techniques that are useful for many different research designs, and numerous figures provide helpful diagrams of the examples. Updated throughout, the second edition features three new chapters: growth modeling with ordered categorical variables, growth mixture modeling, and pooled interrupted time series LGM approaches. Following a new organization, the book now covers the development of the LGM, followed by chapters on multiple-group is...

  3. Thermodynamic consistency of viscoplastic material models involving external variable rates in the evolution equations for the internal variables

    International Nuclear Information System (INIS)

    Malmberg, T.

    1993-09-01

    The objective of this study is to derive and investigate thermodynamic restrictions for a particular class of internal variable models. Their evolution equations consist of two contributions: the usual irreversible part, depending only on the present state, and a reversible but path dependent part, linear in the rates of the external variables (evolution equations of "mixed type"). In the first instance the thermodynamic analysis is based on the classical Clausius-Duhem entropy inequality and the Coleman-Noll argument. The analysis is restricted to infinitesimal strains and rotations. The results are specialized and transferred to a general class of elastic-viscoplastic material models. Subsequently, they are applied to several viscoplastic models of "mixed type", proposed or discussed in the literature (Robinson et al., Krempl et al., Freed et al.), and it is shown that some of these models are thermodynamically inconsistent. The study is closed with the evaluation of the extended Clausius-Duhem entropy inequality (concept of Mueller) where the entropy flux is governed by an assumed constitutive equation in its own right; also the constraining balance equations are explicitly accounted for by the method of Lagrange multipliers (Liu's approach). This analysis is done for a viscoplastic material model with evolution equations of the "mixed type". It is shown that this approach is much more involved than the evaluation of the classical Clausius-Duhem entropy inequality with the Coleman-Noll argument. (orig.)

  4. [Modelling the effect of local climatic variability on dengue transmission in Medellin (Colombia) by means of time series analysis].

    Science.gov (United States)

    Rúa-Uribe, Guillermo L; Suárez-Acosta, Carolina; Chauca, José; Ventosilla, Palmira; Almanza, Rita

    2013-09-01

    Dengue fever is a vector-borne disease with a major impact on public health, and its transmission is influenced by entomological, sociocultural and economic factors. Additionally, climate variability plays an important role in the transmission dynamics. A large scientific consensus indicates that the strong association between climatic variables and the disease could be used to develop models to explain its incidence. The objective was to develop a model that provides a better understanding of dengue transmission dynamics in Medellin and predicts increases in the incidence of the disease. The incidence of dengue fever was used as the dependent variable, and weekly climatic factors (maximum, mean and minimum temperature, relative humidity and precipitation) as independent variables. Expert Modeler was used to develop a model to better explain the behavior of the disease. Climatic variables with a significant association to the dependent variable were selected through ARIMA models. The model explains 34% of the observed variability. Precipitation was the climatic variable showing a statistically significant association with the incidence of dengue fever, but with a 20-week delay. In Medellin, the transmission of dengue fever was influenced by climate variability, especially precipitation. The strong association between dengue fever and precipitation allowed the construction of a model to help understand dengue transmission dynamics. This information will be useful to develop appropriate and timely strategies for dengue control.
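
    A minimal sketch of the lagged-regressor idea on synthetic weekly stand-in series, assuming statsmodels (the study itself used Expert Modeler; SARIMAX is substituted here): precipitation enters an ARIMA-type model as an exogenous variable lagged by 20 weeks.

```python
import numpy as np
import pandas as pd
from statsmodels.tsa.statespace.sarimax import SARIMAX

rng = np.random.default_rng(7)

# Synthetic weekly stand-ins for ten years of incidence and precipitation
weeks = 520
precip = pd.Series(rng.gamma(2.0, 20.0, weeks), name="precip")
cases = pd.Series(5.0 + 0.05 * precip.shift(20).fillna(precip.mean())
                  + rng.normal(scale=2.0, size=weeks), name="cases").clip(lower=0)

# Precipitation enters as an exogenous regressor lagged by 20 weeks
exog = precip.shift(20).rename("precip_lag20")
data = pd.concat([cases, exog], axis=1).dropna()

model = SARIMAX(data["cases"], exog=data[["precip_lag20"]], order=(1, 0, 1))
result = model.fit(disp=False)
print(result.params)
```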

  5. Comparison of elastic--plastic and variable modulus-cracking constitutive models for prestressed concrete reactor vessels

    International Nuclear Information System (INIS)

    Anderson, C.A.; Smith, P.D.

    1978-01-01

    The variable modulus-cracking model is capable of predicting the behavior of reinforced concrete structures (such as the reinforced plate under transverse pressure described previously) well into the range of nonlinear behavior including the prediction of the ultimate load. For unreinforced thick-walled concrete vessels under internal pressure the use of elastic--plastic concrete models in finite element codes enhances the apparent ductility of the vessels in contrast to variable modulus-cracking models that predict nearly instantaneous rupture whenever the tensile strength at the inner wall is exceeded. For unreinforced thick-walled end slabs representative of PCRV heads, the behavior predicted by finite element codes using variable modulus-cracking models is much stiffer in the nonlinear range than that observed experimentally. Although the shear type failures and crack patterns that are observed experimentally are predicted by such concrete models, the ultimate load carrying capacity and vessel-ductility are significantly underestimated. It appears that such models do not adequately model such features as aggregate interlock that could lead to an enhanced vessel reserve strength and ductility

  6. Measuring the surgical 'learning curve': methods, variables and competency.

    Science.gov (United States)

    Khan, Nuzhath; Abboudi, Hamid; Khan, Mohammed Shamim; Dasgupta, Prokar; Ahmed, Kamran

    2014-03-01

    To describe how learning curves are measured and what procedural variables are used to establish a 'learning curve' (LC). To assess whether LCs are a valuable measure of competency. A review of the surgical literature pertaining to LCs was conducted using the Medline and OVID databases. Variables should be fully defined and when possible, patient-specific variables should be used. Trainee's prior experience and level of supervision should be quantified; the case mix and complexity should ideally be constant. Logistic regression may be used to control for confounding variables. Ideally, a learning plateau should reach a predefined/expert-derived competency level, which should be fully defined. When the group splitting method is used, smaller cohorts should be used in order to narrow the range of the LC. Simulation technology and competence-based objective assessments may be used in training and assessment in LC studies. Measuring the surgical LC has potential benefits for patient safety and surgical education. However, standardisation in the methods and variables used to measure LCs is required. Confounding variables, such as participant's prior experience, case mix, difficulty of procedures and level of supervision, should be controlled. Competency and expert performance should be fully defined. © 2013 The Authors. BJU International © 2013 BJU International.

  7. Variable-Resolution Ensemble Climatology Modeling of Sierra Nevada Snowpack within the Community Earth System Model (CESM)

    Science.gov (United States)

    Rhoades, A.; Ullrich, P. A.; Zarzycki, C. M.; Levy, M.; Taylor, M.

    2014-12-01

    Snowpack is crucial for the western USA, providing around 75% of the total fresh water supply (Cayan et al., 1996) and buffering against seasonal aridity impacts on agricultural, ecosystem, and urban water demands. The resilience of the California water system is largely dependent on natural stores provided by snowpack. This resilience has shown vulnerabilities due to anthropogenic global climate change. Historically, the northern Sierras showed a net decline of 50-75% in snow water equivalent (SWE) while the southern Sierras showed a net accumulation of 30% (Mote et al., 2005). Future trends of SWE highlight that western USA SWE may decline by 40-70% (Pierce and Cayan, 2013), snowfall may decrease by 25-40% (Pierce and Cayan, 2013), and more winter storms may tend towards rain rather than snow (Bales et al., 2006). The volatility of Sierran snowpack presents a need for scientific tools to help water managers and policy makers assess current and future trends. A burgeoning tool to analyze these trends comes in the form of variable-resolution global climate modeling (VRGCM). VRGCMs serve as a bridge between regional and global models and provide added resolution in areas of need, eliminate lateral boundary forcings, provide model runtime speed up, and utilize a common dynamical core, physics scheme and sub-grid scale parameterization package. A cubed-sphere variable-resolution grid with 25 km horizontal resolution over the western USA was developed for use in the Community Atmosphere Model (CAM) within the Community Earth System Model (CESM). A 25-year three-member ensemble climatology (1980-2005) is presented and major snowpack metrics such as SWE, snow depth, snow cover, and two-meter surface temperature are assessed. The ensemble simulation is also compared to observational, reanalysis, and WRF model datasets. The variable-resolution model provides a mechanism for reaching towards non-hydrostatic scales and simulations are currently being developed with refined

  8. Modelling accuracy and variability of motor timing in treated and untreated Parkinson’s disease and healthy controls

    Directory of Open Access Journals (Sweden)

    Catherine Rhian Gwyn Jones

    2011-12-01

    Full Text Available Parkinson's disease (PD) is characterised by difficulty with the timing of movements. Data collected using the synchronization-continuation paradigm, an established motor timing paradigm, have produced varying results, but with most studies finding impairment. Some of this inconsistency comes from variation in the medication state tested, in the inter-stimulus intervals (ISI) selected, and in changeable focus on either the synchronization (tapping in time with a tone) or continuation (maintaining the rhythm in the absence of the tone) phase. We sought to re-visit the paradigm by testing across four groups of participants: healthy controls, medication-naïve de novo PD patients, and treated PD patients both 'on' and 'off' dopaminergic medication. Four finger tapping intervals (ISI) were used: 250 ms, 500 ms, 1000 ms and 2000 ms. Categorical predictors (group, ISI, and phase) were used to predict accuracy and variability using a linear mixed model. Accuracy was defined as the relative error of a tap, and variability as the deviation of the participant's tap from the group-predicted relative error. Our primary finding is that the treated PD group (PD patients 'on' and 'off' dopaminergic therapy) showed a significantly different pattern of accuracy compared to the de novo group and the healthy controls at the 250 ms interval. At this interval, the treated PD patients performed 'ahead' of the beat whilst the other groups performed 'behind' the beat. We speculate that this 'hastening' relates to the clinical phenomenon of motor festination. Across all groups, variability was smallest for both phases at the 500 ms interval, suggesting an innate preference for finger tapping within this range. Tapping variability for the two phases became increasingly divergent at the longer intervals, with worse performance in the continuation phase. The data suggest that patients with PD can be best discriminated from healthy controls on measures of

  9. Recent changes in county-level corn yield variability in the United States from observations and crop models

    Energy Technology Data Exchange (ETDEWEB)

    Leng, Guoyong

    2017-12-01

    The United States is responsible for 35% and 60% of global corn supply and exports, respectively. Enhanced supply stability through a reduction in the year-to-year variability of US corn yield would greatly benefit global food security. Important in this regard is to understand how corn yield variability has evolved geographically over the historical period and how it relates to climatic and non-climatic factors. Results showed that year-to-year variation of US corn yield decreased significantly during 1980-2010, mainly in the Midwest Corn Belt, Nebraska and western arid regions. Despite the country-scale decreasing variability, corn yield variability exhibited an increasing trend in South Dakota, Texas and Southeast growing regions, indicating the importance of considering spatial scales in estimating yield variability. The observed pattern is only partly reproduced by process-based crop models, which simulate larger areas experiencing increasing variability and underestimate the magnitude of decreasing variability; 3 out of 11 models even produced a sign of change opposite to the observations. Hence, a statistical model, which produces closer agreement with observations, is used to explore the contribution of climatic and non-climatic factors to the changes in yield variability. It is found that climate variability dominates the change trends of corn yield variability in the Midwest Corn Belt, while the ability of climate variability to control yield variability is low in southeastern and western arid regions. Irrigation has largely reduced the corn yield variability in regions (e.g. Nebraska) where separate estimates of irrigated and rain-fed corn yield exist, demonstrating the importance of non-climatic factors in governing the changes in corn yield variability. The results highlight the distinct spatial patterns of corn yield variability change as well as its influencing factors at the county scale. I also caution against the use of process-based crop models, which have substantially underestimated

  10. Variable setpoint as a relaxing component in physiological control.

    Science.gov (United States)

    Risvoll, Geir B; Thorsen, Kristian; Ruoff, Peter; Drengstig, Tormod

    2017-09-01

    Setpoints in physiology have been a puzzle for decades, and especially the notion of fixed or variable setpoints has received much attention. In this paper, we show how previously presented homeostatic controller motifs, extended with saturable signaling kinetics, can be described as variable setpoint controllers. The benefit of a variable setpoint controller is that an observed change in the concentration of the regulated biochemical species (the controlled variable) is fully characterized, and is not considered a deviation from a fixed setpoint. The variation in this biochemical species originates from variation in the disturbances (the perturbation), and thereby in the biochemical species representing the controller (the manipulated variable). Thus, we define an operational space which is spanned by the combined high and low levels of the variations in (1) the controlled variable, (2) the manipulated variable, and (3) the perturbation. From this operational space, we investigate whether and how it imposes constraints on the different motif parameters, in order for the motif to represent a mathematical model of the regulatory system. Further analysis of the controller's ability to compensate for disturbances reveals that a variable setpoint represents a relaxing component for the controller, in that the necessary control action is reduced compared to that of a fixed setpoint controller. Such a relaxing component might serve as an important property from an evolutionary point of view. Finally, we illustrate the principles using the renal sodium and aldosterone regulatory system, where we model the variation in plasma sodium as a function of salt intake. We show that the experimentally observed variations in plasma sodium can be interpreted as a variable setpoint regulatory system. © 2017 The Authors. Physiological Reports published by Wiley Periodicals, Inc. on behalf of The Physiological Society and the American Physiological Society.
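
    A minimal sketch of the idea under assumed dynamics (not the paper's specific motifs): species E removes A while A drives E synthesis, and E is degraded by Michaelis-Menten (saturable) kinetics. In the zero-order limit (K_M → 0) the steady state of A is pinned at V_max/k_s; a finite K_M lets the steady state drift with the perturbation, i.e., a variable setpoint. All parameter values are hypothetical.

```python
from scipy.integrate import solve_ivp

# Outflow-controller-style motif: E removes A; A drives E synthesis; E is
# degraded with saturable (Michaelis-Menten) kinetics
k_c, k_s, V_max, K_A, K_M = 2.0, 1.0, 4.0, 0.1, 0.5  # hypothetical parameters

def motif(t, y, k_p):
    A, E = y
    dA = k_p - k_c * E * A / (K_A + A)          # perturbation inflow, E-mediated removal
    dE = k_s * A - V_max * E / (K_M + E)        # A-activated synthesis, saturable decay
    return [dA, dE]

for k_p in (1.0, 2.0, 4.0):                     # step the perturbation (inflow of A)
    sol = solve_ivp(motif, (0.0, 200.0), [1.0, 1.0], args=(k_p,), rtol=1e-8)
    print(f"k_p={k_p}: steady-state A ≈ {sol.y[0, -1]:.3f}")
```

    Running the loop shows the steady state of A climbing with k_p, the saturable-kinetics signature of a variable setpoint.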

  11. Definably compact groups definable in real closed fields.II

    OpenAIRE

    Barriga, Eliana

    2017-01-01

    We continue the analysis of definably compact groups definable in a real closed field $\mathcal{R}$. In [3], we proved that for every definably compact definably connected semialgebraic group $G$ over $\mathcal{R}$ there are a connected $R$-algebraic group $H$, a definable injective map $\phi$ from a generic definable neighborhood of the identity of $G$ into the group $H\left(R\right)$ of $R$-points of $H$ such that $\phi$ acts as a group homomorphism inside its domain. The above result and o...

  12. The selection of a mode of urban transportation: Integrating psychological variables to discrete choice models

    International Nuclear Information System (INIS)

    Cordoba Maquilon, Jorge E; Gonzalez Calderon, Carlos A; Posada Henao, John J

    2011-01-01

    A study using revealed preference surveys and psychological tests was conducted. Key psychological variables of behavior involved in the choice of transportation mode in a population sample of the Metropolitan Area of the Valle de Aburra were detected. The experiment used random utility theory for discrete choice models and the theory of reasoned action to assess beliefs, employing the sixteen personality factor questionnaire (16PF test) as a tool for the analysis of the psychological variables. In addition to the revealed preference surveys, two other surveys were carried out: one on socio-economic characteristics and the other on latent indicators. This methodology allows for an integration of discrete choice models and latent variables. The integration makes the model operational and quantifies the unobservable psychological variables. The most relevant result obtained was that anxiety affects the choice of urban transportation mode, showing that physiological alterations, as well as problems in perception and beliefs, can affect the decision-making process.

  13. Modeling the nitrogen cycling and plankton productivity in the Black Sea using a three-dimensional interdisciplinary model

    NARCIS (Netherlands)

    Grégoire, M.; Soetaert, K.E.R.; Nezlin, N.; Kostianoy, A.

    2004-01-01

    A six-compartment ecosystem model defined by a simple nitrogen cycle is coupled with a general circulation model in the Black Sea so as to examine the seasonal variability of the ecohydrodynamics. Model results show that the annual cycle of the biological productivity of the whole basin is

  14. Validation of Generic Models for Variable Speed Operation Wind Turbines Following the Recent Guidelines Issued by IEC 61400-27

    Directory of Open Access Journals (Sweden)

    Andrés Honrubia-Escribano

    2016-12-01

    Full Text Available Considerable efforts are currently being made by several international working groups focused on the development of generic, also known as simplified or standard, wind turbine models for power system stability studies. In this sense, the first edition of International Electrotechnical Commission (IEC 61400-27-1, which defines generic dynamic simulation models for wind turbines, was published in February 2015. Nevertheless, the correlations of the IEC generic models with respect to specific wind turbine manufacturer models are required by the wind power industry to validate the accuracy and corresponding usability of these standard models. The present work conducts the validation of the two topologies of variable speed wind turbines that present not only the largest market share, but also the most technological advances. Specifically, the doubly-fed induction machine and the full-scale converter (FSC topology are modeled based on the IEC 61400-27-1 guidelines. The models are simulated for a wide range of voltage dips with different characteristics and wind turbine operating conditions. The simulated response of the IEC generic model is compared to the corresponding simplified model of a wind turbine manufacturer, showing a good correlation in most cases. Validation error sources are analyzed in detail, as well. In addition, this paper reviews in detail the previous work done in this field. Results suggest that wind turbine manufacturers are able to adjust the IEC generic models to represent the behavior of their specific wind turbines for power system stability analysis.

  15. Uncertainty importance measure for models with correlated normal variables

    International Nuclear Information System (INIS)

    Hao, Wenrui; Lu, Zhenzhou; Wei, Pengfei

    2013-01-01

    In order to explore the contributions by correlated input variables to the variance of the model output, the contribution decomposition of the correlated input variables based on Mara's definition is investigated in detail. Taking the quadratic polynomial output without cross term as an illustration, the solution of the contribution decomposition is derived analytically using statistical inference theory. After the analytical solutions are validated by numerical examples, they are employed in two engineering examples to show their wide application. The derived analytical solutions can be used directly to recognize the contributions by the correlated input variables in the case of a quadratic or linear polynomial output without cross term, and the analytical inference method can be extended to the case of higher-order polynomial output. Additionally, the origins of the interaction contribution of the correlated inputs are analyzed, and comparisons of the existing contribution indices are completed, from which the engineer can select suitable indices to obtain the necessary information. Finally, the degeneration of the correlated inputs to uncorrelated ones and some computational issues are discussed conceptually
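
    The decomposition being discussed can also be probed numerically. The sketch below estimates the full first-order contribution S_i = Var(E[Y|X_i])/Var(Y) for a quadratic output without cross term and correlated normal inputs, by binning on each input; with correlation the contributions need not sum to one. The correlation value and all coefficients are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(3)

# Correlated normal inputs (X1, X2) and a quadratic output without cross term
rho = 0.6
cov = np.array([[1.0, rho], [rho, 1.0]])
X = rng.multivariate_normal([0.0, 0.0], cov, size=200_000)
Y = 2.0 * X[:, 0] + X[:, 0]**2 + 0.5 * X[:, 1] + 0.3 * X[:, 1]**2

def first_order(xi, y, bins=50):
    """Estimate S_i = Var(E[Y | X_i]) / Var(Y) by binning on X_i."""
    edges = np.quantile(xi, np.linspace(0.0, 1.0, bins + 1))
    idx = np.clip(np.searchsorted(edges, xi, side="right") - 1, 0, bins - 1)
    counts = np.bincount(idx, minlength=bins)
    cond_mean = np.bincount(idx, weights=y, minlength=bins) / counts
    return np.var(cond_mean[idx]) / np.var(y)

print("S1 ≈", round(first_order(X[:, 0], Y), 3))
print("S2 ≈", round(first_order(X[:, 1], Y), 3))
```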

  16. Can natural variability trigger effects on fish and fish habitat as defined in environment Canada's metal mining environmental effects monitoring program?

    Science.gov (United States)

    Mackey, Robin; Rees, Cassandra; Wells, Kelly; Pham, Samantha; England, Kent

    2013-01-01

    The Metal Mining Effluent Regulations (MMER) took effect in 2002 and require most metal mining operations in Canada to complete environmental effects monitoring (EEM) programs. An "effect" under the MMER EEM program is considered any positive or negative statistically significant difference in fish population, fish usability, or benthic invertebrate community EEM-defined endpoints. Two consecutive studies with the same statistically significant differences trigger more intensive monitoring, including the characterization of extent and magnitude and investigation of cause. Standard EEM study designs do not require multiple reference areas or preexposure sampling; thus, results and conclusions about mine effects are highly contingent on the selection of a near-perfect reference area and are at risk of falsely labeling natural variation as mine-related "effects." A case study was completed to characterize the natural variability in EEM-defined endpoints during preexposure or baseline conditions. This involved completing a typical EEM study in future reference and exposure lakes surrounding a proposed uranium (U) mine in northern Saskatchewan, Canada. Moon Lake was sampled as the future exposure area as it is currently proposed to receive effluent from the U mine. Two reference areas were used: Slush Lake for both the fish population and benthic invertebrate community surveys and Lake C as a second reference area for the benthic invertebrate community survey. Moon Lake, Slush Lake, and Lake C are located in the same drainage basin in close proximity to one another. All 3 lakes contained similar water quality, fish communities, aquatic habitat, and a sediment composition largely comprised of fine-textured particles. The fish population survey consisted of a nonlethal northern pike (Esox lucius) and a lethal yellow perch (Perca flavescens) survey. A comparison of the 5 benthic invertebrate community effect endpoints, 4 nonlethal northern pike population effect endpoints

  17. Internal and external North Atlantic Sector variability in the Kiel climate model

    Energy Technology Data Exchange (ETDEWEB)

    Latif, Mojib; Park, Wonsun; Ding, Hui; Keenlyside, Noel S. [Leibniz-Inst. fuer Meereswissenschaften, Kiel (Germany)

    2009-08-15

    The internal and external North Atlantic Sector variability is investigated by means of a multimillennial control run and forced experiments with the Kiel Climate Model (KCM). The internal variability is studied by analyzing the control run. The externally forced variability is investigated in a run with periodic millennial solar forcing and in greenhouse warming experiments with enhanced carbon dioxide concentrations. The surface air temperature (SAT) averaged over the Northern Hemisphere simulated in the control run displays enhanced variability relative to the red background at decadal, centennial, and millennial timescales. Special emphasis is given to the variability of the Meridional Overturning Circulation (MOC). The MOC plays an important role in the generation of internal climate modes. Furthermore, the MOC provides a strong negative feedback on the Northern Hemisphere SAT in both the solar and greenhouse warming experiments, thereby moderating the direct effects of the external forcing in the North Atlantic. The implications of the results for decadal predictability are discussed. (orig.)

  18. Seasonal variability of salinity and circulation in a silled estuarine fjord: A numerical model study

    Science.gov (United States)

    Kawase, Mitsuhiro; Bang, Bohyun

    2013-12-01

    A three-dimensional hydrodynamic model is used to study seasonal variability of circulation and hydrography in Hood Canal, Washington, United States, an estuarine fjord that develops seasonally hypoxic conditions. The model is validated with data from year 2006, and is shown to be capable of quantitatively realistic simulation of hydrographic variability. Sensitivity experiments show the largest cause of seasonal variability to be that of salinity at the mouth of the fjord, which drives an annual deep water renewal in late summer-early autumn. Variability of fresh water input from the watershed also causes significant but secondary changes, especially in winter. Local wind stress has little effect over the seasonal timescale. Further experiments, in which one forcing parameter is abruptly altered while others are kept constant, show that outside salinity change induces an immediate response in the exchange circulation that, however, decays as a transient as the system equilibrates. In contrast, a change in the river input initiates gradual adjustment towards a new equilibrium value for the exchange transport. It is hypothesized that the spectral character of the system response to river variability will be redder than to salinity variability. This is demonstrated with a stochastically forced, semi-analytical model of fjord exchange circulation. While the exchange circulation in Hood Canal appears less sensitive to the river variability than to the outside hydrography at seasonal timescales, at decadal and longer timescales both could become significant factors in affecting the exchange circulation.

  19. The use of ZIP and CART to model cryptosporidiosis in relation to climatic variables.

    Science.gov (United States)

    Hu, Wenbiao; Mengersen, Kerrie; Fu, Shiu-Yun; Tong, Shilu

    2010-07-01

    This research assesses the potential impact of weekly weather variability on the incidence of cryptosporidiosis disease using time series zero-inflated Poisson (ZIP) and classification and regression tree (CART) models. Data on weather variables, notified cryptosporidiosis cases and population size in Brisbane were supplied by the Australian Bureau of Meteorology, Queensland Department of Health, and Australian Bureau of Statistics, respectively. Both time series ZIP and CART models show a clear association between weather variables (maximum temperature, relative humidity, rainfall and wind speed) and cryptosporidiosis disease. The time series CART models indicated that, when weekly maximum temperature exceeded 31 degrees C and relative humidity was less than 63%, the relative risk of cryptosporidiosis rose by 13.64 (expected morbidity: 39.4; 95% confidence interval: 30.9-47.9). These findings may have applications as a decision support tool in planning disease control and risk-management programs for cryptosporidiosis disease.
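
    As a minimal sketch of the kind of zero-inflated Poisson model described above, the following Python snippet uses statsmodels; the weather values, coefficients and the constant-only inflation term are hypothetical placeholders, not the study's data or fitted model.

      import numpy as np
      import statsmodels.api as sm
      from statsmodels.discrete.count_model import ZeroInflatedPoisson

      rng = np.random.default_rng(0)
      n = 520  # ten years of weekly records (hypothetical)

      # hypothetical weekly weather predictors
      tmax = rng.normal(28, 4, n)   # weekly maximum temperature (deg C)
      rh = rng.normal(65, 10, n)    # weekly relative humidity (%)
      X = sm.add_constant(np.column_stack([tmax, rh]))

      # synthetic counts: structural zeros mixed with a Poisson component
      lam = np.exp(-6.0 + 0.25 * tmax - 0.01 * rh)
      y = np.where(rng.random(n) < 0.3, 0, rng.poisson(lam))

      # zero inflation modelled with a constant-only logit component
      zip_fit = ZeroInflatedPoisson(y, X, exog_infl=np.ones((n, 1)),
                                    inflation="logit").fit(disp=False, maxiter=200)
      print(zip_fit.params)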

  20. Dynamic modeling of fixed-bed adsorption of flue gas using a variable mass transfer model

    International Nuclear Information System (INIS)

    Park, Jehun; Lee, Jae W.

    2016-01-01

    This study introduces a dynamic mass transfer model for the fixed-bed adsorption of a flue gas. The derivation of the variable mass transfer coefficient is based on pore diffusion theory, and it is a function of effective porosity, temperature, and pressure as well as the adsorbate composition. Adsorption experiments were done at four different pressures (1.8, 5, 10 and 20 bar) and three different temperatures (30, 50 and 70 °C) with zeolite 13X as the adsorbent. To explain the equilibrium adsorption capacity, the Langmuir-Freundlich isotherm model was adopted, and the parameters of the isotherm equation were fitted to the experimental data for a wide range of pressures and temperatures. Then, dynamic simulations were performed using the system equations for material and energy balance with the equilibrium adsorption isotherm data. The optimal mass transfer and heat transfer coefficients were determined after iterative calculations. As a result, the dynamic variable mass transfer model can estimate the adsorption rate for a wide range of concentrations and precisely simulate the fixed-bed adsorption process of a flue gas mixture of carbon dioxide and nitrogen.
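
    As an illustration of the isotherm-fitting step, a short Python sketch using scipy; the pressure-loading pairs below are hypothetical values for a single temperature, not the paper's measurements, and a full treatment would also fit the temperature dependence of the parameters.

      import numpy as np
      from scipy.optimize import curve_fit

      def langmuir_freundlich(p, q_max, k, n):
          # Langmuir-Freundlich (Sips) isotherm: loading vs pressure
          return q_max * (k * p) ** n / (1.0 + (k * p) ** n)

      # hypothetical equilibrium loadings (mol/kg) on zeolite 13X
      p_bar = np.array([0.1, 0.5, 1.0, 1.8, 5.0, 10.0, 20.0])
      q_obs = np.array([1.9, 3.4, 4.1, 4.6, 5.3, 5.7, 6.0])

      (q_max, k, n), _ = curve_fit(langmuir_freundlich, p_bar, q_obs,
                                   p0=[6.5, 2.0, 0.7])
      print(f"q_max={q_max:.2f} mol/kg, K={k:.2f} 1/bar, n={n:.2f}")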

  1. Evaluation of Stochastic Rainfall Models in Capturing Climate Variability for Future Drought and Flood Risk Assessment

    Science.gov (United States)

    Chowdhury, A. F. M. K.; Lockart, N.; Willgoose, G. R.; Kuczera, G. A.; Kiem, A.; Nadeeka, P. M.

    2016-12-01

    One of the key objectives of stochastic rainfall modelling is to capture the full variability of the climate system for future drought and flood risk assessment. However, it is not clear how well these models can capture future climate variability when they are calibrated to Global/Regional Climate Model (GCM/RCM) data, as these datasets are usually available only for short future periods (e.g., 20 years). This study has assessed the ability of two stochastic daily rainfall models to capture climate variability by calibrating them to a dynamically downscaled RCM dataset in an east Australian catchment for the 1990-2010, 2020-2040, and 2060-2080 epochs. The two stochastic models are: (1) a hierarchical Markov chain (MC) model, which we developed in a previous study, and (2) a semi-parametric MC model developed by Mehrotra and Sharma (2007). Our hierarchical model uses stochastic parameters of the MC and a Gamma distribution, while the semi-parametric model uses a modified MC process with memory of past periods and kernel density estimation. This study has generated multiple realizations of rainfall series by using the parameters of each model calibrated to the RCM dataset for each epoch. The generated rainfall series are used to generate synthetic streamflow with the SimHyd hydrology model. Assessing the synthetic rainfall and streamflow series, this study has found that both stochastic models can incorporate a range of variability in rainfall as well as streamflow generation for both current and future periods. However, the hierarchical model tends to overestimate the multiyear variability of wet spell lengths (and therefore is less likely to simulate long periods of drought and flood), while the semi-parametric model tends to overestimate the mean annual rainfall depths and streamflow volumes (hence simulated droughts are likely to be less severe). The sensitivity of future drought and flood risk assessments to these limitations of both stochastic models will be discussed.
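
    For orientation, a minimal Python sketch of the occurrence-amount structure that daily rainfall generators of this family share: a two-state Markov chain for wet/dry occurrence and a Gamma distribution for wet-day depths. The transition probabilities and Gamma parameters are hypothetical, and the hierarchical and semi-parametric extensions described above are omitted.

      import numpy as np

      rng = np.random.default_rng(42)

      # hypothetical occurrence transition probabilities and
      # Gamma-distributed wet-day depths (mm)
      P_WET_GIVEN_DRY = 0.25
      P_WET_GIVEN_WET = 0.65
      SHAPE, SCALE = 0.8, 8.0

      def simulate_daily_rainfall(n_days):
          rain = np.zeros(n_days)
          wet = False
          for t in range(n_days):
              p = P_WET_GIVEN_WET if wet else P_WET_GIVEN_DRY
              wet = rng.random() < p
              if wet:
                  rain[t] = rng.gamma(SHAPE, SCALE)
          return rain

      series = simulate_daily_rainfall(365 * 20)  # one 20-year realization
      print(f"mean annual rainfall: {series.sum() / 20:.0f} mm")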

  2. A new control-oriented transient model of variable geometry turbocharger

    International Nuclear Information System (INIS)

    Bahiuddin, Irfan; Mazlan, Saiful Amri; Imaduddin, Fitrian; Ubaidillah

    2017-01-01

    The flow input of a variable geometry turbocharger (VGT) turbine is highly unsteady due to rapid and periodic pressure dynamics in engine combustion chambers. Several VGT control methods have been developed to recover more energy from the highly pulsating exhaust gas flow. To develop a control system for the highly pulsating flow condition, an accurate and valid unsteady model is required. This study focuses on the derivation of the governing unsteady control-oriented model (COM) for a turbine of an actively controlled turbocharger (ACT). The COM can predict turbocharger behaviour in terms of the instantaneous actual and isentropic turbine powers at different effective throat areas. The COM is a modified version of a conventional mean value model (MVM) with an additional feature to calculate the turbine angular velocity and torque for determining the actual power. The simulation results were compared with experimental data in two general scenarios. The first scenario comprised simulations at fixed geometry positions. The second scenario considered the nozzle movement after receiving a signal from the controller in different cases. The comparison between simulation and experimental results showed similar recovered-power behaviour as the turbine inlet area increases or decreases. The model has also proved its reliability in replicating general behaviour, as in the example ACT cases presented in this paper. However, the model is incapable of replicating detailed and complicated phenomena, such as choking and hysteresis effects. - Highlights: • A control-oriented model of a variable geometry turbocharger turbine is proposed. • Isentropic and actual power behaviour estimations on turbocharger turbine. • A simulation tool for developing active control systems of turbocharger turbines.

  3. How reliable is the offline linkage of Weather Research & Forecasting Model (WRF) and Variable Infiltration Capacity (VIC) model?

    Science.gov (United States)

    The aim for this research is to evaluate the ability of the offline linkage of Weather Research & Forecasting Model (WRF) and Variable Infiltration Capacity (VIC) model to produce hydrological, e.g. evaporation (ET), soil moisture (SM), runoff, and baseflow. First, the VIC mo...

  4. User-Defined Clocks in the Real-Time Specification for Java

    DEFF Research Database (Denmark)

    Wellings, Andy; Schoeberl, Martin

    2011-01-01

    This paper analyses the new user-defined clock model that is to be supported in Version 1.1 of the Real-Time Specification for Java (RTSJ). The model is a compromise between the current position, where there is no support for user-defined clocks, and a fully integrated model. The paper investigat...

  5. Joint Bayesian variable and graph selection for regression models with network-structured predictors

    Science.gov (United States)

    Peterson, C. B.; Stingo, F. C.; Vannucci, M.

    2015-01-01

    In this work, we develop a Bayesian approach to perform selection of predictors that are linked within a network. We achieve this by combining a sparse regression model relating the predictors to a response variable with a graphical model describing conditional dependencies among the predictors. The proposed method is well-suited for genomic applications since it allows the identification of pathways of functionally related genes or proteins which impact an outcome of interest. In contrast to previous approaches for network-guided variable selection, we infer the network among predictors using a Gaussian graphical model and do not assume that network information is available a priori. We demonstrate that our method outperforms existing methods in identifying network-structured predictors in simulation settings, and illustrate our proposed model with an application to inference of proteins relevant to glioblastoma survival. PMID:26514925

  6. A Variable Flow Modelling Approach To Military End Strength Planning

    Science.gov (United States)

    2016-12-01

    function. The MLRPS is more complex than the variable flow model as it has to cater for a force structure that is much larger than just the MT branch...essential positions in a Ship’s complement, or by the biggest current deficit in forecast end strength. The model can be adjusted to cater for any of these...is unlikely that the RAN will be able to cater for such an increase in hires, so this scenario is not likely to solve their problem. Each transition

  7. Panel data models extended to spatial error autocorrelation or a spatially lagged dependent variable

    NARCIS (Netherlands)

    Elhorst, J. Paul

    2001-01-01

    This paper surveys panel data models extended to spatial error autocorrelation or a spatially lagged dependent variable. In particular, it focuses on the specification and estimation of four panel data models commonly used in applied research: the fixed effects model, the random effects model, the

  8. Geospatial models of climatological variables distribution over Colombian territory

    International Nuclear Information System (INIS)

    Baron Leguizamon, Alicia

    2003-01-01

    Several studies have examined the relationship between air temperature, precipitation, and altitude; however, these have been limited to specific analyses or particular regions, and none has resulted in a tool that reproduces the spatial distribution of temperature or precipitation while taking orography into account and that allows data on these variables to be obtained for a given location. Building on this relationship and using multi-annual monthly records of air temperature and precipitation, vertical temperature gradients were calculated and precipitation was related to altitude. Then, based on the altitude data provided by the DEM, temperature and precipitation values were calculated, and those values were interpolated to generate monthly geospatial models.

  9. Action-angle variables for the harmonic oscillator : ambiguity spin x duplication spin

    International Nuclear Information System (INIS)

    Oliveira, C.R. de; Malta, C.P.

    1983-08-01

    The difficulties of obtaining a well-defined unitary transformation to action-angle variables for the harmonic oscillator were overcome by M. Moshinsky and T.H. Seligman through the introduction of a spinlike variable (ambiguity spin) from a classical point of view. The difficulty of defining a unitary phase operator for the harmonic oscillator was overcome by Roger G. Newton, also through the introduction of a spinlike variable (named duplication spin by us), but within a quantum framework. The relation between the ambiguity spin and the duplication spin is investigated by introducing both types of spin in the canonical transformation to action-angle variables. In this way it is possible to obtain both a well-defined unitary transformation and a phase operator. (Author) [pt

  10. Modeling the influence of atmospheric leading modes on the variability of the Arctic freshwater cycle

    Science.gov (United States)

    Niederdrenk, L.; Sein, D.; Mikolajewicz, U.

    2013-12-01

    Global general circulation models show remarkable differences in modeling the Arctic freshwater cycle. While they agree on the general sinks and sources of the freshwater budget, they differ largely in the magnitude of the mean values as well as in the variability of the freshwater terms. Regional models can better resolve the complex topography and small-scale processes, but they are often uncoupled, thus missing the air-sea interaction. Additionally, regional models mostly use some kind of salinity restoring or flux correction, thus disturbing the freshwater budget. Our approach to investigate the Arctic hydrologic cycle and its variability is a regional atmosphere-ocean model setup, consisting of the global ocean model MPIOM with high resolution in the Arctic coupled to the regional atmosphere model REMO. The domain of the atmosphere model covers all catchment areas of the rivers draining into the Arctic. To account for all sinks and sources of freshwater in the Arctic, we include a discharge model providing terrestrial lateral water flows. We run the model without salinity restoring but with freshwater correction, which is set to zero in the Arctic. This allows for the analysis of a closed freshwater budget in the Arctic region. We perform experiments for the second half of the 20th century and use data from the global model MPIOM/ECHAM5, run with historical conditions and used within the 4th Assessment Report of the IPCC, as forcing for our regional model. With this setup, we investigate how the dominant modes of large-scale atmospheric variability impact the variability in the freshwater components. We focus on the two leading empirical orthogonal functions of winter mean sea level pressure, as well as on the North Atlantic Oscillation and the Siberian High. These modes have a large impact on the Arctic Ocean circulation as well as on the solid and liquid export through Fram Strait and through the Canadian archipelago. However, they cannot explain

  11. Classification criteria of syndromes by latent variable models

    DEFF Research Database (Denmark)

    Petersen, Janne

    2010-01-01

    , although this is often desired. I have proposed a new method for predicting class membership that, in contrast to methods based on posterior probabilities of class membership, yields consistent estimates when regressed on explanatory variables in a subsequent analysis. There are four different basic models...... analyses. Part 1: HALS engages different phenotypic changes of peripheral lipoatrophy and central lipohypertrophy.  There are several different definitions of HALS and no consensus on the number of phenotypes. Many of the definitions consist of counting fulfilled criteria on markers and do not include...

  12. Toward a Unified Representation of Atmospheric Convection in Variable-Resolution Climate Models

    Energy Technology Data Exchange (ETDEWEB)

    Walko, Robert [Univ. of Miami, Coral Gables, FL (United States)

    2016-11-07

    The purpose of this project was to improve the representation of convection in atmospheric weather and climate models that employ computational grids with spatially-variable resolution. Specifically, our work targeted models whose grids are fine enough over selected regions that convection is resolved explicitly, while over other regions the grid is coarser and convection is represented as a subgrid-scale process. The working criterion for a successful scheme for representing convection over this range of grid resolution was that identical convective environments must produce very similar convective responses (i.e., the same precipitation amount, rate, and timing, and the same modification of the atmospheric profile) regardless of grid scale. The need for such a convective scheme has increased in recent years as more global weather and climate models have adopted variable resolution meshes that are often extended into the range of resolving convection in selected locations.

  13. A multiprofessional information model for Brazilian primary care: Defining a consensus model towards an interoperable electronic health record.

    Science.gov (United States)

    Braga, Renata Dutra

    2016-06-01

    To develop a multiprofessional information model to be used in the decision-making process in primary care in Brazil. This was an observational study with a descriptive and exploratory approach, using action research associated with the Delphi method. A group of 13 health professionals made up a panel of experts that, through individual and group meetings, drew up a preliminary health information records model. The questionnaire used to validate this model included four questions based on a Likert scale. These questions evaluated the completeness and relevance of information on each of the four pillars that composed the model. The changes suggested in each round of evaluation were included when accepted by the majority (≥ 50%). This process was repeated as many times as necessary to obtain the desirable and recommended consensus level (> 50%), and the final version became the consensus model. Multidisciplinary health training of the panel of experts allowed a consensus model to be obtained based on four categories of health information, called pillars: Data Collection, Diagnosis, Care Plan and Evaluation. The obtained consensus model was considered valid by the experts and can contribute to the collection and recording of multidisciplinary information in primary care, as well as the identification of relevant concepts for defining electronic health records at this level of complexity in health care. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.

  14. Incorporating soil variability in continental soil water modelling: a trade-off between data availability and model complexity

    Science.gov (United States)

    Peeters, L.; Crosbie, R. S.; Doble, R.; van Dijk, A. I. J. M.

    2012-04-01

    Developing a continental land surface model implies finding a balance between the complexity in representing the system processes and the availability of reliable data to drive, parameterise and calibrate the model. While a high level of process understanding at plot or catchment scales may warrant a complex model, such data are not available at the continental scale. This data sparsity is especially an issue for the Australian Water Resources Assessment system, AWRA-L, a land-surface model designed to estimate the components of the water balance for the Australian continent. This study focuses on the conceptualization and parametrization of the soil drainage process in AWRA-L. Traditionally, soil drainage is simulated with Richards' equation, which is highly non-linear. As general analytic solutions are not available, this equation is usually solved numerically. In AWRA-L, however, we introduce a simpler function based on simulation experiments that solve Richards' equation. In the simplified function, the soil drainage rate, the ratio of drainage (D) over storage (S), decreases exponentially with relative water content. This function is controlled by three parameters: the soil water storage at field capacity (S_FC), the drainage fraction at field capacity (K_FC) and a drainage function exponent (β): D/S = K_FC exp(-β(1 - S/S_FC)). To obtain spatially variable estimates of these three parameters, the Atlas of Australian Soils is used, which lists soil hydraulic properties for each soil profile type. For each soil profile type in the Atlas, 10 days of draining an initially fully saturated, freely draining soil is simulated using HYDRUS-1D. With field capacity defined as the volume of water in the soil after 1 day, the remaining parameters can be obtained by fitting the AWRA-L soil drainage function to the HYDRUS-1D results. This model conceptualisation fully exploits the data available in the Atlas of Australian Soils, without the need to solve the non-linear Richards' equation.
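
    A minimal Python sketch of the fitting step, with hypothetical drainage values standing in for HYDRUS-1D output; the numbers are illustrative only.

      import numpy as np
      from scipy.optimize import curve_fit

      def drainage_rate(s_rel, k_fc, beta):
          # AWRA-L style drainage fraction D/S as a function of S/S_FC
          return k_fc * np.exp(-beta * (1.0 - s_rel))

      # hypothetical relative storages and drainage fractions standing in
      # for output of a Richards-equation (HYDRUS-1D) drainage simulation
      s_rel = np.array([2.0, 1.6, 1.3, 1.1, 1.0, 0.9, 0.8])
      d_over_s = np.array([0.95, 0.55, 0.30, 0.18, 0.14, 0.10, 0.07])

      (k_fc, beta), _ = curve_fit(drainage_rate, s_rel, d_over_s, p0=[0.1, 2.0])
      print(f"K_FC = {k_fc:.3f} per day, beta = {beta:.2f}")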

  15. Improvement in latent variable indirect response modeling of multiple categorical clinical endpoints: application to modeling of guselkumab treatment effects in psoriatic patients.

    Science.gov (United States)

    Hu, Chuanpu; Randazzo, Bruce; Sharma, Amarnath; Zhou, Honghui

    2017-10-01

    Exposure-response modeling plays an important role in optimizing dose and dosing regimens during clinical drug development. The modeling of multiple endpoints is made possible in part by recent progress in latent variable indirect response (IDR) modeling for ordered categorical endpoints. This manuscript aims to investigate the level of improvement achievable by jointly modeling two such endpoints in the latent variable IDR modeling framework through the sharing of model parameters. This is illustrated with an application to the exposure-response of guselkumab, a human IgG1 monoclonal antibody in clinical development that blocks IL-23. A Phase 2b study was conducted in 238 patients with psoriasis for which disease severity was assessed using Psoriasis Area and Severity Index (PASI) and Physician's Global Assessment (PGA) scores. A latent variable Type I IDR model was developed to evaluate the therapeutic effect of guselkumab dosing on 75, 90 and 100% improvement of PASI scores from baseline and PGA scores, with placebo effect empirically modeled. The results showed that the joint model is able to describe the observed data better with fewer parameters compared with the common approach of separately modeling the endpoints.

  16. Remote Sensing-Driven Climatic/Environmental Variables for Modelling Malaria Transmission in Sub-Saharan Africa

    Directory of Open Access Journals (Sweden)

    Osadolor Ebhuoma

    2016-06-01

    Malaria is a serious public health threat in Sub-Saharan Africa (SSA), and its transmission risk varies geographically. Modelling its geographic characteristics is essential for identifying the spatial and temporal risk of malaria transmission. Remote sensing (RS) has been serving as an important tool in providing and assessing a variety of potential climatic/environmental malaria transmission variables in diverse areas. This review focuses on the utilization of RS-driven climatic/environmental variables in determining malaria transmission in SSA. A systematic search on Google Scholar and the Institute for Scientific Information (ISI) Web of Knowledge(SM) databases (PubMed, Web of Science and ScienceDirect) was carried out. We identified thirty-five peer-reviewed articles that studied the relationship between remotely-sensed climatic variable(s) and malaria epidemiological data in the SSA sub-regions. The relationship between malaria disease and different climatic/environmental proxies was examined using different statistical methods. Across the SSA sub-region, the normalized difference vegetation index (NDVI) derived from either the National Oceanic and Atmospheric Administration (NOAA) Advanced Very High Resolution Radiometer (AVHRR) or Moderate-resolution Imaging Spectrometer (MODIS) satellite sensors was most frequently returned as a statistically-significant variable to model both spatial and temporal malaria transmission. Furthermore, generalized linear models (linear regression, logistic regression and Poisson regression) were the most frequently-employed methods of statistical analysis in determining malaria transmission predictors in East, Southern and West Africa. By contrast, multivariate analysis was used in Central Africa. We stress that the utilization of RS in determining reliable malaria transmission predictors and climatic/environmental monitoring variables would require a tailored approach that will have cognizance of the geographical

  17. Remote Sensing-Driven Climatic/Environmental Variables for Modelling Malaria Transmission in Sub-Saharan Africa.

    Science.gov (United States)

    Ebhuoma, Osadolor; Gebreslasie, Michael

    2016-06-14

    Malaria is a serious public health threat in Sub-Saharan Africa (SSA), and its transmission risk varies geographically. Modelling its geographic characteristics is essential for identifying the spatial and temporal risk of malaria transmission. Remote sensing (RS) has been serving as an important tool in providing and assessing a variety of potential climatic/environmental malaria transmission variables in diverse areas. This review focuses on the utilization of RS-driven climatic/environmental variables in determining malaria transmission in SSA. A systematic search on Google Scholar and the Institute for Scientific Information (ISI) Web of Knowledge(SM) databases (PubMed, Web of Science and ScienceDirect) was carried out. We identified thirty-five peer-reviewed articles that studied the relationship between remotely-sensed climatic variable(s) and malaria epidemiological data in the SSA sub-regions. The relationship between malaria disease and different climatic/environmental proxies was examined using different statistical methods. Across the SSA sub-region, the normalized difference vegetation index (NDVI) derived from either the National Oceanic and Atmospheric Administration (NOAA) Advanced Very High Resolution Radiometer (AVHRR) or Moderate-resolution Imaging Spectrometer (MODIS) satellite sensors was most frequently returned as a statistically-significant variable to model both spatial and temporal malaria transmission. Furthermore, generalized linear models (linear regression, logistic regression and Poisson regression) were the most frequently-employed methods of statistical analysis in determining malaria transmission predictors in East, Southern and West Africa. By contrast, multivariate analysis was used in Central Africa. We stress that the utilization of RS in determining reliable malaria transmission predictors and climatic/environmental monitoring variables would require a tailored approach that will have cognizance of the geographical

  18. The Effect of Macroeconomic Variables on Value-Added Agriculture: Approach of Vector Autoregressive Bayesian Model (BVAR)

    Directory of Open Access Journals (Sweden)

    E. Pishbahar

    2015-05-01

    There are different ideas and opinions about the effects of macroeconomic variables on real and nominal variables. To answer the question of whether changes in macroeconomic variables are useful as a policy tool over the business cycle, understanding the effect of macroeconomic variables on economic growth is important. In the present study, a Bayesian vector autoregressive (BVAR) model and seasonal data for the years 1991 to 2013 were used to determine the impact of monetary policy on value-added agriculture. Forecasts from standard vector autoregressive models are often unreliable because of the large number of parameters in the model. A Bayesian vector autoregressive model yields more reliable predictions by shrinking the effective number of parameters through prior information, so the coefficients are estimated more accurately. Based on the RMSE results in this study, the Normal-Wishart prior was identified as a suitable prior distribution. According to the results of the impulse response function, the effects of sudden shocks in macroeconomic variables on value-added agriculture and domestic venture capital are persistent, while the effects on exchange rates, tax revenues and the money supply are moderated after 7, 5 and 4 periods, respectively. Monetary policy shocks increased the value added of agriculture in the first half of the year, while in the second half of the year they had a depressing effect on the value added.
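
    A compact Python sketch of the shrinkage mechanics that make BVAR forecasts more stable: with a Normal prior centred on a prior coefficient matrix, the posterior mean blends prior and least-squares information. The data, prior mean and shrinkage strength below are hypothetical, and a full Normal-Wishart treatment would also update the error covariance.

      import numpy as np

      def bvar_posterior_mean(Y, X, B0, lam):
          # posterior mean of VAR coefficients under a Normal prior centred
          # on B0 with precision lam: a blend of prior and OLS information
          P0 = lam * np.eye(X.shape[1])
          return np.linalg.solve(X.T @ X + P0, X.T @ Y + P0 @ B0)

      # hypothetical quarterly series: value added, money supply, exchange rate
      rng = np.random.default_rng(1)
      data = rng.normal(size=(92, 3)).cumsum(axis=0)
      Y, X = data[1:], data[:-1]          # VAR(1), constant omitted for brevity
      B0 = np.eye(3)                      # random-walk prior mean
      print(bvar_posterior_mean(Y, X, B0, lam=10.0).round(2))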

  19. Improving plot- and regional-scale crop models for simulating impacts of climate variability and extremes

    Science.gov (United States)

    Tao, F.; Rötter, R.

    2013-12-01

    Many studies on global climate report that climate variability is increasing with more frequent and intense extreme events. There are quite large uncertainties from both the plot- and regional-scale models in simulating impacts of climate variability and extremes on crop development, growth and productivity. One key to reducing the uncertainties is better exploitation of experimental data to eliminate crop model deficiencies and develop better algorithms that more adequately capture the impacts of extreme events, such as high temperature and drought, on crop performance. In the present study, in a first step, the inter-annual variability in wheat yield and climate from 1971 to 2012 in Finland was investigated. Using statistical approaches the impacts of climate variability and extremes on wheat growth and productivity were quantified. In a second step, a plot-scale model, WOFOST, and a regional-scale crop model, MCWLA, were calibrated and validated, and applied to simulate wheat growth and yield variability from 1971-2012. Next, the estimated impacts of high temperature stress, cold damage, and drought stress on crop growth and productivity based on the statistical approaches, and on crop simulation models WOFOST and MCWLA, were compared. Then, the impact mechanisms of climate extremes on crop growth and productivity in the WOFOST model and MCWLA model were identified, and subsequently, the various algorithms and impact functions were fitted against the long-term crop trial data. Finally, the impact mechanisms, algorithms and functions in the WOFOST model and MCWLA model were improved to better simulate the impacts of climate variability and extremes, particularly high temperature stress, cold damage and drought stress, for location-specific and large area climate impact assessments. Our studies provide a good example of how to improve, in parallel, the plot- and regional-scale models for simulating impacts of climate variability and extremes, as needed for

  20. Investigation of clinical pharmacokinetic variability of an opioid antagonist through physiologically based absorption modeling.

    Science.gov (United States)

    Ding, Xuan; He, Minxia; Kulkarni, Rajesh; Patel, Nita; Zhang, Xiaoyu

    2013-08-01

    Identifying the source of inter- and/or intrasubject variability in pharmacokinetics (PK) provides fundamental information for understanding the pharmacokinetic-pharmacodynamic relationship of a drug and projecting its efficacy and safety in clinical populations. This identification process can be challenging given that a large number of potential causes could lead to PK variability. Here we present an integrated approach of physiologically based absorption modeling to investigate the root cause of the unexpectedly high PK variability of a Phase I clinical trial drug. LY2196044 exhibited high intersubject variability in the absorption phase of plasma concentration-time profiles in humans. This could not be explained by in vitro measurements of drug properties and the excellent bioavailability with low variability observed in preclinical species. GastroPlus™ modeling suggested that the compound's optimal solubility and permeability characteristics would enable rapid and complete absorption in preclinical species and in humans. However, simulations of human plasma concentration-time profiles indicated that despite the sufficient solubility and rapid dissolution of LY2196044 in humans, permeability and/or transit in the gastrointestinal (GI) tract may have been negatively affected. It was concluded that the clinical PK variability was potentially due to the drug's antagonism of opioid receptors, which affected its transit and absorption in the GI tract. Copyright © 2013 Wiley Periodicals, Inc.

  1. Statistical modeling methods to analyze the impacts of multiunit process variability on critical quality attributes of Chinese herbal medicine tablets

    Directory of Open Access Journals (Sweden)

    Sun F

    2016-11-01

    The quality of Chinese herbal medicine tablets suffers from batch-to-batch variability due to a lack of manufacturing process understanding. In this paper, the Panax notoginseng saponins (PNS) immediate release tablet was taken as the research subject. By defining the dissolution of five active pharmaceutical ingredients and the tablet tensile strength as critical quality attributes (CQAs), influences of both the manipulated process parameters introduced by an orthogonal experiment design and the intermediate granules’ properties on the CQAs were fully investigated by different chemometric methods, such as the partial least squares, the orthogonal projection to latent structures, and the multiblock partial least squares (MBPLS). By analyzing the loadings plots and variable importance in the projection indexes, the granule particle sizes and the minimal punch tip separation distance in tableting were identified as critical process parameters. Additionally, the MBPLS model suggested that the lubrication time in the final blending was also important in predicting tablet quality attributes. From the calculated block importance in the projection indexes, the tableting unit was confirmed to be the critical process unit of the manufacturing line. The results demonstrated that the combinatorial use of different multivariate modeling methods could help in understanding the complex process relationships as a whole. The output of this study can then be used to define a control strategy to improve the quality of the PNS immediate release tablet. Keywords: Panax
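
    A short Python sketch of the variable importance in the projection (VIP) calculation used to flag critical process parameters, here with scikit-learn's PLS implementation and synthetic process data; the variable layout and coefficients are hypothetical.

      import numpy as np
      from sklearn.cross_decomposition import PLSRegression

      def vip_scores(pls, n_features):
          # variable importance in the projection for a fitted PLS model
          T = pls.x_scores_     # (n_samples, n_components)
          W = pls.x_weights_    # (n_features, n_components)
          Q = pls.y_loadings_   # (n_targets, n_components)
          ssy = np.sum(T ** 2, axis=0) * np.sum(Q ** 2, axis=0)  # per component
          w_norm = W / np.linalg.norm(W, axis=0)
          return np.sqrt(n_features * (w_norm ** 2 @ ssy) / ssy.sum())

      # synthetic process data, e.g. granule sizes, punch distance, ...
      rng = np.random.default_rng(7)
      X = rng.normal(size=(40, 6))
      y = 2.0 * X[:, 0] - 1.5 * X[:, 3] + rng.normal(scale=0.3, size=40)

      pls = PLSRegression(n_components=2).fit(X, y)
      print(vip_scores(pls, X.shape[1]).round(2))  # columns 0 and 3 stand out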

  2. Evaluating measurement models in clinical research: covariance structure analysis of latent variable models of self-conception.

    Science.gov (United States)

    Hoyle, R H

    1991-02-01

    Indirect measures of psychological constructs are vital to clinical research. On occasion, however, the meaning of indirect measures of psychological constructs is obfuscated by statistical procedures that do not account for the complex relations between items and latent variables and among latent variables. Covariance structure analysis (CSA) is a statistical procedure for testing hypotheses about the relations among items that indirectly measure a psychological construct and relations among psychological constructs. This article introduces clinical researchers to the strengths and limitations of CSA as a statistical procedure for conceiving and testing structural hypotheses that are not tested adequately with other statistical procedures. The article is organized around two empirical examples that illustrate the use of CSA for evaluating measurement models with correlated error terms, higher-order factors, and measured and latent variables.

  3. Up-scaling of multi-variable flood loss models from objects to land use units at the meso-scale

    Science.gov (United States)

    Kreibich, Heidi; Schröter, Kai; Merz, Bruno

    2016-05-01

    Flood risk management increasingly relies on risk analyses, including loss modelling. Most of the flood loss models usually applied in standard practice have in common that complex damaging processes are described by simple approaches like stage-damage functions. Novel multi-variable models significantly improve loss estimation on the micro-scale and may also be advantageous for large-scale applications. However, more input parameters also introduce additional uncertainty, even more so in up-scaling procedures for meso-scale applications, where the parameters need to be estimated on a regional, area-wide basis. To gain more knowledge about the challenges associated with the up-scaling of multi-variable flood loss models, the following approach is applied: single- and multi-variable micro-scale flood loss models are up-scaled and applied on the meso-scale, namely on the basis of ATKIS land-use units. Application and validation are undertaken in 19 municipalities that were affected during the 2002 flood of the River Mulde in Saxony, Germany, by comparison with official loss data provided by the Saxon Relief Bank (SAB). In the meso-scale, case-study-based model validation, most multi-variable models show smaller errors than the uni-variable stage-damage functions. The results show the suitability of the up-scaling approach and, in accordance with micro-scale validation studies, that multi-variable models are an improvement in flood loss modelling on the meso-scale as well. However, uncertainties remain high, stressing the importance of uncertainty quantification. Thus, the development of probabilistic loss models, like the BT-FLEMO model used in this study, which inherently provide uncertainty information, is the way forward.

  4. Catchment variability and parameter estimation in multi-objective regionalisation of a rainfall-runoff model

    NARCIS (Netherlands)

    Deckers, Dave L.E.H.; Booij, Martijn J.; Rientjes, T.H.M.; Krol, Martinus S.

    2010-01-01

    This study attempts to examine if catchment variability favours regionalisation by principles of catchment similarity. Our work combines calibration of a simple conceptual model for multiple objectives and multi-regression analyses to establish a regional model between model sensitive parameters and

  5. Modelling global water stress of the recent past: on the relative importance of trends in water demand and climate variability

    Science.gov (United States)

    Wada, Y.; van Beek, L. P. H.; Bierkens, M. F. P.

    2011-12-01

    During the past decades, human water use has more than doubled, yet available freshwater resources are finite. As a result, water scarcity has been prevalent in various regions of the world. Here, we present the first global assessment of the past development of water stress considering not only climate variability but also growing water demand, desalinated water use and non-renewable groundwater abstraction over the period 1960-2001 at a spatial resolution of 0.5°. Agricultural water demand is estimated based on past extents of irrigated areas and livestock densities. We approximate past economic development based on GDP, energy and household consumption and electricity production, which are subsequently used together with population numbers to estimate industrial and domestic water demand. Climate variability is expressed by simulated blue water availability, defined as freshwater in rivers, lakes, wetlands and reservoirs, by means of the global hydrological model PCR-GLOBWB. We thus define blue water stress by comparing blue water availability with the corresponding net total blue water demand by means of the commonly used Water Scarcity Index. The results show a drastic increase in the global population living under water-stressed conditions (i.e. moderate to high water stress) due to growing water demand, primarily for irrigation, which has more than doubled from 1708/818 to 3708/1832 km3 yr-1 (gross/net) over the period 1960-2000. We estimate that 800 million people, or 27% of the global population, were living under water-stressed conditions in 1960. This number eventually increased to 2.6 billion, or 43%, by 2000. Our results indicate that increased water demand is a decisive factor for heightened water stress in various regions such as India and North China, enhancing the intensity of water stress up to 200%, while climate variability is often a main determinant of extreme events. However, our results also suggest that in several emerging and developing economies
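
    The core index is simple enough to state in a few lines of Python: the Water Scarcity Index is the ratio of net total blue water demand to blue water availability, with moderate and high stress commonly placed above 0.2 and 0.4. The cell values below are hypothetical.

      import numpy as np

      def water_scarcity_index(net_demand, availability):
          # WSI = net total blue water demand / blue water availability
          return np.divide(net_demand, availability,
                           out=np.full_like(net_demand, np.nan),
                           where=availability > 0)

      # hypothetical grid-cell values (km3/yr)
      demand = np.array([0.8, 2.5, 0.1, 4.0, 1.2])
      avail = np.array([4.0, 5.0, 3.0, 5.0, 1.5])
      wsi = water_scarcity_index(demand, avail)

      # thresholds commonly used with this index
      stress = np.select([wsi >= 0.4, wsi >= 0.2], ["high", "moderate"], "low")
      print(list(zip(wsi.round(2), stress)))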

  6. Automated optimization and construction of chemometric models based on highly variable raw chromatographic data.

    Science.gov (United States)

    Sinkov, Nikolai A; Johnston, Brandon M; Sandercock, P Mark L; Harynuk, James J

    2011-07-04

    Direct chemometric interpretation of raw chromatographic data (as opposed to integrated peak tables) has been shown to be advantageous in many circumstances. However, this approach presents two significant challenges: data alignment and feature selection. In order to interpret the data, the time axes must be precisely aligned so that the signal from each analyte is recorded at the same coordinates in the data matrix for each and every analyzed sample. Several alignment approaches exist in the literature and they work well when the samples being aligned are reasonably similar. In cases where the background matrix for a series of samples to be modeled is highly variable, the performance of these approaches suffers. Considering the challenge of feature selection, when the raw data are used each signal at each time is viewed as an individual, independent variable; with the data rates of modern chromatographic systems, this generates hundreds of thousands of candidate variables, or tens of millions of candidate variables if multivariate detectors such as mass spectrometers are utilized. Consequently, an automated approach to identify and select appropriate variables for inclusion in a model is desirable. In this research we present an alignment approach that relies on a series of deuterated alkanes which act as retention anchors for an alignment signal, and couple this with an automated feature selection routine based on our novel cluster resolution metric for the construction of a chemometric model. The model system that we use to demonstrate these approaches is a series of simulated arson debris samples analyzed by passive headspace extraction, GC-MS, and interpreted using partial least squares discriminant analysis (PLS-DA). Copyright © 2011 Elsevier B.V. All rights reserved.
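
    A minimal Python sketch of the anchor-based alignment idea: matched anchor retention times (such as a deuterated n-alkane series) define a piecewise-linear warp of the sample time axis onto the reference axis, after which the signal is resampled onto a common grid. The retention times and the single synthetic peak are hypothetical, and the cluster-resolution feature selection step is not shown.

      import numpy as np

      def align_to_reference(t_sample, signal, anchors_sample, anchors_ref, t_ref):
          # piecewise-linear warp of the sample time axis defined by matched
          # anchor retention times, then resampling onto the reference grid
          t_mapped = np.interp(t_sample, anchors_sample, anchors_ref)
          return np.interp(t_ref, t_mapped, signal)

      # hypothetical anchor retention times (min) in sample vs reference runs
      anchors_s = np.array([2.10, 5.35, 9.02, 14.80])
      anchors_r = np.array([2.00, 5.20, 8.90, 14.60])
      t = np.linspace(0.0, 16.0, 4000)
      signal = np.exp(-0.5 * ((t - 9.02) / 0.05) ** 2)  # one peak on an anchor

      aligned = align_to_reference(t, signal, anchors_s, anchors_r, t)
      print(f"peak after alignment: {t[np.argmax(aligned)]:.2f} min")  # ~8.90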

  7. Predictive modeling and reducing cyclic variability in autoignition engines

    Science.gov (United States)

    Hellstrom, Erik; Stefanopoulou, Anna; Jiang, Li; Larimore, Jacob

    2016-08-30

    Methods and systems are provided for controlling a vehicle engine to reduce cycle-to-cycle combustion variation. A predictive model is applied to predict cycle-to-cycle combustion behavior of an engine based on observed engine performance variables. Conditions are identified, based on the predicted cycle-to-cycle combustion behavior, that indicate high cycle-to-cycle combustion variation. Corrective measures are then applied to prevent the predicted high cycle-to-cycle combustion variation.

  8. Variable-Period Undulators For Synchrotron Radiation

    Science.gov (United States)

    Shenoy, Gopal; Lewellen, John; Shu, Deming; Vinokurov, Nikolai

    2005-02-22

    A new and improved undulator design is provided that enables a variable period length for the production of synchrotron radiation from both medium-energy and high-energy storage rings. The variable period length is achieved using a staggered array of pole pieces made up of high permeability material, permanent magnet material, or an electromagnetic structure. The pole pieces are separated by a variable width space. The sum of the variable width space and the pole width would therefore define the period of the undulator. Features and advantages of the invention include broad photon energy tunability, constant power operation and constant brilliance operation.

  9. Variable-Period Undulators for Synchrotron Radiation

    Energy Technology Data Exchange (ETDEWEB)

    Shenoy, Gopal; Lewellen, John; Shu, Deming; Vinokurov, Nikolai

    2005-02-22

    A new and improved undulator design is provided that enables a variable period length for the production of synchrotron radiation from both medium-energy and high energy storage rings. The variable period length is achieved using a staggered array of pole pieces made up of high permeability material, permanent magnet material, or an electromagnetic structure. The pole pieces are separated by a variable width space. The sum of the variable width space and the pole width would therefore define the period of the undulator. Features and advantages of the invention include broad photon energy tunability, constant power operation and constant brilliance operation.

  10. Correlation Between Fracture Network Properties and Stress Variability in Geological Media

    Science.gov (United States)

    Lei, Qinghua; Gao, Ke

    2018-05-01

    We quantitatively investigate the stress variability in fractured geological media under tectonic stresses. The fracture systems studied include synthetic fracture networks following power law length scaling and natural fracture patterns based on outcrop mapping. The stress field is derived from a finite-discrete element model, and its variability is analyzed using a set of mathematical formulations that honor the tensorial nature of stress data. We show that local stress perturbation, quantified by the Euclidean distance of a local stress tensor to the mean stress tensor, has a positive, linear correlation with local fracture intensity, defined as the total fracture length per unit area within a local sampling window. We also evaluate the stress dispersion of the entire stress field using the effective variance, that is, a scalar-valued measure of the overall stress variability. The results show that a well-connected fracture system under a critically stressed state exhibits strong local and global stress variabilities.
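
    A sketch in Python of the two quantities described above, under the simplifying assumption that the stress-tensor distance is the Frobenius norm of the tensor difference and the effective variance is the mean squared distance to the mean tensor; the sampled tensors are synthetic, and the paper's exact formulations may differ.

      import numpy as np

      def stress_distance(sigma, sigma_ref):
          # Euclidean (Frobenius) distance between two stress tensors
          return np.linalg.norm(sigma - sigma_ref)

      def effective_variance(tensors):
          # scalar measure of overall stress variability: mean squared
          # distance of each local tensor to the mean stress tensor
          mean_tensor = tensors.mean(axis=0)
          return np.mean([stress_distance(s, mean_tensor) ** 2 for s in tensors])

      # synthetic local 2D stress tensors (MPa), kept symmetric
      rng = np.random.default_rng(3)
      base = np.array([[10.0, 2.0], [2.0, 6.0]])
      samples = base + rng.normal(scale=1.5, size=(500, 2, 2))
      samples = 0.5 * (samples + samples.transpose(0, 2, 1))
      print(f"effective variance: {effective_variance(samples):.2f} MPa^2")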

  11. Comparing proxy and model estimates of hydroclimate variability and change over the Common Era

    Science.gov (United States)

    Hydro2k Consortium, Pages

    2017-12-01

    Water availability is fundamental to societies and ecosystems, but our understanding of variations in hydroclimate (including extreme events, flooding, and decadal periods of drought) is limited because of a paucity of modern instrumental observations that are distributed unevenly across the globe and only span parts of the 20th and 21st centuries. Such data coverage is insufficient for characterizing hydroclimate and its associated dynamics because of its multidecadal to centennial variability and highly regionalized spatial signature. High-resolution (seasonal to decadal) hydroclimatic proxies that span all or parts of the Common Era (CE) and paleoclimate simulations from climate models are therefore important tools for augmenting our understanding of hydroclimate variability. In particular, the comparison of the two sources of information is critical for addressing the uncertainties and limitations of both while enriching each of their interpretations. We review the principal proxy data available for hydroclimatic reconstructions over the CE and highlight the contemporary understanding of how these proxies are interpreted as hydroclimate indicators. We also review the available last-millennium simulations from fully coupled climate models and discuss several outstanding challenges associated with simulating hydroclimate variability and change over the CE. A specific review of simulated hydroclimatic changes forced by volcanic events is provided, as is a discussion of expected improvements in estimated radiative forcings, models, and their implementation in the future. Our review of hydroclimatic proxies and last-millennium model simulations is used as the basis for articulating a variety of considerations and best practices for how to perform proxy-model comparisons of CE hydroclimate. This discussion provides a framework for how best to evaluate hydroclimate variability and its associated dynamics using these comparisons and how they can better inform

  12. The effect of the number of seed variables on the performance of Cooke's classical model

    International Nuclear Information System (INIS)

    Eggstaff, Justin W.; Mazzuchi, Thomas A.; Sarkani, Shahram

    2014-01-01

    In risk analysis, Cooke's classical model for aggregating expert judgment has been widely used for over 20 years. However, the validity of this model has been the subject of much debate. Critics assert that this model's scoring rule may unintentionally reward experts who manipulate their quantile estimates in order to receive a greater weight. In addition, the question of the number of seed variables required to ensure adequate performance of Cooke's classical model remains unanswered. In this study, we conduct a comprehensive examination of the model through an iterative, cross-validation test to perform an out-of-sample comparison between Cooke's classical model and the equal-weight linear opinion pool method on almost all of the expert judgment studies compiled by Cooke and colleagues to date. Our results indicate that Cooke's classical model significantly outperforms equally weighting expert judgment, regardless of the number of seed variables used; however, there may, in fact, be a maximum number of seed variables beyond which Cooke's model cannot outperform an equally-weighted panel. - Highlights: • We examine Cooke's classical model through an iterative, cross-validation test. • The performance-based and equally weighted decision makers are compared. • Results strengthen Cooke's argument for a two-fold cross-validation approach. • Accuracy test results show strong support in favor of Cooke's classical method. • There may be a maximum number of seed variables that ensures model performance
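
    For orientation, a Python sketch of the calibration score at the heart of Cooke's classical model for experts assessed on 5%/50%/95% quantiles of seed variables: the empirical distribution of where realizations fall among an expert's quantiles is compared, via relative information and an asymptotic chi-squared test, with the theoretical (0.05, 0.45, 0.45, 0.05) mass. The realizations and quantiles are hypothetical.

      import numpy as np
      from scipy.stats import chi2

      def calibration(realizations, quantiles):
          # quantiles: one (q05, q50, q95) triple per seed variable
          p = np.array([0.05, 0.45, 0.45, 0.05])  # theoretical interquantile mass
          idx = [np.searchsorted(q, x) for x, q in zip(realizations, quantiles)]
          s = np.bincount(idx, minlength=4) / len(realizations)
          mask = s > 0
          info = np.sum(s[mask] * np.log(s[mask] / p[mask]))  # relative information
          return chi2.sf(2 * len(realizations) * info, df=3)  # calibration score

      # hypothetical seed-variable quantiles and realizations for one expert
      quantiles = [(2.0, 5.0, 9.0), (1.0, 3.0, 6.0), (10.0, 15.0, 25.0),
                   (0.2, 0.5, 1.1), (4.0, 7.0, 12.0), (0.8, 2.0, 4.5)]
      truths = [4.1, 2.2, 18.0, 0.9, 6.5, 1.5]
      print(f"calibration: {calibration(truths, quantiles):.3f}")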

  13. Multiresponse semiparametric regression for modelling the effect of regional socio-economic variables on the use of information technology

    Science.gov (United States)

    Wibowo, Wahyu; Wene, Chatrien; Budiantara, I. Nyoman; Permatasari, Erma Oktania

    2017-03-01

    Multiresponse semiparametric regression is a simultaneous-equation regression model that fuses parametric and nonparametric components. The regression model comprises several equations, and each equation has two components, one parametric and one nonparametric. The model used here has a linear function as the parametric component and a truncated polynomial spline as the nonparametric component. The model can handle both linear and nonlinear relationships between the responses and the sets of predictor variables. The aim of this paper is to demonstrate the application of the regression model to modelling the effect of regional socio-economic variables on the use of information technology. More specifically, the response variables are the percentage of households with access to the internet and the percentage of households with a personal computer, and the predictor variables are the percentage of literate people, the percentage of electrification and the percentage of economic growth. Based on identification of the relationships between the responses and the predictor variables, economic growth is treated as a nonparametric predictor and the others as parametric predictors. The result shows that multiresponse semiparametric regression can be applied well, as indicated by the high coefficient of determination, 90 percent.
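
    A minimal Python sketch of the parametric-plus-truncated-spline structure: the design matrix carries the linear (parametric) predictor alongside truncated power basis terms for the nonparametric predictor, and the coefficients follow from least squares. Variable names, knots and data are hypothetical, and the multiresponse (simultaneous-equation) aspect is omitted.

      import numpy as np

      def semiparametric_design(x_par, x_nonpar, knots, degree=1):
          # linear parametric part plus a truncated power spline basis for
          # the nonparametric predictor
          spline = [np.maximum(x_nonpar - k, 0.0) ** degree for k in knots]
          return np.column_stack([np.ones_like(x_nonpar), x_par,
                                  x_nonpar, *spline])

      # hypothetical regional data: literacy (parametric), growth (nonparametric)
      rng = np.random.default_rng(5)
      literacy = rng.uniform(70, 99, 60)
      growth = rng.uniform(0, 10, 60)
      internet = 0.4 * literacy + 3.0 * np.sin(growth / 2) + rng.normal(0, 1, 60)

      X = semiparametric_design(literacy, growth, knots=[2.5, 5.0, 7.5])
      beta, *_ = np.linalg.lstsq(X, internet, rcond=None)
      print(beta.round(2))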

  14. Multi-Model Assessment of Trends and Variability in Terrestrial Carbon Uptake in India

    Science.gov (United States)

    Rao, A. S.; Bala, G.; Ravindranath, N. H.

    2015-12-01

    The Indian terrestrial ecosystem exhibits large temporal and spatial variability in carbon sources and sinks due to its monsoon-based climate system, diverse land use and land cover distribution, and cultural practices. In this study, a multi-model assessment is made of the trends and variability in the land carbon uptake for India over the 20th century. Data from nine models that are part of a recent land surface model intercomparison project called TRENDY are used for the study. These models are driven with common forcing data over the period 1901-2010. Model output variables assessed include: gross primary production (GPP), heterotrophic respiration (Rh), autotrophic respiration (Ra) and net primary production (NPP). The net ecosystem productivity (NEP) for the Indian region was calculated as the difference between NPP and Rh, and the NEP indicates increasing uptake over the century, with an estimated trend of -0.6 TgC/year per year. NPP for India also shows an increasing trend of 2.03% per decade from 1901-2010. The seasonal variation in the multimodel mean NPP peaks during the southwest monsoon period (JJA), followed by the post-monsoon period (SON), and is attributed to the rainfall maximum for the region during the months of JJA. To attribute the changes seen in the land carbon variables, the influence of climatic drivers such as precipitation and temperature and the remote influence of large-scale phenomena such as ENSO on the land carbon of the region are also estimated in the study. It is found that although changes in precipitation show a good correlation with the changes seen in NEP, remote drivers like ENSO do not have much effect on them. The net ecosystem exchange (NEE) is calculated with the inclusion of the land use change flux and fire flux from the models. NEE suggests that the region behaves as a small sink for carbon, with a net uptake of 5 GtC over the past hundred years.

  15. Functionally relevant climate variables for arid lands: A climatic water deficit approach for modelling desert shrub distributions

    Science.gov (United States)

    Thomas E. Dilts; Peter J. Weisberg; Camie M. Dencker; Jeanne C. Chambers

    2015-01-01

    We have three goals. (1) To develop a suite of functionally relevant climate variables for modelling vegetation distribution on arid and semi-arid landscapes of the Great Basin, USA. (2) To compare the predictive power of vegetation distribution models based on mechanistically proximate factors (water deficit variables) and factors that are more mechanistically removed...

  16. Model Parameter Variability for Enhanced Anaerobic Bioremediation of DNAPL Source Zones

    Science.gov (United States)

    Mao, X.; Gerhard, J. I.; Barry, D. A.

    2005-12-01

    The objective of the Source Area Bioremediation (SABRE) project, an international collaboration of twelve companies, two government agencies and three research institutions, is to evaluate the performance of enhanced anaerobic bioremediation for the treatment of chlorinated ethene source areas containing dense, non-aqueous phase liquids (DNAPL). This 4-year, US$5.7 million research effort focuses on a pilot-scale demonstration of enhanced bioremediation at a trichloroethene (TCE) DNAPL field site in the United Kingdom, and includes a significant program of laboratory and modelling studies. Prior to field implementation, a large-scale, multi-laboratory microcosm study was performed to determine the optimal system properties to support dehalogenation of TCE in site soil and groundwater. This statistically based suite of experiments measured the influence of key variables (electron donor, nutrient addition, bioaugmentation, TCE concentration and sulphate concentration) on the reductive dechlorination of TCE to ethene. As well, a comprehensive biogeochemical numerical model was developed for simulating the anaerobic dehalogenation of chlorinated ethenes. An appropriate (reduced) version of this model was combined with a parameter estimation method based on fitting of the experimental results. Each of over 150 individual microcosm calibrations involved matching predicted and observed time-varying concentrations of all chlorinated compounds. This study focuses on an analysis of this suite of fitted model parameter values. This includes determining the statistical correlation between parameters typically employed in standard Michaelis-Menten-type rate descriptions (e.g., maximum dechlorination rates, half-saturation constants) and the key experimental variables. The analysis provides insight into the degree to which aqueous-phase TCE and cis-DCE inhibit dechlorination of less-chlorinated compounds. Overall, this work provides a database of the numerical
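
    To make the rate description concrete, a Python sketch of a sequential Michaelis-Menten (Monod-type) dechlorination chain TCE → cis-DCE → VC → ethene; the maximum rates and half-saturation constants are hypothetical, and the competitive inhibition terms discussed above are omitted for brevity.

      import numpy as np
      from scipy.integrate import solve_ivp

      # hypothetical maximum rates (mg/L/day) and half-saturation constants
      # (mg/L) for the chain TCE -> cis-DCE -> VC -> ethene
      VMAX = np.array([1.0, 0.6, 0.3])
      KS = np.array([2.0, 3.0, 4.0])

      def dechlorination(t, c):
          tce, dce, vc, eth = c
          substrates = np.array([tce, dce, vc])
          r = VMAX * substrates / (KS + substrates)  # Michaelis-Menten rates
          return [-r[0], r[0] - r[1], r[1] - r[2], r[2]]

      sol = solve_ivp(dechlorination, (0.0, 60.0), [10.0, 0.0, 0.0, 0.0])
      print(sol.y[:, -1].round(2))  # concentrations after 60 days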

  17. Developing Baltic cod recruitment models II : Incorporation of environmental variability and species interaction

    DEFF Research Database (Denmark)

    Köster, Fritz; Hinrichsen, H.H.; St. John, Michael

    2001-01-01

    We investigate whether a process-oriented approach based on the results of field, laboratory, and modelling studies can be used to develop a stock-environment-recruitment model for Central Baltic cod (Gadus morhua). Based on exploratory statistical analysis, significant variables influencing...... affecting survival of eggs, predation by clupeids on eggs, larval transport, and cannibalism. Results showed that recruitment in the most important spawning area, the Bornholm Basin, during 1976-1995 was related to egg production; however, other factors affecting survival of the eggs (oxygen conditions......, predation) were also significant and when incorporated explained 69% of the variation in 0-group recruitment. In other spawning areas, variable hydrographic conditions did not allow for regular successful egg development. Hence, relatively simple models proved sufficient to predict recruitment of 0-group...

  18. Phylogenetic tree reconstruction accuracy and model fit when proportions of variable sites change across the tree.

    Science.gov (United States)

    Shavit Grievink, Liat; Penny, David; Hendy, Michael D; Holland, Barbara R

    2010-05-01

    Commonly used phylogenetic models assume a homogeneous process through time in all parts of the tree. However, it is known that these models can be too simplistic as they do not account for nonhomogeneous lineage-specific properties. In particular, it is now widely recognized that as constraints on sequences evolve, the proportion and positions of variable sites can vary between lineages, causing heterotachy. The extent to which this model misspecification affects tree reconstruction is still unknown. Here, we evaluate the effect of changes in the proportions and positions of variable sites on model fit and tree estimation. We consider 5 current models of nucleotide sequence evolution in a Bayesian Markov chain Monte Carlo framework as well as maximum parsimony (MP). We show that for a tree with 4 lineages where 2 nonsister taxa undergo a change in the proportion of variable sites, tree reconstruction under the best-fitting model, which is chosen using a relative test, often results in the wrong tree. In this case, we found that an absolute test of model fit is a better predictor of tree estimation accuracy. We also found further evidence that MP is not immune to heterotachy. In addition, we show that increased sampling of taxa that have undergone a change in proportion and positions of variable sites is critical for accurate tree reconstruction.

  19. A Poisson regression approach to model monthly hail occurrence in Northern Switzerland using large-scale environmental variables

    Science.gov (United States)

    Madonna, Erica; Ginsbourger, David; Martius, Olivia

    2018-05-01

    In Switzerland, hail regularly causes substantial damage to agriculture, cars and infrastructure; however, little is known about its long-term variability. To study this variability, the monthly number of days with hail in northern Switzerland is modeled in a regression framework using large-scale predictors derived from ERA-Interim reanalysis. The model is developed and verified using radar-based hail observations for the extended summer season (April-September) in the period 2002-2014. The seasonality of hail is explicitly modeled with a categorical predictor (month), and monthly anomalies of several large-scale predictors are used to capture the year-to-year variability. Several regression models are applied and their performance tested with respect to standard scores and cross-validation. The chosen model includes four predictors: the monthly anomaly of the two-meter temperature, the monthly anomaly of the logarithm of the convective available potential energy (CAPE), the monthly anomaly of the wind shear, and the month. This model captures the intra-annual variability well and slightly underestimates the inter-annual variability. The regression model is applied to the reanalysis data back to 1980. The resulting hail-day time series shows an increase in the number of hail days per month, which is (in the model) related to an increase in temperature and CAPE. The trend corresponds to approximately 0.5 days per month per decade. The results of the regression model have been compared to two independent data sets. All data sets agree on the sign of the trend, but the trend is weaker in the other data sets.
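
    A hedged sketch of this modelling setup, a Poisson GLM with month as a categorical predictor plus monthly anomalies, is shown below on simulated data; the column names, coefficients and counts are stand-ins, not values from the study.

        import numpy as np
        import pandas as pd
        import statsmodels.formula.api as smf

        rng = np.random.default_rng(0)
        months = ["Apr", "May", "Jun", "Jul", "Aug", "Sep"]
        n_years = 13                                   # 2002-2014
        df = pd.DataFrame({
            "month": np.tile(months, n_years),
            "t2m_anom": rng.normal(0, 1, 6 * n_years),
            "log_cape_anom": rng.normal(0, 1, 6 * n_years),
            "shear_anom": rng.normal(0, 1, 6 * n_years),
        })
        # Synthetic truth: month-dependent base rate plus anomaly effects
        base = df["month"].map(dict(zip(months, [0.2, 0.8, 1.2, 1.4, 1.1, 0.4])))
        lam = np.exp(base + 0.3 * df["t2m_anom"] + 0.4 * df["log_cape_anom"]
                     - 0.2 * df["shear_anom"])
        df["hail_days"] = rng.poisson(lam)

        fit = smf.poisson(
            "hail_days ~ C(month) + t2m_anom + log_cape_anom + shear_anom",
            data=df).fit()
        print(fit.summary())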

  20. Viscous dark energy models with variable G and Λ

    International Nuclear Information System (INIS)

    Arbab, Arbab I.

    2008-01-01

    We consider a cosmological model with bulk viscosity η, a variable cosmological constant Λ ∝ ρ^(-α), α = const, and a variable gravitational constant G. The model exhibits many interesting cosmological features. Inflation proceeds due to the presence of bulk viscosity and dark energy without requiring the equation of state p = -ρ. During the inflationary era the energy density ρ does not remain constant, as in the de Sitter case. Moreover, the cosmological and gravitational constants increase exponentially with time, whereas the energy density and viscosity decrease exponentially with time. The rate of mass creation during inflation is found to be enormous, suggesting that all matter in the universe is created during inflation. (author)
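
    For orientation, the standard FRW relations commonly used in such bulk-viscous models with variable G and Λ take the following generic form (a sketch of the usual framework, not the paper's exact equations):

        \begin{aligned}
        H^{2} &= \frac{8\pi G(t)}{3}\,\rho + \frac{\Lambda(t)}{3},
        \qquad \Lambda(t) \propto \rho^{-\alpha},\\
        \dot{\rho} + 3H\left(\rho + p - 3\eta H\right)
        &= -\frac{\dot{G}}{G}\,\rho - \frac{\dot{\Lambda}}{8\pi G},
        \end{aligned}

    where p_eff = p - 3ηH is the effective pressure reduced by the bulk-viscous stress; it is this viscous term that can drive accelerated expansion without requiring p = -ρ.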

  1. Design variables and constraints in fashion store design processes

    DEFF Research Database (Denmark)

    Haug, Anders; Borch Münster, Mia

    2015-01-01

    is to identify the most important store design variables, organise these variables into categories, understand the design constraints between categories, and determine the most influential stakeholders. Design/methodology/approach: – Based on a discussion of existing literature, the paper defines a framework...... into categories, provides an understanding of constraints between categories of variables, and identifies the most influential stakeholders. The paper demonstrates that the fashion store design task can be understood through a system perspective, implying that the store design task becomes a matter of defining......Purpose: – Several frameworks of retail store environment variables exist, but as shown by this paper, they are not particularly well-suited for supporting fashion store design processes. Thus, in order to provide an improved understanding of fashion store design, the purpose of this paper...

  2. Modeling water scarcity over south Asia: Incorporating crop growth and irrigation models into the Variable Infiltration Capacity (VIC) model

    Science.gov (United States)

    Troy, Tara J.; Ines, Amor V. M.; Lall, Upmanu; Robertson, Andrew W.

    2013-04-01

    Large-scale hydrologic models, such as the Variable Infiltration Capacity (VIC) model, are used for a variety of studies, from drought monitoring to projecting the potential impact of climate change on the hydrologic cycle decades in advance. The majority of these models simulate the natural hydrological cycle and neglect the effects of human activities such as irrigation, which can result in streamflow withdrawals and increased evapotranspiration. In some parts of the world, these activities do not significantly affect the hydrologic cycle, but this is not the case in south Asia, where irrigated agriculture has a large water footprint. To address this gap, we incorporate a crop growth model and an irrigation model into the VIC model in order to simulate the impacts of irrigated and rainfed agriculture on the hydrologic cycle over south Asia (the Indus, Ganges, and Brahmaputra basins and peninsular India). The crop growth model responds to climate signals, including temperature and water stress, to simulate the growth of maize, wheat, rice, and millet. For the primarily rainfed maize crop, the crop growth model shows good correlation with observed All-India yields (0.7), with lower correlations for the irrigated wheat and rice crops (0.4). The difference in correlation is because irrigation provides a buffer against climate conditions, so that rainfed crop growth is more tied to climate than irrigated crop growth. The irrigation water demands induce hydrologic water stress in significant parts of the region, particularly in the Indus, with the streamflow unable to meet the irrigation demands. Although rainfall can vary significantly in south Asia, we find that water scarcity is largely chronic due to the irrigation demands rather than being intermittent due to climate variability.
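
    As an illustration of the per-cell bookkeeping such a coupled irrigation scheme performs, the following is a hedged sketch of a soil-moisture-based irrigation demand calculation; the FAO-56-style stress factor, the function name and all values are illustrative assumptions, not the VIC implementation.

        # Hedged sketch of per-cell irrigation demand bookkeeping; all
        # names and numbers are illustrative, not the coupled VIC scheme.

        def irrigation_demand(et_crop_potential, soil_moisture, wilting_point,
                              field_capacity, effective_precip):
            """Water needed to avoid crop water stress in one time step (mm)."""
            # Water stress factor Ks in [0, 1]: 1 means unstressed
            available = max(soil_moisture - wilting_point, 0.0)
            capacity = field_capacity - wilting_point
            ks = min(available / capacity, 1.0)
            # Irrigated crops: top up the deficit that precipitation does not meet
            deficit = et_crop_potential * (1.0 - ks)
            return max(deficit - effective_precip, 0.0)

        demand = irrigation_demand(et_crop_potential=6.0, soil_moisture=18.0,
                                   wilting_point=10.0, field_capacity=30.0,
                                   effective_precip=1.5)   # all in mm per day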

  3. Meta-modeling of occupancy variables and analysis of their impact on energy outcomes of office buildings

    International Nuclear Information System (INIS)

    Wang, Qinpeng; Augenbroe, Godfried; Kim, Ji-Hyun; Gu, Li

    2016-01-01

    Highlights: • A meta-analysis framework for a stochastic characterization of occupancy variables. • Sensitivity ranking of occupancy variability against all other sources of uncertainty. • Sensitivity of occupant presence for building energy consumption is low. • Accurate mean knowledge is sufficient for predicting building energy consumption. • Prediction of peak demand behavior requires stochastic occupancy modeling. - Abstract: Occupants interact with buildings in various ways via their presence (passive effects) and control actions (active effects). Therefore, understanding the influence of occupants is essential if we are to evaluate the performance of a building. In this paper, we model the mean profiles and the variability of occupancy variables (presence and actions) separately. We use a multivariate Gaussian distribution to generate mean profiles of occupancy variables, while the variability is represented by a multi-dimensional time-series model, within a framework for a meta-analysis that synthesizes occupancy data gathered from a pool of buildings. We then discuss variants of occupancy models with respect to various outcomes of interest, such as HVAC energy consumption and peak demand behavior, via a sensitivity analysis. Results show that our approach is able to generate stochastic occupancy profiles, requiring minimal additional input from the energy modeler other than standard diversity profiles. Along with the meta-analysis, we enable the generalization of previous research results and statistical inferences to choose occupancy variables for future buildings. The sensitivity analysis shows that for aggregated building energy consumption, occupant presence has a smaller impact than lighting and appliance usage. Specifically, a cumulative 55% error with regard to presence translates to only a 2% error in aggregated cooling energy in July and a 3.6% error in heating energy in January. Such a finding redirects focus to the
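
    A minimal sketch of the two-part generation scheme described above, assuming hourly resolution, an exponential covariance for the mean-profile draw and an AR(1) process for the variability; all numbers are illustrative.

        import numpy as np

        rng = np.random.default_rng(1)
        hours = np.arange(24)
        # Office-like mean presence profile peaking in early afternoon
        mean_profile = np.exp(-((hours - 13) / 4.0) ** 2)
        # Smooth covariance so adjacent hours co-vary (exponential kernel)
        cov = 0.02 * np.exp(-np.abs(hours[:, None] - hours[None, :]) / 3.0)
        profile = rng.multivariate_normal(mean_profile, cov)

        # AR(1) variability superimposed on the drawn mean profile
        phi, sigma = 0.7, 0.05          # persistence and noise scale
        eps = np.zeros(24)
        for t in range(1, 24):
            eps[t] = phi * eps[t - 1] + rng.normal(0, sigma)

        occupancy = np.clip(profile + eps, 0.0, 1.0)  # fraction present per hour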

  4. Blackboard architecture and qualitative model in a computer aided assistant designed to define computers for HEP computing

    International Nuclear Information System (INIS)

    Nodarse, F.F.; Ivanov, V.G.

    1991-01-01

    Using a BLACKBOARD architecture and a qualitative model, an expert system was developed to assist the user in defining the computer configuration for High Energy Physics computing. The COMEX system requires an IBM AT personal computer or compatible with more than 640 Kb RAM and a hard disk. 5 refs.; 9 figs

  5. Toward modular biological models: defining analog modules based on referent physiological mechanisms.

    Science.gov (United States)

    Petersen, Brenden K; Ropella, Glen E P; Hunt, C Anthony

    2014-08-16

    Currently, most biomedical models exist in isolation. It is often difficult to reuse or integrate models or their components, in part because they are not modular. Modular components allow the modeler to think more deeply about the role of the model and to more completely address a modeling project's requirements. In particular, modularity facilitates component reuse and model integration for models with different use cases, including the ability to exchange modules during or between simulations. The heterogeneous nature of biology and the vast range of wet-lab experimental platforms call for modular models designed to satisfy a variety of use cases. We argue that software analogs of biological mechanisms are reasonable candidates for modularization. Biomimetic software mechanisms composed of physiomimetic mechanism modules offer benefits that are unique or especially important to multi-scale biomedical modeling and simulation. We present a general, scientific method of modularizing mechanisms into reusable software components that we call physiomimetic mechanism modules (PMMs). PMMs utilize parametric containers that partition and expose state information into physiologically meaningful groupings. To demonstrate, we modularize four pharmacodynamic response mechanisms adapted from an in silico liver (ISL). We verified the modularization process by showing that drug clearance results from in silico experiments are identical before and after modularization. The modularized ISL achieves validation targets drawn from propranolol outflow profile data. In addition, an in silico hepatocyte culture (ISHC) is created. The ISHC uses the same PMMs and required no refactoring. The ISHC achieves validation targets drawn from propranolol intrinsic clearance data exhibiting considerable between-lab variability. The data used as validation targets for PMMs originate from both in vitro and in vivo experiments exhibiting large fold differences in time scale. This report demonstrates
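
    To make the idea concrete, here is a hedged sketch of what a physiomimetic mechanism module with a parametric container might look like; the class and parameter names are hypothetical and are not taken from the ISL code base.

        # Hypothetical sketch of a reusable mechanism module whose parametric
        # container groups state into physiologically meaningful sections.
        from dataclasses import dataclass, field

        @dataclass
        class ParameterContainer:
            binding: dict = field(default_factory=lambda: {"k_on": 0.1, "k_off": 0.01})
            metabolism: dict = field(default_factory=lambda: {"v_max": 1.0, "k_m": 0.5})

        class ClearanceModule:
            """Reusable pharmacodynamic response mechanism (e.g. hepatic clearance)."""
            def __init__(self, params: ParameterContainer):
                self.params = params

            def step(self, drug_amount: float, dt: float) -> float:
                vm = self.params.metabolism["v_max"]
                km = self.params.metabolism["k_m"]
                cleared = vm * drug_amount / (km + drug_amount) * dt
                return max(drug_amount - cleared, 0.0)

        # The same module could plug into an ISL- or ISHC-style simulation
        module = ClearanceModule(ParameterContainer())
        amount = 1.0
        for _ in range(10):
            amount = module.step(amount, dt=0.1)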

  6. Initial CGE Model Results Summary Exogenous and Endogenous Variables Tests

    Energy Technology Data Exchange (ETDEWEB)

    Edwards, Brian Keith [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Boero, Riccardo [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Rivera, Michael Kelly [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)

    2017-08-07

    The following discussion presents initial results of tests of the most recent version of the National Infrastructure Simulation and Analysis Center Dynamic Computable General Equilibrium (CGE) model developed by Los Alamos National Laboratory (LANL). The intent of this effort is to test and assess the model’s behavioral properties. The tests evaluated whether the predicted impacts are reasonable from a qualitative perspective, that is, whether a predicted change, be it an increase or decrease in other model variables, is consistent with prior economic intuition and expectations. One of the purposes of this effort is to determine whether model changes are needed in order to improve its behavior qualitatively and quantitatively.

  7. COMPARISON OF KEPLER PHOTOMETRIC VARIABILITY WITH THE SUN ON DIFFERENT TIMESCALES

    International Nuclear Information System (INIS)

    Basri, Gibor; Walkowicz, Lucianne M.; Reiners, Ansgar

    2013-01-01

    We utilize Kepler data to study the precision differential photometric variability of solar-type and cooler stars at different timescales, ranging from half an hour to three months. We define a diagnostic that characterizes the median differential intensity change between data bins of a given timescale. We apply the same diagnostic to Solar and Heliospheric Observatory data that have been rendered comparable to Kepler. The Sun exhibits photometric variability on all timescales similar to that of comparable solar-type stars in the Kepler field. The previously defined photometric "range" serves as our activity proxy (driven by starspot coverage). We revisit the fraction of comparable stars in the Kepler field that are more active than the Sun. The exact active fraction depends on what is meant by "more active than the Sun" and on the magnitude limit of the sample of stars considered. This active fraction is between a quarter and a third (depending on the timescale). We argue that a reliable result requires timescales of half a day or longer and stars brighter than Kepler magnitude 14; otherwise, non-stellar noise distorts it. We also analyze main sequence stars grouped by temperature from 6500 to 3500 K. As one moves to cooler stars, the active fraction of stars becomes steadily larger (greater than 90% for early M dwarfs). The Sun is a good photometric model at all timescales for those cooler stars that have long-term variability within the span of solar variability.
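
    A minimal sketch of a diagnostic in this spirit, computing the median absolute change between consecutive bin means at a given timescale; the exact binning and normalization used in the paper are not reproduced here.

        import numpy as np

        def median_differential_variability(flux, bin_size):
            """Median absolute change between consecutive bin means."""
            n = len(flux) // bin_size * bin_size
            bins = flux[:n].reshape(-1, bin_size).mean(axis=1)
            return np.median(np.abs(np.diff(bins)))

        rng = np.random.default_rng(2)
        lightcurve = 1.0 + 0.001 * np.cumsum(rng.normal(size=4000))  # toy light curve
        for bin_size in (1, 16, 48):   # e.g. 0.5 h, 8 h, 1 day at 30-min cadence
            print(bin_size, median_differential_variability(lightcurve, bin_size))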

  8. Quantifying uncertainty, variability and likelihood for ordinary differential equation models

    LENUS (Irish Health Repository)

    Weisse, Andrea Y

    2010-10-28

    Abstract Background In many applications, ordinary differential equation (ODE) models are subject to uncertainty or variability in initial conditions and parameters. Both uncertainty and variability can be quantified in terms of a probability density function on the state and parameter space. Results The partial differential equation that describes the evolution of this probability density function has a form that is particularly amenable to application of the well-known method of characteristics. The value of the density at some point in time is directly accessible by the solution of the original ODE extended by a single extra dimension (for the value of the density). This leads to simple methods for studying uncertainty, variability and likelihood, with significant advantages over more traditional Monte Carlo and related approaches, especially when studying regions with low probability. Conclusions While such approaches based on the method of characteristics are common practice in other disciplines, their advantages for the study of biological systems have so far remained unrecognized. Several examples illustrate performance and accuracy of the approach and its limitations.
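
    The construction described, solving the original ODE extended by one extra dimension that carries the density value, can be sketched as follows for a one-dimensional linear decay model; along a characteristic the density obeys dρ/dt = -ρ ∇·f. The rate constant and initial values are illustrative.

        import numpy as np
        from scipy.integrate import solve_ivp

        k = 0.5                      # decay rate (an uncertain parameter in general)

        def augmented(t, y):
            x, rho = y
            dxdt = -k * x            # original ODE: dx/dt = f(x) = -k x
            div_f = -k               # divergence of the vector field, d f / d x
            return [dxdt, -rho * div_f]   # density: d(rho)/dt = -rho * div f

        # Propagate one point of the initial density along its characteristic
        sol = solve_ivp(augmented, (0.0, 5.0), [1.0, 0.8], dense_output=True)
        x_t, rho_t = sol.y[:, -1]    # state and density value at t = 5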

  9. Demographic models reveal the shape of density dependence for a specialist insect herbivore on variable host plants.

    Science.gov (United States)

    Miller, Tom E X

    2007-07-01

    1. It is widely accepted that density-dependent processes play an important role in most natural populations. However, persistent challenges in our understanding of density-dependent population dynamics include evaluating the shape of the relationship between density and demographic rates (linear, concave, convex), and identifying extrinsic factors that can mediate this relationship. 2. I studied the population dynamics of the cactus bug Narnia pallidicornis on host plants (Opuntia imbricata) that varied naturally in relative reproductive effort (RRE, the proportion of meristems allocated to reproduction), an important plant quality trait. I manipulated per-plant cactus bug densities, quantified subsequent dynamics, and fit stage-structured models to the experimental data to ask if and how density influences demographic parameters. 3. In the field experiment, I found that populations with variable starting densities quickly converged upon similar growth trajectories. In the model-fitting analyses, the data strongly supported a model that defined the juvenile cactus bug retention parameter (joint probability of surviving and not dispersing) as a nonlinear decreasing function of density. The estimated shape of this relationship shifted from concave to convex with increasing host-plant RRE. 4. The results demonstrate that host-plant traits are critical sources of variation in the strength and shape of density dependence in insects, and highlight the utility of integrated experimental-theoretical approaches for identifying processes underlying patterns of change in natural populations.

  10. A definability theorem for first order logic

    NARCIS (Netherlands)

    Butz, C.; Moerdijk, I.

    1997-01-01

    In this paper we will present a definability theorem for first order logic. This theorem is very easy to state, and its proof only uses elementary tools. To explain the theorem, let us first observe that if M is a model of a theory T in a language L, then clearly any definable subset S ⊆ M (i.e., a subset S

  11. Semantic Model of Variability and Capabilities of IoT Applications for Embedded Software Ecosystems

    DEFF Research Database (Denmark)

    Tomlein, Matus; Grønbæk, Kaj

    2016-01-01

    reasoning to resolve context requirements. We present the implications on the architecture of the ecosystem and the concepts defined in the model. Finally, we discuss the evaluation of the model and its benefits and liabilities. Although the approach results in more complex descriptions of applications, we...

  12. An empirical model for independent control of variable speed refrigeration system

    International Nuclear Information System (INIS)

    Li Hua; Jeong, Seok-Kwon; Yoon, Jung-In; You, Sam-Sang

    2008-01-01

    This paper deals with an empirical dynamic model for decoupling control of the variable speed refrigeration system (VSRS). To cope with the inherent complexity and nonlinearity in the system dynamics, the model parameters are first obtained based on experimental data. In the study, the dynamic characteristics of the indoor temperature and the superheat are assumed to follow first-order models with time delay. While the compressor frequency and the opening angle of the electronic expansion valve are varying, the indoor temperature and the superheat interfere with each other in the VSRS. Thus, a decoupling model has been proposed for each to eliminate such interference. Finally, the experiment and simulation results indicate that the proposed model offers a more tractable means of describing the actual VSRS compared to other models currently available
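
    As a sketch of the assumed model structure, the following computes the step response of a first-order-plus-dead-time (FOPDT) element of the kind used for the indoor temperature and superheat channels; the gain, time constant and delay are illustrative, not the paper's fitted values.

        import numpy as np

        def fopdt_step(K, tau, theta, t):
            """Step response y(t) = K * (1 - exp(-(t - theta)/tau)) for t >= theta."""
            y = np.zeros_like(t)
            mask = t >= theta
            y[mask] = K * (1.0 - np.exp(-(t[mask] - theta) / tau))
            return y

        t = np.linspace(0, 600, 601)                                 # seconds
        dT_indoor = fopdt_step(K=-0.08, tau=120.0, theta=15.0, t=t)  # degC per Hz step

    Decoupling then amounts to combining the 2x2 matrix of such transfer functions (two inputs, two outputs) so that each input effectively drives only one output.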

  13. Up-scaling of multi-variable flood loss models from objects to land use units at the meso-scale

    Directory of Open Access Journals (Sweden)

    H. Kreibich

    2016-05-01

    Full Text Available Flood risk management increasingly relies on risk analyses, including loss modelling. Most of the flood loss models usually applied in standard practice have in common that complex damaging processes are described by simple approaches like stage-damage functions. Novel multi-variable models significantly improve loss estimation on the micro-scale and may also be advantageous for large-scale applications. However, more input parameters also introduce additional uncertainty, even more so in up-scaling procedures for meso-scale applications, where the parameters need to be estimated on a regional, area-wide basis. To gain more knowledge about the challenges associated with the up-scaling of multi-variable flood loss models, the following approach is applied: single- and multi-variable micro-scale flood loss models are up-scaled and applied on the meso-scale, namely on the basis of ATKIS land-use units. Application and validation are undertaken in 19 municipalities that were affected during the 2002 flood by the River Mulde in Saxony, Germany, by comparison to official loss data provided by the Saxon Relief Bank (SAB). In the meso-scale, case-study-based model validation, most multi-variable models show smaller errors than the uni-variable stage-damage functions. The results show the suitability of the up-scaling approach and, in accordance with micro-scale validation studies, that multi-variable models are an improvement in flood loss modelling also on the meso-scale. However, uncertainties remain high, stressing the importance of uncertainty quantification. Thus, the development of probabilistic loss models, like the BT-FLEMO model used in this study, which inherently provide uncertainty information, is the way forward.

  14. Uni- and multi-variable modelling of flood losses: experiences gained from the Secchia river inundation event.

    Science.gov (United States)

    Carisi, Francesca; Domeneghetti, Alessio; Kreibich, Heidi; Schröter, Kai; Castellarin, Attilio

    2017-04-01

    Flood risk is a function of flood hazard and vulnerability; therefore, its accurate assessment depends on a reliable quantification of both factors. The scientific literature proposes a number of objective and reliable methods for assessing flood hazard, yet it highlights a limited understanding of the fundamental damage processes. Loss modelling is associated with large uncertainty which is, among other factors, due to a lack of standard procedures; for instance, flood losses are often estimated based on damage models derived in completely different contexts (i.e. different countries or geographical regions) without checking their applicability, or by considering only one explanatory variable (i.e. typically water depth). We consider the Secchia river flood event of January 2014, when a sudden levee breach caused the inundation of nearly 200 km² in Northern Italy. In the aftermath of this event, local authorities collected flood loss data, together with additional information on affected private households and industrial activities (e.g. building surface area and economic value, number of a company's employees, and others). Based on these data we implemented and compared a quadratic-regression damage function, with water depth as the only explanatory variable, and a multi-variable model that combines multiple regression trees and considers several explanatory variables (i.e. bagging decision trees). Our results show the importance of data collection, revealing that (1) a simple quadratic regression damage function based on empirical data from the study area can be significantly more accurate than literature damage models derived for a different context and (2) multi-variable modelling may outperform the uni-variable approach, yet it is more difficult to develop and apply due to a much higher demand for detailed data.
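
    The two model types compared in the study can be contrasted in a few lines; this is a hedged sketch on synthetic stand-in data, with scikit-learn's BaggingRegressor standing in for the bagging-decision-trees approach.

        import numpy as np
        from sklearn.ensemble import BaggingRegressor
        from sklearn.tree import DecisionTreeRegressor

        rng = np.random.default_rng(3)
        n = 300
        depth = rng.uniform(0, 3, n)                 # water depth (m)
        area = rng.uniform(50, 500, n)               # building surface area (m2)
        value = rng.uniform(1e5, 1e6, n)             # building value (EUR)
        # Synthetic loss with a depth-driven damage ratio plus noise
        loss = (0.1 * value * np.clip(0.4 * depth + 0.1 * depth**2, 0, 1)
                + rng.normal(0, 5e3, n))

        # Uni-variable model: quadratic regression on water depth alone
        coeffs = np.polyfit(depth, loss, deg=2)

        # Multi-variable model: bagged regression trees on several predictors
        X = np.column_stack([depth, area, value])
        bag = BaggingRegressor(DecisionTreeRegressor(), n_estimators=100,
                               random_state=0).fit(X, loss)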

  15. Hydroclimate variability in Scandinavia over the last millennium - insights from a climate model-proxy data comparison

    Science.gov (United States)

    Seftigen, Kristina; Goosse, Hugues; Klein, Francois; Chen, Deliang

    2017-12-01

    The integration of climate proxy information with general circulation model (GCM) results offers considerable potential for deriving greater understanding of the mechanisms underlying climate variability, as well as unique opportunities for out-of-sample evaluations of model performance. In this study, we combine insights from a new tree-ring hydroclimate reconstruction from Scandinavia with projections from a suite of forced transient simulations of the last millennium and historical intervals from the CMIP5 and PMIP3 archives. Model simulations and proxy reconstruction data are found to broadly agree on the modes of atmospheric variability that produce droughts-pluvials in the region. Despite these dynamical similarities, large differences between simulated and reconstructed hydroclimate time series remain. We find that the GCM-simulated multi-decadal and/or longer hydroclimate variability is systematically smaller than the proxy-based estimates, whereas the dominance of GCM-simulated high-frequency components of variability is not reflected in the proxy record. Furthermore, the paleoclimate evidence indicates in-phase coherencies between regional hydroclimate and temperature on decadal timescales, i.e., sustained wet periods have often been concurrent with warm periods and vice versa. The CMIP5-PMIP3 archive suggests, however, out-of-phase coherencies between the two variables in the last millennium. The lack of adequate understanding of mechanisms linking temperature and moisture supply on longer timescales has serious implications for attribution and prediction of regional hydroclimate changes. Our findings stress the need for further paleoclimate data-model intercomparison efforts to expand our understanding of the dynamics of hydroclimate variability and change, to enhance our ability to evaluate climate models, and to provide a more comprehensive view of future drought and pluvial risks.

  16. Structural identifiability of cyclic graphical models of biological networks with latent variables.

    Science.gov (United States)

    Wang, Yulin; Lu, Na; Miao, Hongyu

    2016-06-13

    Graphical models have long been used to describe biological networks for a variety of important tasks such as the determination of key biological parameters, and the structure of a graphical model ultimately determines whether such unknown parameters can be unambiguously obtained from experimental observations (i.e., the identifiability problem). Limited by resources or technical capacities, complex biological networks are usually partially observed in experiments, which thus introduces latent variables into the corresponding graphical models. A number of previous studies have tackled the parameter identifiability problem for graphical models such as linear structural equation models (SEMs) with or without latent variables. However, the limited resolution and efficiency of existing approaches necessarily calls for further development of novel structural identifiability analysis algorithms. An efficient structural identifiability analysis algorithm is developed in this study for a broad range of network structures. The proposed method adopts Wright's path coefficient method to generate identifiability equations in the form of symbolic polynomials, and then converts these symbolic equations to binary matrices (called identifiability matrices). Several matrix operations are introduced for identifiability matrix reduction with system equivalency maintained. Based on the reduced identifiability matrices, the structural identifiability of each parameter is determined. A number of benchmark models are used to verify the validity of the proposed approach. Finally, the network module for influenza A virus replication is employed as a real example to illustrate the application of the proposed approach in practice. The proposed approach can deal with cyclic networks with latent variables. The key advantage is that it intentionally avoids symbolic computation and is thus highly efficient. Also, this method is capable of determining the identifiability of each single parameter and

  17. Using variable combination population analysis for variable selection in multivariate calibration.

    Science.gov (United States)

    Yun, Yong-Huan; Wang, Wei-Ting; Deng, Bai-Chuan; Lai, Guang-Bi; Liu, Xin-bo; Ren, Da-Bing; Liang, Yi-Zeng; Fan, Wei; Xu, Qing-Song

    2015-03-03

    Variable (wavelength or feature) selection techniques have become a critical step for the analysis of datasets with a high number of variables and relatively few samples. In this study, a novel variable selection strategy, variable combination population analysis (VCPA), was proposed. This strategy consists of two crucial procedures. First, an exponentially decreasing function (EDF), which embodies the simple and effective 'survival of the fittest' principle from Darwin's theory of natural evolution, is employed to determine the number of variables to keep and continuously shrink the variable space. Second, in each EDF run, a binary matrix sampling (BMS) strategy, which gives each variable the same chance to be selected and generates different variable combinations, is used to produce a population of subsets to construct a population of sub-models. Then, model population analysis (MPA) is employed to find the variable subsets with the lowest root mean square error of cross-validation (RMSECV). The frequency of each variable appearing in the best 10% of sub-models is computed; the higher the frequency, the more important the variable. The performance of the proposed procedure was investigated using three real NIR datasets. The results indicate that VCPA is a good variable selection strategy when compared with four high-performing variable selection methods: genetic algorithm-partial least squares (GA-PLS), Monte Carlo uninformative variable elimination by PLS (MC-UVE-PLS), competitive adaptive reweighted sampling (CARS) and iteratively retains informative variables (IRIV). The MATLAB source code of VCPA is available for academic research on the website: http://www.mathworks.com/matlabcentral/fileexchange/authors/498750. Copyright © 2015 Elsevier B.V. All rights reserved.
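
    The two VCPA building blocks described above can be sketched as follows; the model-fitting step (cross-validated PLS giving RMSECV) is stubbed out with a random score, and the shrinking rule shown is illustrative rather than the paper's EDF schedule.

        import numpy as np

        rng = np.random.default_rng(4)
        n_vars, n_subsets = 50, 1000

        # BMS: each variable gets the same chance (0.5) of entering each subset
        bms = rng.random((n_subsets, n_vars)) < 0.5

        def rmsecv(subset_mask):
            # Stand-in for a cross-validated PLS model on the selected variables
            return rng.random()

        scores = np.array([rmsecv(row) for row in bms])
        best = bms[np.argsort(scores)[: n_subsets // 10]]  # best 10% of sub-models
        frequency = best.mean(axis=0)      # importance: how often a variable appears
        # Shrink the variable space; in VCPA the EDF dictates how many to keep
        keep = frequency > np.median(frequency)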

  18. Regression models for variables expressed as a continuous proportion

    Directory of Open Access Journals (Sweden)

    Aarón Salinas-Rodríguez

    2006-10-01

    the Public Health field. MATERIAL AND METHODS: From the National Reproductive Health Survey performed in 2003, the proportion of individual coverage in the family planning program (proposed in a study carried out at the National Institute of Public Health in Cuernavaca, Morelos, Mexico, in 2005) was modeled using the Normal, Gamma, Beta and quasi-likelihood regression models. The Akaike Information Criterion (AIC) proposed by McQuarrie and Tsai was used to define the best model. Then, using a simulation (Monte Carlo/Markov chain) approach, a variable with a Beta distribution was generated to evaluate the behavior of the 4 models while varying the sample size from 100 to 18 000 observations. RESULTS: Results showed that the best statistical option for the analysis of continuous proportions was the Beta regression model, since its assumptions are easily met and it had the lowest AIC value. Simulation showed that as the sample size increases, the Gamma and, even more so, the quasi-likelihood models come significantly closer to the Beta regression model. CONCLUSIONS: The use of parametric Beta regression is highly recommended to model continuous proportions, and the normal model should be avoided. If the sample size is large enough, the quasi-likelihood model represents a good alternative.
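
    A hedged sketch of the model comparison described, on simulated proportion data; BetaModel ships with recent statsmodels releases, the simulated coefficients are arbitrary, and the AIC comparison is shown purely for illustration.

        import numpy as np
        import statsmodels.api as sm
        from statsmodels.othermod.betareg import BetaModel

        rng = np.random.default_rng(5)
        n = 500
        x = rng.normal(size=n)
        mu = 1 / (1 + np.exp(-(0.5 + 0.8 * x)))     # mean response in (0, 1)
        y = rng.beta(mu * 20, (1 - mu) * 20)        # continuous proportion outcome
        X = sm.add_constant(x)

        fits = {
            "normal": sm.OLS(y, X).fit(),
            "gamma": sm.GLM(y, X,
                            family=sm.families.Gamma(sm.families.links.Log())).fit(),
            "beta": BetaModel(y, X).fit(),
        }
        for name, f in fits.items():
            print(name, f.aic)                      # lower AIC = preferred model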

  19. Relations between altered streamflow variability and fish assemblages in Eastern USA streams

    Science.gov (United States)

    Meador, Michael R.; Carlisle, Daren M.

    2012-01-01

    Although altered streamflow has been implicated as a major factor affecting fish assemblages, understanding the extent of streamflow alteration has required quantifying attributes of the natural flow regime. We used predictive models to quantify deviation from expected natural streamflow variability for streams in the eastern USA. Sites with >25% change in mean daily streamflow variability compared with what would be expected in a minimally disturbed environment were defined as having altered streamflow variability, based on the 10th and 90th percentiles of the distribution of streamflow variability at 1279 hydrological reference sites. We also used predictive models to assess fish assemblage condition and native species loss based on the proportion of expected native fish species that were observed. Of the 97 sites, 49 (50.5%) were classified as altered with reduced streamflow variability, whereas no sites had increased streamflow variability. Reduced streamflow variability was related to a 35% loss in native fish species, on average, and a >50% loss of species with a preference for riffle habitats. Conditional probability analysis indicated that the probability of fish assemblage impairment increased as the severity of altered streamflow variability increased. Reservoir storage capacity and wastewater discharges were important predictors of reduced streamflow variability as revealed by random forest analysis. Management and conservation of streams will require careful consideration of natural streamflow variation and potential factors contributing to altered streamflow within the entire watershed to limit the loss of critical stream habitats and fish species uniquely adapted to live in those habitats.

  20. Maximum Lateness Scheduling on Two-Person Cooperative Games with Variable Processing Times and Common Due Date

    OpenAIRE

    Liu, Peng; Wang, Xiaoli

    2017-01-01

    A new maximum lateness scheduling model in which both cooperative games and variable processing times exist simultaneously is considered in this paper. The job variable processing time is described by an increasing or a decreasing function dependent on the position of a job in the sequence. Two persons have to cooperate in order to process a set of jobs. Each of them has a single machine and their processing cost is defined as the minimum value of maximum lateness. All jobs have a common due ...

  1. Context-invariant quasi hidden variable (qHV) modelling of all joint von Neumann measurements for an arbitrary Hilbert space

    International Nuclear Information System (INIS)

    Loubenets, Elena R.

    2015-01-01

    We prove the existence for each Hilbert space of the two new quasi hidden variable (qHV) models, statistically noncontextual and context-invariant, reproducing all the von Neumann joint probabilities via non-negative values of real-valued measures and all the quantum product expectations—via the qHV (classical-like) average of the product of the corresponding random variables. In a context-invariant model, a quantum observable X can be represented by a variety of random variables satisfying the functional condition required in quantum foundations, but each of these random variables equivalently models X under all joint von Neumann measurements, regardless of their contexts. The proved existence of this model negates the general opinion that, in terms of random variables, the Hilbert space description of all the joint von Neumann measurements for dim H ≥ 3 can be reproduced only contextually. The existence of a statistically noncontextual qHV model, in particular, implies that every N-partite quantum state admits a local quasi hidden variable model introduced in Loubenets [J. Math. Phys. 53, 022201 (2012)]. The new results of the present paper also point to the generality of the quasi-classical probability model proposed in Loubenets [J. Phys. A: Math. Theor. 45, 185306 (2012)].

  2. The Role of Auxiliary Variables in Deterministic and Deterministic-Stochastic Spatial Models of Air Temperature in Poland

    Science.gov (United States)

    Szymanowski, Mariusz; Kryza, Maciej

    2017-02-01

    Our study examines the role of auxiliary variables in the process of spatial modelling and mapping of climatological elements, with air temperature in Poland used as an example. Multivariable algorithms are the most frequently applied for the spatialization of air temperature, and in many studies their results have proved to be better than those obtained by various one-dimensional techniques. In most of the previous studies, two main strategies were used to perform multidimensional spatial interpolation of air temperature. First, it was accepted that all variables significantly correlated with air temperature should be incorporated into the model. Second, it was assumed that the more spatial variation of air temperature was deterministically explained, the better the quality of the spatial interpolation. The main goal of the paper was to examine both of the above-mentioned assumptions. The analysis was performed using data from 250 meteorological stations and for 69 air temperature cases aggregated at different levels: from daily means to the 10-year annual mean. Two cases were considered for detailed analysis. The set of potential auxiliary variables covered 11 environmental predictors of air temperature. Another purpose of the study was to compare the results of interpolation given by various multivariable methods using the same set of explanatory variables. Two regression models, multiple linear regression (MLR) and geographically weighted regression (GWR), as well as their extensions to the regression-kriging form (MLRK and GWRK, respectively), were examined. Stepwise regression was used to select variables for the individual models, and the cross-validation method was used to validate the results, with special attention paid to statistically significant improvement of the model using the mean absolute error (MAE) criterion. The main results of this study led to the rejection of both assumptions considered. Usually, including more than two or three of the most significantly

  3. The role of updraft velocity in temporal variability of cloud hydrometeor number

    Science.gov (United States)

    Sullivan, Sylvia; Nenes, Athanasios; Lee, Dong Min; Oreopoulos, Lazaros

    2016-04-01

    Significant effort has been dedicated to incorporating direct aerosol-cloud links, through parameterization of liquid droplet activation and ice crystal nucleation, within climate models. This accomplishment has generated the need to understand which of the parameters affecting hydrometeor formation drive its variability in coupled climate simulations, as this provides the basis for optimal parameter estimation as well as robust comparison with data and other models. Sensitivity analysis alone does not address this issue, given that the importance of each parameter for hydrometeor formation depends on both its variance and its sensitivity. To address the above issue, we develop and use a series of attribution metrics, defined with adjoint sensitivities, to attribute the temporal variability in droplet and crystal number to important aerosol and dynamical parameters. This attribution analysis is done both for the NASA Global Modeling and Assimilation Office Goddard Earth Observing System Model, Version 5 and the National Center for Atmospheric Research Community Atmosphere Model Version 5.1. Within the GEOS simulation, up to 48% of the temporal variability in output ice crystal number and 61% in droplet number can be attributed to input updraft velocity fluctuations, while for the CAM simulation, they explain as much as 89% of the ice crystal number variability. The above results suggest that vertical velocity in both model frameworks is a very important (or dominant) driver of hydrometeor variability. Yet observations of vertical velocity are seldom available (or used) to evaluate the vertical velocities in simulations; this contrasts strikingly with the amount and quality of data available for aerosol-related parameters. Consequently, there is a strong need for retrievals or measurements of vertical velocity to address this important knowledge gap, which will require a significant investment and effort by the atmospheric community. The attribution metrics as a
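
    A minimal sketch of a variance-based attribution metric in this spirit: each parameter's share of output variance is estimated from its (adjoint) sensitivity and its variance. The functional form and the numbers are illustrative assumptions, not the metrics actually defined in the paper.

        import numpy as np

        def attribution(sensitivities, variances):
            """First-order share of output variance attributed to each input."""
            contrib = np.asarray(sensitivities) ** 2 * np.asarray(variances)
            return contrib / contrib.sum()

        # e.g. droplet-number sensitivities to [updraft w, aerosol number, kappa]
        shares = attribution(sensitivities=[120.0, 0.004, 30.0],
                             variances=[0.25, 1.0e4, 0.01])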

  4. Modified Regression Correlation Coefficient for Poisson Regression Model

    Science.gov (United States)

    Kaengthong, Nattacha; Domthong, Uthumporn

    2017-09-01

    This study focuses on indicators of the predictive power of the Generalized Linear Model (GLM), which are widely used but often have some restrictions. We are interested in the regression correlation coefficient for a Poisson regression model. This is a measure of predictive power, defined by the relationship between the dependent variable (Y) and the expected value of the dependent variable given the independent variables [E(Y|X)] for the Poisson regression model. The dependent variable is distributed as Poisson. The purpose of this research was to modify the regression correlation coefficient for the Poisson regression model. We also compare the proposed modified regression correlation coefficient with the traditional regression correlation coefficient in the case of two or more independent variables and in the presence of multicollinearity among the independent variables. The results show that the proposed regression correlation coefficient is better than the traditional one in terms of bias and the root mean square error (RMSE).
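
    For reference, the quantity being modified can be computed directly as the correlation between Y and the fitted values E(Y|X) from a Poisson GLM; a short sketch on simulated data follows, with arbitrary coefficients.

        import numpy as np
        import statsmodels.api as sm

        rng = np.random.default_rng(6)
        n = 400
        X = sm.add_constant(rng.normal(size=(n, 2)))
        lam = np.exp(X @ np.array([0.3, 0.5, -0.4]))
        y = rng.poisson(lam)

        fit = sm.GLM(y, X, family=sm.families.Poisson()).fit()
        # Traditional regression correlation coefficient: corr(Y, E(Y|X))
        r = np.corrcoef(y, fit.fittedvalues)[0, 1]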

  5. Xeno-Free and Defined Human Embryonic Stem Cell-Derived Retinal Pigment Epithelial Cells Functionally Integrate in a Large-Eyed Preclinical Model

    Directory of Open Access Journals (Sweden)

    Alvaro Plaza Reyes

    2016-01-01

    Full Text Available Human embryonic stem cell (hESC)-derived retinal pigment epithelial (RPE) cells could replace lost tissue in geographic atrophy (GA), but efficacy has yet to be demonstrated in a large-eyed model. Also, production of hESC-RPE has not yet been achieved in a xeno-free and defined manner, which is critical for clinical compliance and reduced immunogenicity. Here we describe an effective differentiation methodology using a human laminin-521 matrix with xeno-free and defined medium. Differentiated cells exhibited characteristics of native RPE, including morphology, pigmentation, marker expression, monolayer integrity, and polarization, together with phagocytic activity. Furthermore, we established a large-eyed GA model that allowed in vivo imaging of hESC-RPE and the host retina. Cells transplanted in suspension showed long-term integration and formed polarized monolayers exhibiting phagocytic and photoreceptor rescue capacity. We have developed a xeno-free and defined hESC-RPE differentiation method and present evidence of functional integration of clinically compliant hESC-RPE in a large-eyed disease model.

  6. X-Ray Quasi-periodic Oscillations in the Lense–Thirring Precession Model. I. Variability of Relativistic Continuum

    Science.gov (United States)

    You, Bei; Bursa, Michal; Życki, Piotr T.

    2018-05-01

    We develop a Monte Carlo code to compute the Compton-scattered X-ray flux arising from a hot inner flow that undergoes Lense–Thirring precession. The hot flow intercepts seed photons from an outer truncated thin disk. A fraction of the Comptonized photons will illuminate the disk, and the reflected/reprocessed photons will contribute to the observed spectrum. The total spectrum, including disk thermal emission, hot flow Comptonization, and disk reflection, is modeled within the framework of general relativity, taking light bending and gravitational redshift into account. The simulations are performed in the context of the Lense–Thirring precession model for the low-frequency quasi-periodic oscillations, so the inner flow is assumed to precess, leading to periodic modulation of the emitted radiation. In this work, we concentrate on the energy-dependent X-ray variability of the model and, in particular, on the evolution of the variability during the spectral transition from hard to soft state, which is implemented by the decrease of the truncation radius of the outer disk toward the innermost stable circular orbit. In the hard state, where the Comptonizing flow is geometrically thick, the Comptonization is weakly variable, with a fractional variability amplitude of ≤10%. In the soft state, where the Comptonizing flow is cooled down and thus becomes geometrically thin, the fractional variability of the Comptonization is high, increasing with photon energy. The fractional variability of the reflection increases with energy, and the reflection emission for low spin is counterintuitively more variable than that for high spin.

  7. Defining Requirements and Applying Information Modeling for Protecting Enterprise Assets

    Science.gov (United States)

    Fortier, Stephen C.; Volk, Jennifer H.

    The advent of terrorist threats has heightened local, regional, and national governments' interest in emergency response and disaster preparedness. The threat of natural disasters also challenges emergency responders to act swiftly and in a coordinated fashion. When a disaster occurs, an ad hoc coalition of pre-planned groups usually forms to respond to the incident. History has shown that these “system of systems” do not interoperate very well. Communications between fire, police and rescue components either do not work or are inefficient. Government agencies, non-governmental organizations (NGOs), and private industry use a wide array of software platforms for managing data about emergency conditions, resources and response activities. Most of these are stand-alone systems with very limited capability for data sharing with other agencies or other levels of government. Information technology advances have facilitated the movement towards an integrated and coordinated approach to emergency management. Other communication mechanisms, such as video teleconferencing, digital television and radio broadcasting, are being utilized to combat the challenges of emergency information exchange. Recent disasters, such as Hurricane Katrina and the tsunami in Indonesia, have illuminated the weaknesses in emergency response. This paper will discuss the need for defining requirements for components of ad hoc coalitions which are formed to respond to disasters. A goal of our effort was to develop a proof of concept that applying information modeling to the business processes used to protect and mitigate potential loss of an enterprise was feasible. These activities would be modeled both pre- and post-incident.

  8. Extraction Methods, Variability Encountered in

    NARCIS (Netherlands)

    Bodelier, P.L.E.; Nelson, K.E.

    2014-01-01

    Synonyms Bias in DNA extraction methods; Variation in DNA extraction methods Definition The variability in extraction methods is defined as differences in the quality and quantity of DNA observed using various extraction protocols, leading to differences in the outcome of microbial community composition

  9. Defining human mesenchymal stem cell efficacy in vivo

    Directory of Open Access Journals (Sweden)

    Lennon Donald P

    2010-10-01

    Full Text Available Abstract Allogeneic human mesenchymal stem cells (hMSCs) can suppress graft versus host disease (GvHD) and have profound anti-inflammatory and regenerative capacity in stroke, infarct, spinal cord injury, meniscus regeneration, tendinitis, acute renal failure, and heart disease in human and animal models of disease. There is significant clinical variability in hMSC efficacy and the ultimate response in vivo. The challenge in hMSC-based therapy is defining the efficacy of hMSCs in vivo. Models which may provide insight into hMSC bioactivity in vivo would provide a means to distinguish hMSCs for clinical utility. hMSC function has been described as both regenerative and trophic through the production of bioactive factors. The regenerative component involves the multi-potentiality of hMSC progenitor differentiation. The secreted factors generated by the hMSCs are milieu- and injury-specific, providing unique niches for responses in vivo. These bioactive factors are anti-scarring, angiogenic, and anti-apoptotic, as well as regenerative. Further, from an immunological standpoint, hMSCs can avoid the host immune response, permitting xenogeneic applications. To study the in vivo immuno-regulatory effectiveness of hMSCs, we used the ovalbumin challenge model of acute asthma. This is a quick, 3-week in vivo pulmonary inflammation model with readily accessible ways of measuring the effectiveness of hMSCs. Our data show that there is a direct correlation between the traditional ceramic cube score and hMSC attenuation of cellular recruitment due to ovalbumin challenge. The results from these studies verify the in vivo immuno-modulatory effectiveness of hMSCs and support the potential use of the ovalbumin model as an in vivo model of hMSC potency and efficacy. Our data also support future directions toward exploring hMSCs as an alternative therapeutic for the treatment of airway inflammation associated with asthma.

  10. Biomass Scenario Model Documentation: Data and References

    Energy Technology Data Exchange (ETDEWEB)

    Lin, Y.; Newes, E.; Bush, B.; Peterson, S.; Stright, D.

    2013-05-01

    The Biomass Scenario Model (BSM) is a system dynamics model that represents the entire biomass-to-biofuels supply chain, from feedstock to fuel use. The BSM is a complex model that has been used for extensive analyses; the model and its results can be better understood if input data used for initialization and calibration are well-characterized. It has been carefully validated and calibrated against the available data, with data gaps filled in using expert opinion and internally consistent assumed values. Most of the main data sources that feed into the model are recognized as baseline values by the industry. This report documents data sources and references in Version 2 of the BSM (BSM2), which only contains the ethanol pathway, although subsequent versions of the BSM contain multiple conversion pathways. The BSM2 contains over 12,000 total input values, with 506 distinct variables. Many of the variables are opportunities for the user to define scenarios, while others are simply used to initialize a stock, such as the initial number of biorefineries. However, around 35% of the distinct variables are defined by external sources, such as models or reports. The focus of this report is to provide insight into which sources are most influential in each area of the supply chain.

  11. Intraclass Correlation Coefficients in Hierarchical Designs: Evaluation Using Latent Variable Modeling

    Science.gov (United States)

    Raykov, Tenko

    2011-01-01

    Interval estimation of intraclass correlation coefficients in hierarchical designs is discussed within a latent variable modeling framework. A method accomplishing this aim is outlined, which is applicable in two-level studies where participants (or generally lower-order units) are clustered within higher-order units. The procedure can also be…

  12. Analysis of Parking Reliability Guidance of Urban Parking Variable Message Sign System

    OpenAIRE

    Zhenyu Mei; Ye Tian; Dongping Li

    2012-01-01

    Operators of parking guidance and information systems (PGIS) often encounter difficulty in determining when and how to provide reliable car park availability information to drivers. Reliability has become a key factor to ensure the benefits of urban PGIS. The present paper is the first to define the guiding parking reliability of urban parking variable message signs (VMSs). By analyzing the parking choice under guiding and optional parking lots, a guiding parking reliability model was constru...

  13. Optimal speech motor control and token-to-token variability: a Bayesian modeling approach.

    Science.gov (United States)

    Patri, Jean-François; Diard, Julien; Perrier, Pascal

    2015-12-01

    The remarkable capacity of the speech motor system to adapt to various speech conditions is due to an excess of degrees of freedom, which enables producing similar acoustical properties with different sets of control strategies. To explain how the central nervous system selects one of the possible strategies, a common approach, in line with optimal motor control theories, is to model speech motor planning as the solution of an optimality problem based on cost functions. Despite the success of this approach, one of its drawbacks is the intrinsic contradiction between the concept of optimality and the observed experimental intra-speaker token-to-token variability. The present paper proposes an alternative approach by formulating feedforward optimal control in a probabilistic Bayesian modeling framework. This is illustrated by controlling a biomechanical model of the vocal tract for speech production and by comparing it with an existing optimal control model (GEPPETO). The essential elements of this optimal control model are presented first. From them the Bayesian model is constructed in a progressive way. Performance of the Bayesian model is evaluated based on computer simulations and compared to the optimal control model. This approach is shown to be appropriate for solving the speech planning problem while accounting for variability in a principled way.

  14. An artificial pancreas provided a novel model of blood glucose level variability in beagles.

    Science.gov (United States)

    Munekage, Masaya; Yatabe, Tomoaki; Kitagawa, Hiroyuki; Takezaki, Yuka; Tamura, Takahiko; Namikawa, Tsutomu; Hanazaki, Kazuhiro

    2015-12-01

    Although the effects of blood glucose level variability on prognosis have gained increasing attention, it is unclear whether it is blood glucose level variability itself, or the manifestation of pathological conditions, that worsens prognosis. Moreover, no variability models of perioperative blood glucose levels have previously been reported. The aim of this study is to establish a novel variability model of blood glucose concentration using an artificial pancreas. We maintained six healthy male beagles. After anesthesia induction, a 20-G venous catheter was inserted in the right femoral vein and an artificial pancreas (STG-22, Nikkiso Co. Ltd., Tokyo, Japan) was connected for continuous blood glucose monitoring and glucose management. After achieving muscle relaxation, total pancreatectomy was performed. After 1 h of stabilization, automatic blood glucose control was initiated using the artificial pancreas. The blood glucose level was varied for 8 h, alternating between the target blood glucose values of 170 and 70 mg/dL. Eight hours later, the experiment was concluded. Total pancreatectomy was performed in 62 ± 13 min. Blood glucose swings were achieved 9.8 ± 2.3 times. The average blood glucose level was 128.1 ± 5.1 mg/dL with an SD of 44.6 ± 3.9 mg/dL. The potassium levels after stabilization and at the end of the experiment were 3.5 ± 0.3 and 3.1 ± 0.5 mmol/L, respectively. In conclusion, the results of the present study demonstrated that an artificial pancreas contributed to the establishment of a novel variability model of blood glucose levels in beagles.

  15. Transient modelling of a natural circulation loop under variable pressure

    International Nuclear Information System (INIS)

    Vianna, Andre L.B.; Faccini, Jose L.H.; Su, Jian; Instituto de Engenharia Nuclear

    2017-01-01

    The objective of the present work is to model the transient operation of a natural circulation loop, which is one-tenth the height of a typical Passive Residual Heat Removal (PRHR) system of an Advanced Pressurized Water Nuclear Reactor and was designed to meet the single- and two-phase flow similarity criteria with respect to it. The loop consists of a core barrel with electrically heated rods, upper and lower plena interconnected by hot and cold pipe legs to a seven-tube shell heat exchanger of countercurrent design, and an expansion tank with a descending tube. A long transient characterized the loop operation, during which a phenomenon of self-pressurization, without self-regulation of the pressure, was experimentally observed. This represented a unique situation, named natural circulation under variable pressure (NCVP). The self-pressurization originated in the air trapped in the expansion tank, which was compressed by the expansion of the loop water as it heated up during each experiment. The mathematical model, initially oriented to single-phase flow, included the heat capacity of the structure and employed a cubic polynomial approximation for the density in the buoyancy term calculation. The heater was modelled taking into account the different heat capacities of the heating elements and the heater walls. The heat exchanger model considered the heating of the coolant during the heat exchange process. The self-pressurization was modelled as an isentropic compression of a perfect gas. The whole model was implemented computationally via a set of finite-difference equations, with an explicit, marching-type algorithm for the time discretization and an upwind scheme for the space discretization. The computational program was implemented in MATLAB. Several experiments were carried out in the natural circulation loop, having the coolant flow rate and the heating power as control parameters. The variables used in the

  16. Transient modelling of a natural circulation loop under variable pressure

    Energy Technology Data Exchange (ETDEWEB)

    Vianna, Andre L.B.; Faccini, Jose L.H.; Su, Jian, E-mail: avianna@nuclear.ufrj.br, E-mail: sujian@nuclear.ufrj.br, E-mail: faccini@ien.gov.br [Coordenacao de Pos-Graduacao e Pesquisa de Engenharia (COPPE/UFRJ), Rio de Janeiro, RJ (Brazil). Programa de Engenharia Nuclear; Instituto de Engenharia Nuclear (IEN/CNEN-RJ), Rio de Janeiro, RJ (Brazil). Lab. de Termo-Hidraulica Experimental

    2017-07-01

    The objective of the present work is to model the transient operation of a natural circulation loop, which is a one-tenth-height scale model of a typical Passive Residual Heat Removal system (PRHR) of an Advanced Pressurized Water Nuclear Reactor and was designed to meet the corresponding single- and two-phase flow similarity criteria. The loop consists of a core barrel with electrically heated rods, upper and lower plena interconnected by hot and cold pipe legs to a seven-tube shell heat exchanger of countercurrent design, and an expansion tank with a descending tube. The loop operation was characterized by a long transient, during which a phenomenon of self-pressurization, without self-regulation of the pressure, was experimentally observed. This represented a unique situation, named natural circulation under variable pressure (NCVP). The self-pressurization originated from the air trapped in the expansion tank, which was compressed by the thermal expansion of the loop water as it heated up during each experiment. The mathematical model, initially oriented to single-phase flow, included the heat capacity of the structure and employed a cubic polynomial approximation for the density in the buoyancy term calculation. The heater was modelled taking into account the different heat capacities of the heating elements and the heater walls. The heat exchanger model accounted for the heating of the coolant during the heat exchange process. The self-pressurization was modelled as an isentropic compression of a perfect gas. The whole model was implemented computationally as a set of finite difference equations, solved by an explicit marching scheme in time with an upwind scheme for the spatial discretization. The computational program was implemented in MATLAB. Several experiments were carried out in the natural circulation loop, with the coolant flow rate and the heating power as control parameters. The variables used in the
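    The abstract's numerical scheme can be illustrated compactly. The sketch below is not the authors' MATLAB program; it only shows, under assumed values for velocity, grid spacing, and boundary temperature, what one explicit, time-marching step with an upwind spatial difference looks like for a 1-D advected temperature field:

```python
import numpy as np

# Illustrative sketch (not the authors' code): one explicit, marching
# time step of a 1-D advected temperature field with a first-order
# upwind scheme, the discretization style described for the loop model.
def upwind_step(T, u, dx, dt, T_inlet):
    """Advance temperature T (1-D array) by one time step.

    u > 0 is the loop flow velocity (advection from left to right);
    T_inlet is the boundary value entering the domain.
    """
    c = u * dt / dx  # Courant number; the explicit scheme needs c <= 1
    assert c <= 1.0, "explicit upwind scheme is unstable for CFL > 1"
    T_new = T.copy()
    T_new[1:] = T[1:] - c * (T[1:] - T[:-1])  # upwind difference in space
    T_new[0] = T_inlet                        # inflow boundary condition
    return T_new

# Example: transport a warm front along a 1 m pipe section.
T = np.full(100, 300.0)  # K
for _ in range(200):
    T = upwind_step(T, u=0.05, dx=0.01, dt=0.1, T_inlet=320.0)
```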

  17. Polychotomization of continuous variables in regression models based on the overall C index

    Directory of Open Access Journals (Sweden)

    Bax Leon

    2006-12-01

    Full Text Available Abstract Background When developing multivariable regression models for diagnosis or prognosis, continuous independent variables can be categorized to make a prediction table instead of a prediction formula. Although many methods have been proposed to dichotomize prognostic variables, to date there has been no integrated method for polychotomization. The latter is necessary when dichotomization results in too much loss of information or when central values refer to normal states and more dispersed values refer to less preferable states, a situation that is not unusual in medical settings (e.g. body temperature, blood pressure). The goal of our study was to develop a theoretical and practical method for polychotomization. Methods We used the overall discrimination index C, introduced by Harrell, as a measure of the predictive ability of an independent regressor variable and derived a method for polychotomization mathematically. Since the naïve application of our method, like some existing methods, gives rise to positive bias, we developed a parametric method that minimizes this bias and assessed its performance by the use of Monte Carlo simulation. Results The overall C is closely related to the area under the ROC curve, and the produced di(poly)chotomized variable's predictive performance is comparable to that of the original continuous variable. The simulation shows that the parametric method is essentially unbiased for both the estimates of performance and the cutoff points. Application of our method to the predictor variables of a previous study on rhabdomyolysis shows that it can be used to make probability profile tables that are applicable to the diagnosis or prognosis of individual patient status. Conclusion We propose a polychotomization (including dichotomization) method for independent continuous variables in regression models based on the overall discrimination index C and clarify its meaning mathematically. To avoid positive bias in
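    For readers unfamiliar with the overall C, the following minimal sketch (illustrative, not the authors' algorithm) computes Harrell's concordance index for a single continuous predictor against a binary outcome, the quantity whose loss the proposed cutoff search is designed to limit:

```python
import itertools

def overall_c(predictor, outcome):
    """Harrell's overall concordance index C for a single predictor.

    Fraction of usable pairs (pairs with different outcomes) in which
    the subject with the event has the higher predictor value; ties in
    the predictor count as 1/2.
    """
    concordant, usable = 0.0, 0
    for (x_i, y_i), (x_j, y_j) in itertools.combinations(zip(predictor, outcome), 2):
        if y_i == y_j:
            continue                    # pair carries no outcome information
        usable += 1
        hi = x_i if y_i > y_j else x_j  # predictor value of the subject with the event
        lo = x_j if y_i > y_j else x_i
        concordant += 1.0 if hi > lo else 0.5 if hi == lo else 0.0
    return concordant / usable

# For a binary outcome this equals the area under the ROC curve, which is
# why the paper relates the overall C to the AUC.
print(overall_c([1, 2, 3, 4, 5], [0, 0, 1, 0, 1]))  # 0.833...
```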

  18. Biological variability in biomechanical engineering research: Significance and meta-analysis of current modeling practices.

    Science.gov (United States)

    Cook, Douglas; Julias, Margaret; Nauman, Eric

    2014-04-11

    Biological systems are characterized by high levels of variability, which can affect the results of biomechanical analyses. As a review of this topic, we first surveyed levels of variation in materials relevant to biomechanics, and compared these values to standard engineered materials. As expected, we found significantly higher levels of variation in biological materials. A meta-analysis was then performed based on thorough reviews of 60 research studies from the field of biomechanics to assess the methods and manner in which biological variation is currently handled in our field. The results of our meta-analysis revealed interesting trends in modeling practices, and suggest a need for more biomechanical studies that fully incorporate biological variation in biomechanical models and analyses. Finally, we provide some case-study examples of how biological variability may provide valuable insights or lead to surprising results. The purpose of this study is to promote the advancement of biomechanics research by encouraging broader treatment of biological variability in biomechanical modeling. Copyright © 2014 Elsevier Ltd. All rights reserved.

  19. Study of Variable Turbulent Prandtl Number Model for Heat Transfer to Supercritical Fluids in Vertical Tubes

    Science.gov (United States)

    Tian, Ran; Dai, Xiaoye; Wang, Dabiao; Shi, Lin

    2018-06-01

    In order to improve the prediction performance of numerical simulations for heat transfer to supercritical pressure fluids, a variable turbulent Prandtl number (Prt) model for vertical upward flow at supercritical pressures was developed in this study. The effects of Prt on the numerical simulation were analyzed, especially for heat transfer deterioration conditions. Based on the analyses, the turbulent Prandtl number was modeled as a function of the turbulent viscosity ratio and the molecular Prandtl number. The model was evaluated using experimental heat transfer data for CO2, water, and Freon. The wall temperatures, including the heat transfer deterioration cases, were more accurately predicted by this model than by traditional numerical calculations with a constant Prt. By analyzing the predicted results with and without the variable Prt model, it was found that the velocity distribution and turbulent mixing characteristics predicted with the variable Prt model are quite different from those predicted with a constant Prt. When heat transfer deterioration occurs, the radial velocity profile deviates from the log-law profile and the restrained turbulent mixing then leads to the deteriorated heat transfer.
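    The abstract does not give the paper's correlation, so as a stand-in illustration the sketch below implements the well-known Kays correlation, which takes the same two inputs (turbulent viscosity ratio and molecular Prandtl number); it conveys the qualitative behavior of a variable-Prt closure, not the model developed in this study:

```python
import numpy as np

def prt_kays(mu_t_over_mu, pr):
    """Kays (1994) correlation: Prt = 0.85 + 0.7 / Pe_t, with the
    turbulent Peclet number Pe_t = (mu_t/mu) * Pr.

    Shown only as an illustration of a variable-Prt closure with the
    same inputs as the paper's model; it is NOT the correlation the
    paper develops.
    """
    pe_t = np.maximum(mu_t_over_mu * pr, 1e-12)  # avoid division by zero
    return 0.85 + 0.7 / pe_t

# Prt rises near the wall (small mu_t/mu) and approaches 0.85 in the core.
for ratio in (0.1, 1.0, 10.0, 100.0):
    print(ratio, prt_kays(ratio, pr=2.0))
```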

  20. Prediction of autoignition in a lifted methane/air flame using an unsteady flamelet/progress variable model

    Energy Technology Data Exchange (ETDEWEB)

    Ihme, Matthias; See, Yee Chee [Department of Aerospace Engineering, University of Michigan, Ann Arbor, MI 48109 (United States)

    2010-10-15

    An unsteady flamelet/progress variable (UFPV) model has been developed for the prediction of autoignition in turbulent lifted flames. The model is a consistent extension to the steady flamelet/progress variable (SFPV) approach, and employs an unsteady flamelet formulation to describe the transient evolution of all thermochemical quantities during the flame ignition process. In this UFPV model, all thermochemical quantities are parameterized by mixture fraction, reaction progress parameter, and stoichiometric scalar dissipation rate, eliminating the explicit dependence on a flamelet time scale. An a priori study is performed to analyze critical modeling assumptions that are associated with the population of the flamelet state space. For application to LES, the UFPV model is combined with a presumed PDF closure to account for subgrid contributions of mixture fraction and reaction progress variable. The model was applied in LES of a lifted methane/air flame. Additional calculations were performed to quantify the interaction between turbulence and chemistry a posteriori. Simulation results obtained from these calculations are compared with experimental data. Compared to the SFPV results, the unsteady flamelet/progress variable model captures the autoignition process, and good agreement with measurements is obtained for mixture fraction, temperature, and species mass fractions. From the analysis of scatter data and mixture fraction-conditional results it is shown that the turbulence/chemistry interaction delays the ignition process towards lower values of scalar dissipation rate, and a significantly larger region in the flamelet state space is occupied during the ignition process. (author)
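    The presumed-PDF step can be illustrated with a minimal sketch: a flamelet quantity phi(Z) is averaged over an assumed beta distribution of mixture fraction with given mean and variance. The flamelet profile and moments below are made-up placeholders, and the progress-variable and scalar-dissipation dimensions of the actual UFPV tables are omitted:

```python
import numpy as np
from scipy.stats import beta as beta_dist

def presumed_pdf_mean(phi_of_z, z_mean, z_var):
    """Average a flamelet quantity phi(Z) over a presumed beta PDF of
    mixture fraction Z with the given mean and variance. This is only
    the mixture-fraction part of an FPV-type presumed-PDF closure; the
    UFPV tables additionally carry a reaction progress parameter and a
    stoichiometric scalar dissipation rate, omitted here for brevity.
    """
    # Beta-distribution parameters recovered from the first two moments.
    factor = z_mean * (1.0 - z_mean) / max(z_var, 1e-12) - 1.0
    a, b = z_mean * factor, (1.0 - z_mean) * factor
    z = np.linspace(1e-6, 1.0 - 1e-6, 2001)
    pdf = beta_dist.pdf(z, a, b)
    dz = z[1] - z[0]
    return np.sum(phi_of_z(z) * pdf) * dz / (np.sum(pdf) * dz)

# Toy flamelet profile: temperature peaking near stoichiometric Z = 0.055.
T_flamelet = lambda z: 300.0 + 1700.0 * np.exp(-((z - 0.055) / 0.05) ** 2)
print(presumed_pdf_mean(T_flamelet, z_mean=0.10, z_var=0.005))
```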

  1. Incorporation of expert variability into breast cancer treatment recommendation in designing clinical protocol guided fuzzy rule system models.

    Science.gov (United States)

    Garibaldi, Jonathan M; Zhou, Shang-Ming; Wang, Xiao-Ying; John, Robert I; Ellis, Ian O

    2012-06-01

    It has often been demonstrated that clinicians exhibit both inter-expert and intra-expert variability when making difficult decisions. In contrast, the vast majority of computerized models that aim to provide automated support for such decisions do not explicitly recognize or replicate this variability. Furthermore, the perfect consistency of computerized models is often presented as a de facto benefit. In this paper, we describe a novel approach to incorporating variability within a fuzzy inference system using non-stationary fuzzy sets in order to replicate human variability. We apply our approach to a decision problem concerning the recommendation of post-operative breast cancer treatment; specifically, whether or not to administer chemotherapy based on the assessment of five clinical variables: NPI (the Nottingham Prognostic Index), estrogen receptor status, vascular invasion, age, and lymph node status. In doing so, we explore whether such explicit modeling of variability provides any performance advantage over a more conventional fuzzy approach, when tested on a set of 1310 unselected cases collected over a fourteen-year period at the Nottingham University Hospitals NHS Trust, UK. The experimental results show that the standard fuzzy inference system (which does not model variability) achieves overall agreement with clinical practice of around 84.6% (95% CI: 84.1-84.9%), while the non-stationary fuzzy model significantly increases performance to around 88.1% (95% CI: 88.0-88.2%), suggesting potential benefits of explicitly modeling variability in fuzzy systems in any application domain. Copyright © 2012 Elsevier Inc. All rights reserved.
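    A minimal sketch of the core idea, assuming a Gaussian membership function and an illustrative perturbation size (not the paper's actual sets or parameters): a non-stationary fuzzy set re-draws a small random shift of its center on every evaluation, so repeated inferences vary the way a clinician's repeated judgments do:

```python
import numpy as np

rng = np.random.default_rng(0)

def nonstationary_membership(x, center, width, sigma_shift):
    """Sketch of a non-stationary Gaussian fuzzy set: each evaluation
    perturbs the set's center, so repeated inferences on the same input
    yield slightly different membership grades, mimicking intra-expert
    variability. All parameter values here are illustrative.
    """
    c = center + rng.normal(0.0, sigma_shift)  # fresh perturbation per evaluation
    return np.exp(-0.5 * ((x - c) / width) ** 2)

# Repeated evaluations of a hypothetical "NPI is high" set for the same
# patient now return a distribution of grades instead of one fixed value.
grades = [nonstationary_membership(5.4, center=5.0, width=0.8, sigma_shift=0.2)
          for _ in range(5)]
print(grades)
```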

  2. Context Tree Estimation in Variable Length Hidden Markov Models

    OpenAIRE

    Dumont, Thierry

    2011-01-01

    We address the issue of context tree estimation in variable length hidden Markov models. We propose an estimator of the context tree of the hidden Markov process which needs no prior upper bound on the depth of the context tree. We prove that the estimator is strongly consistent. This uses information-theoretic mixture inequalities in the spirit of Lorenzo Finesso (Consistent estimation of the order for Markov and hidden Markov chains, 1990) and E. Gassiat and S. Boucheron (Optimal error exp...

  3. Evaluation of maillard reaction variables and their effect on heterocyclic amine formation in chemical model systems.

    Science.gov (United States)

    Dennis, Cara; Karim, Faris; Smith, J Scott

    2015-02-01

    Heterocyclic amines (HCAs), highly mutagenic and potentially carcinogenic by-products, form during Maillard browning reactions, specifically in muscle-rich foods. Chemical model systems allow examination of in vitro formation of HCAs while eliminating the complex matrices of meat. Limited research has evaluated the effects of Maillard reaction parameters on HCA formation. Therefore, 4 essential Maillard variables (precursor molar concentrations, water amount, sugar type, and sugar amount) were evaluated to optimize a model system for the study of 4 HCAs: 2-amino-3-methylimidazo[4,5-f]quinoline, 2-amino-3-methylimidazo[4,5-f]quinoxaline, 2-amino-3,8-dimethylimidazo[4,5-f]quinoxaline, and 2-amino-3,4,8-trimethylimidazo[4,5-f]quinoxaline. Model systems were dissolved in diethylene glycol, heated at 175 °C for 40 min, and separated using reversed-phase liquid chromatography. To define the model system, precursor amounts (threonine and creatinine) were adjusted in molar increments (0.2/0.2, 0.4/0.4, 0.6/0.6, and 0.8/0.8 mmol) and water amounts by percentage (0%, 5%, 10%, and 15%). Sugars (lactose, glucose, galactose, and fructose) were evaluated in several molar amounts proportional to threonine and creatinine (quarter, half, equi, and double). The precursor levels and amounts of sugar had significantly different effects (P < 0.05) on total HCA formation, with 0.6/0.6/1.2 mmol producing higher levels. Water concentration and sugar type also had a significant effect (P < 0.05), with 5% water and lactose producing higher total HCA amounts. A model system containing threonine (0.6 mmol), creatinine (0.6 mmol), and glucose (1.2 mmol) with 15% water was determined to be the optimal model system, glucose and 15% water being a better representation of meat systems. © 2015 Institute of Food Technologists®

  4. Rainfall variability over southern Africa: an overview of current research using satellite and climate model data

    Science.gov (United States)

    Williams, C.; Kniveton, D.; Layberry, R.

    2009-04-01

    It is increasingly accepted that any possible climate change will not only have an influence on mean climate but may also significantly alter climatic variability. A change in the distribution and magnitude of extreme rainfall events (associated with changing variability), such as droughts or flooding, may have a far greater impact on human and natural systems than a changing mean. This issue is of particular importance for environmentally vulnerable regions such as southern Africa. The subcontinent is considered especially vulnerable to and ill-equipped (in terms of adaptation) for extreme events, due to a number of factors including extensive poverty, famine, disease and political instability. Rainfall variability is a function of scale, so high spatial and temporal resolution data are preferred to identify extreme events and accurately predict future variability. In this research, satellite-derived rainfall data are used as a basis for undertaking model experiments using a state-of-the-art climate model, run at both high and low spatial resolution. Once the model's ability to reproduce extremes has been assessed, idealised regions of sea surface temperature (SST) anomalies are used to force the model, with the overall aim of investigating the ways in which SST anomalies influence rainfall extremes over southern Africa. In this paper, a brief overview is given of the authors' research to date, pertaining to southern African rainfall. This covers (i) a description of present-day rainfall variability over southern Africa; (ii) a comparison of model simulated daily rainfall with the satellite-derived dataset; (iii) results from sensitivity testing of the model's domain size; and (iv) results from the idealised SST experiments.

  5. Electromagnetic interference modeling and suppression techniques in variable-frequency drive systems

    Science.gov (United States)

    Yang, Le; Wang, Shuo; Feng, Jianghua

    2017-11-01

    Electromagnetic interference (EMI) causes electromechanical damage to the motors and degrades the reliability of variable-frequency drive (VFD) systems. Unlike fundamental frequency components in motor drive systems, high-frequency EMI noise, coupled through the parasitic parameters of the system, is difficult to analyze and reduce. In this article, EMI modeling techniques for different functional units in a VFD system, including induction motors, motor bearings, and rectifier-inverters, are reviewed and evaluated in terms of applied frequency range, model parameterization, and model accuracy. The EMI models for the motors are categorized based on modeling techniques and model topologies. Motor bearing and shaft models are also reviewed, and techniques that are used to eliminate bearing current are evaluated. Modeling techniques for conventional rectifier-inverter systems are also summarized. EMI noise suppression techniques, including passive filters, Wheatstone bridge balance, active filters, and optimized modulation, are reviewed and compared based on the VFD system models.

  6. A Generalized Stability Analysis of the AMOC in Earth System Models: Implication for Decadal Variability and Abrupt Climate Change

    Energy Technology Data Exchange (ETDEWEB)

    Fedorov, Alexey V. [Yale Univ., New Haven, CT (United States)

    2015-01-14

    The central goal of this research project was to understand the mechanisms of decadal and multi-decadal variability of the Atlantic Meridional Overturning Circulation (AMOC) as related to climate variability and abrupt climate change within a hierarchy of climate models ranging from realistic ocean models to comprehensive Earth system models. Generalized Stability Analysis, a method that quantifies the transient and asymptotic growth of perturbations in the system, is one of the main approaches used throughout this project. The topics we have explored range from physical mechanisms that control AMOC variability to the factors that determine AMOC predictability in the Earth system models, to the stability and variability of the AMOC in past climates.

  7. Comprehensive Modeling and Analysis of Rotorcraft Variable Speed Propulsion System With Coupled Engine/Transmission/Rotor Dynamics

    Science.gov (United States)

    DeSmidt, Hans A.; Smith, Edward C.; Bill, Robert C.; Wang, Kon-Well

    2013-01-01

    This project develops comprehensive modeling and simulation tools for the analysis of variable rotor speed helicopter propulsion system dynamics. The Comprehensive Variable-Speed Rotorcraft Propulsion Modeling (CVSRPM) tool developed in this research is used to investigate coupled rotor/engine/fuel control/gearbox/shaft/clutch/flight control system dynamic interactions for several variable rotor speed mission scenarios. In this investigation, a prototypical two-speed Dual-Clutch Transmission (DCT) is proposed and designed to achieve 50 percent rotor speed variation. The comprehensive modeling tool developed in this study is utilized to analyze the two-speed shift response of both a conventional single-rotor helicopter and a tiltrotor drive system. In the tiltrotor system, both a Parallel Shift Control (PSC) strategy and a Sequential Shift Control (SSC) strategy for constant and variable forward speed mission profiles are analyzed. Under the PSC strategy, selecting the clutch shift-rate results in a design tradeoff between transient engine surge margins and clutch frictional power dissipation. In the case of SSC, clutch power dissipation is drastically reduced in exchange for the necessity of disengaging one engine at a time, which requires a multi-DCT drive system topology. In addition to comprehensive simulations, several sections are dedicated to detailed analysis of driveline subsystem components under variable speed operation. In particular, an aeroelastic simulation of a stiff in-plane rotor using nonlinear quasi-steady blade element theory was conducted to investigate variable speed rotor dynamics. It was found that 2/rev and 4/rev flap and lag vibrations were significant during resonance crossings, with 4/rev lagwise loads being directly transferred into drive-system torque disturbances. To capture the clutch engagement dynamics, a nonlinear stick-slip clutch torque model is developed. Also, a transient gas-turbine engine model based on first principles mean

  8. Seychelles Dome variability in a high resolution ocean model

    Science.gov (United States)

    Nyadjro, E. S.; Jensen, T.; Richman, J. G.; Shriver, J. F.

    2016-02-01

    The Seychelles-Chagos Thermocline Ridge (SCTR; 5°S-10°S, 50°E-80°E) in the tropical Southwest Indian Ocean (SWIO) has been recognized as a region of prominence with regard to climate variability in the Indian Ocean. Convective activities in this region have regional consequences, as they affect the socio-economic livelihood of people, especially in the countries along the Indian Ocean rim. The SCTR is characterized by a quasi-permanent upwelling that is often associated with thermocline shoaling. This upwelling affects sea surface temperature (SST) variability. We present results on the variability and dynamics of the SCTR as simulated by the 1/12° high-resolution HYbrid Coordinate Ocean Model (HYCOM). It is observed that, locally, wind stress affects SST via Ekman pumping of cooler subsurface waters, mixing, and anomalous zonal advection. Remotely, wind stress curl in the eastern equatorial Indian Ocean generates westward-propagating Rossby waves that impact the depth of the thermocline, which in turn impacts SST variability in the SCTR region. The variability of the contributions of these processes, especially with regard to the Indian Ocean Dipole (IOD), is further examined. In a typical positive IOD (PIOD) year, the net vertical velocity in the SCTR is negative year-round, as easterlies along the region are intensified, leading to a strong positive curl. This vertical velocity is caused mainly by anomalous local Ekman downwelling (with a peak during September-November), a direct opposite of the climatological scenario, in which local Ekman pumping is positive (upwelling favorable) year-round. The anomalous remote contribution to the vertical velocity changes is minimal, especially during the developing and peak stages of PIOD events. In a typical negative IOD (NIOD) year, anomalous vertical velocity is positive almost year-round, with peaks in May and October. The remote contribution is positive, in contrast to the climatology and most of the PIOD years.

  9. Variability in prostate and seminal vesicle delineations defined on magnetic resonance images, a multi-observer, -center and -sequence study

    DEFF Research Database (Denmark)

    Nyholm, Tufve; Jonsson, Joakim; Söderström, Karin

    2013-01-01

    BACKGROUND: The use of magnetic resonance (MR) imaging as a part of preparation for radiotherapy is increasing. For delineation of the prostate, several publications have shown decreased delineation variability using MR compared to computed tomography (CT). The purpose of the present work... Two physicians from each center delineated the prostate and the seminal vesicles on each of the 25 image sets. The variability between the delineations was analyzed with respect to overall, intra- and inter-physician variability, and the dependence between variability and the origin of the MR images. The variability was... and approximately equal for the prostate and seminal vesicles. Large differences in variability were observed for individual patients, and also for individual imaging sequences used at the different centers. There was, however, no indication of decreased variability with higher field strength. CONCLUSION: The overall...

  10. Benchmarking Variable Selection in QSAR.

    Science.gov (United States)

    Eklund, Martin; Norinder, Ulf; Boyer, Scott; Carlsson, Lars

    2012-02-01

    Variable selection is important in QSAR modeling since it can improve model performance and transparency, as well as reduce the computational cost of model fitting and predictions. Which variable selection methods perform well in QSAR settings is largely unknown. To address this question we, in a total of 1728 benchmarking experiments, rigorously investigated how eight variable selection methods affect the predictive performance and transparency of random forest models fitted to seven QSAR datasets covering different endpoints, descriptor sets, types of response variables, and numbers of chemical compounds. The results show that univariate variable selection methods are suboptimal and that the number of variables in the benchmarked datasets can be reduced by about 60% without significant loss in model performance when using multivariate adaptive regression splines (MARS) and forward selection. Copyright © 2012 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
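    As an illustration of one of the better-performing strategies, the sketch below wraps greedy forward selection around a scikit-learn random forest, scoring candidate subsets by cross-validation; the estimator settings and stopping rule are assumptions, not the benchmarked pipelines:

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import cross_val_score

def forward_selection(X, y, max_vars=10, cv=5, seed=0):
    """Greedy forward selection wrapped around a random forest.
    Returns the selected column indices in order of inclusion.
    (Illustrative sketch; the benchmarked pipelines differ in detail.)
    """
    remaining, selected, best_score = list(range(X.shape[1])), [], -np.inf
    while remaining and len(selected) < max_vars:
        scores = []
        for j in remaining:
            cols = selected + [j]
            model = RandomForestRegressor(n_estimators=100, random_state=seed)
            s = cross_val_score(model, X[:, cols], y, cv=cv).mean()
            scores.append((s, j))
        s_best, j_best = max(scores)
        if s_best <= best_score:  # stop when no candidate improves the CV score
            break
        best_score = s_best
        selected.append(j_best)
        remaining.remove(j_best)
    return selected

# Usage: X is an (n_compounds x n_descriptors) matrix, y the endpoint values.
# selected = forward_selection(X, y)
```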

  11. Diagnostic Value of Selected Echocardiographic Variables to Identify Pulmonary Hypertension in Dogs with Myxomatous Mitral Valve Disease.

    Science.gov (United States)

    Tidholm, A; Höglund, K; Häggström, J; Ljungvall, I

    2015-01-01

    Pulmonary hypertension (PH) is commonly associated with myxomatous mitral valve disease (MMVD). Because some dogs with PH present without measurable tricuspid regurgitation (TR), it would be useful to investigate echocardiographic variables that can identify PH. The objective was to investigate associations between the estimated systolic TR pressure gradient (TRPG) and dog characteristics and selected echocardiographic variables in 156 privately owned dogs, in a prospective observational study of dogs with MMVD and measurable TR. The tricuspid regurgitation pressure gradient was significantly associated with variables modeled as linear terms, including LA/Ao, and with variables modeled as second-order polynomial terms: AT/DT (P = .0039) and LVIDDn. The R2 value for the final model was 0.45, and receiver operating characteristic curve analysis suggested that the model's performance in predicting PH, defined as TRPG of 36, 45, and 55 mmHg, was fair (area under the curve [AUC] = 0.80), good (AUC = 0.86), and excellent (AUC = 0.92), respectively. In dogs with MMVD, the presence of PH might be suspected with the combination of decreased PA AT/DT, increased RVIDDn and LA/Ao, and a small or great LVIDDn. Copyright © 2015 The Authors. Journal of Veterinary Internal Medicine published by Wiley Periodicals, Inc. on behalf of the American College of Veterinary Internal Medicine.

  12. Stable Graphical Model Estimation with Random Forests for Discrete, Continuous, and Mixed Variables

    OpenAIRE

    Fellinghauer, Bernd; Bühlmann, Peter; Ryffel, Martin; von Rhein, Michael; Reinhardt, Jan D.

    2011-01-01

    A conditional independence graph is a concise representation of pairwise conditional independence among many variables. Graphical Random Forests (GRaFo) are a novel method for estimating pairwise conditional independence relationships among mixed-type, i.e. continuous and discrete, variables. The number of edges is a tuning parameter in any graphical model estimator, and there is no obvious number that constitutes a good choice. Stability Selection helps to choose this parameter with respect to...

  13. Cross-national validation of prognostic models predicting sickness absence and the added value of work environment variables.

    Science.gov (United States)

    Roelen, Corné A M; Stapelfeldt, Christina M; Heymans, Martijn W; van Rhenen, Willem; Labriola, Merete; Nielsen, Claus V; Bültmann, Ute; Jensen, Chris

    2015-06-01

    To validate Dutch prognostic models including age, self-rated health, and prior sickness absence (SA) for their ability to predict high SA in Danish eldercare; the added value of work environment variables to the models' risk discrimination was also investigated. 2,562 municipal eldercare workers (95% women) participated in the Working in Eldercare Survey. Predictor variables were measured by questionnaire at baseline in 2005. Prognostic models were validated for predictions of high (≥30) SA days and high (≥3) SA episodes retrieved from employer records during 1-year follow-up. The accuracy of predictions was assessed by calibration graphs, and the ability of the models to discriminate between high- and low-risk workers was investigated by ROC analysis. The added value of work environment variables was measured with the Integrated Discrimination Improvement (IDI). 1,930 workers had complete data for analysis. The models underestimated the risk of high SA in eldercare workers, and the SA episodes model had to be re-calibrated to the Danish data. Discrimination was practically useful for the re-calibrated SA episodes model, but not for the SA days model. Physical workload improved the SA days model (IDI = 0.40; 95% CI 0.19-0.60) and psychosocial work factors, particularly the quality of leadership (IDI = 0.70; 95% CI 0.53-0.86), improved the SA episodes model. The prognostic model predicting high SA days showed poor performance even after physical workload was added. The prognostic model predicting high SA episodes could be used to identify high-risk workers, especially when psychosocial work factors are added as predictor variables.
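    The IDI used here has a simple closed form: the change in discrimination slope between the extended and the baseline model. A minimal sketch (with toy data, not the study's models):

```python
import numpy as np

def integrated_discrimination_improvement(p_old, p_new, y):
    """Integrated Discrimination Improvement (IDI): the change in the
    discrimination slope (mean predicted risk in cases minus mean
    predicted risk in non-cases) when moving from the baseline model's
    predicted probabilities p_old to the extended model's p_new.
    """
    p_old, p_new, y = map(np.asarray, (p_old, p_new, y))
    slope_old = p_old[y == 1].mean() - p_old[y == 0].mean()
    slope_new = p_new[y == 1].mean() - p_new[y == 0].mean()
    return slope_new - slope_old

# Toy predicted risks for 3 cases (y=1) and 3 non-cases (y=0):
y = np.array([1, 1, 1, 0, 0, 0])
p_old = np.array([0.6, 0.5, 0.4, 0.5, 0.4, 0.3])
p_new = np.array([0.7, 0.6, 0.5, 0.4, 0.3, 0.3])
print(integrated_discrimination_improvement(p_old, p_new, y))  # positive = improvement
```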

  14. Can Ambulatory Blood Pressure Variability Contribute to Individual Cardiovascular Risk Stratification?

    Directory of Open Access Journals (Sweden)

    Annamária Magdás

    2016-01-01

    Full Text Available Objective. The aim of this study is to define the normal range for average real variability (ARV) and to establish whether it can be considered an additional cardiovascular risk factor. Methods. In this observational study, 110 treated hypertensive patients were included and admitted for antihypertensive treatment adjustment. Circadian blood pressure was recorded with validated devices. Blood pressure variability (BPV) was assessed according to the ARV definition. Based on their variability, patients were classified into low, medium, and high variability groups using the fuzzy c-means algorithm. To assess cardiovascular risk, blood samples were collected. Characteristics of the groups were compared by ANOVA tests. Results. Low variability was defined as ARV below 9.8 mmHg (32 patients), medium as 9.8-12.8 mmHg (48 patients), and high variability as above 12.8 mmHg (30 patients). Mean systolic blood pressure was 131.2 ± 16.7, 135.0 ± 12.1, and 141.5 ± 11.4 mmHg in the low, medium, and high variability groups, respectively (p = 0.0113). Glomerular filtration rate was 78.6 ± 29.3, 74.8 ± 26.4, and 62.7 ± 23.2 mL/min/1.73 m2 in the low, medium, and high variability groups, respectively (p = 0.0261). Conclusion. Increased values of average real variability represent an additional cardiovascular risk factor. Therefore, reducing BP variability might be as important as achieving optimal BP levels, but there is a need for further studies to define a widely acceptable threshold value.
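    ARV has a simple definition: the mean absolute difference between successive readings. The sketch below computes it and applies the group boundaries reported in this study; note that the study derived those boundaries with fuzzy c-means clustering, so the plain thresholding here is only for illustration:

```python
import numpy as np

def average_real_variability(bp_readings):
    """Average real variability (ARV): the mean absolute difference
    between successive ambulatory blood pressure readings,
    ARV = (1 / (N - 1)) * sum_{k=1}^{N-1} |BP_{k+1} - BP_k|.
    """
    bp = np.asarray(bp_readings, dtype=float)
    return np.abs(np.diff(bp)).mean()

def variability_group(arv, low_cut=9.8, high_cut=12.8):
    """Classify a patient using the study's reported group boundaries
    (mmHg); the paper derived these with fuzzy c-means clustering."""
    return "low" if arv < low_cut else "medium" if arv <= high_cut else "high"

readings = [128, 135, 122, 140, 131, 138]  # toy systolic readings over 24 h
arv = average_real_variability(readings)
print(arv, variability_group(arv))
```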

  15. Neural correlates of gait variability in people with multiple sclerosis with fall history.

    Science.gov (United States)

    Kalron, Alon; Allali, Gilles; Achiron, Anat

    2018-05-28

    To investigate the association between step time variability and related brain structures according to fall status in people with multiple sclerosis (PwMS). The study included 225 PwMS. Whole-brain MRI was performed with a high-resolution 3.0-Tesla MR scanner, in addition to volumetric analysis based on 3D T1-weighted images using the FreeSurfer image analysis suite. Step time variability was measured by an electronic walkway. Participants were defined as "fallers" (at least two falls during the previous year) or "non-fallers". One hundred and five PwMS were defined as fallers and had a greater step time variability compared to non-fallers (5.6% (S.D.=3.4) vs. 3.4% (S.D.=1.5); p=0.001). MS fallers exhibited reduced volume in the left caudate and both cerebellum hemispheres compared to non-fallers. Using linear regression analysis, no association was found between gait variability and related brain structures in the total cohort and the non-faller group. However, the analysis found an association between left hippocampus and left putamen volumes and step time variability in the faller group (p=0.031 and 0.048, respectively), controlling for total cranial volume, walking speed, disability, age, and gender. Nevertheless, according to the hierarchical regression model, the contribution of these brain measures to predicting gait variability was relatively small compared to walking speed. An association between low left hippocampal and putamen volumes and step time variability was found in PwMS with a history of falls, suggesting that brain structural characteristics may be related to falls and increased gait variability in PwMS. This article is protected by copyright. All rights reserved.

  16. Quantitative Analysis of the Security of Software-Defined Network Controller Using Threat/Effort Model

    Directory of Open Access Journals (Sweden)

    Zehui Wu

    2017-01-01

    Full Text Available The SDN-based controller, which is responsible for the configuration and management of the network, is the core of Software-Defined Networks. Current methods, which focus on the security mechanism, use qualitative analysis to estimate the security of controllers, frequently leading to inaccurate results. In this paper, we employ a quantitative approach to overcome this shortcoming. Based on an analysis of the controller threat model, we give formal models of the APIs, the protocol interfaces, and the data items of the controller, and further provide our Threat/Effort quantitative calculation model. With the help of the Threat/Effort model, we are able to compare not only the security of different versions of the same kind of controller but also different kinds of controllers, providing a basis for controller selection and secure development. We evaluated our approach on four widely used SDN-based controllers: POX, OpenDaylight, Floodlight, and Ryu. The test, which shows outcomes similar to traditional qualitative analysis, demonstrates that with our approach we are able to obtain specific security values for different controllers and present more accurate results.

  17. Malware Propagation and Prevention Model for Time-Varying Community Networks within Software Defined Networks

    Directory of Open Access Journals (Sweden)

    Lan Liu

    2017-01-01

    Full Text Available As the adoption of Software Defined Networks (SDNs) grows, the security of SDN still has several unaddressed limitations. A key network security research area is the study of malware propagation across SDN-enabled networks. To analyze the spreading processes of network malware (e.g., viruses) in SDN, we propose a dynamic model with a time-varying community network, inspired by research models on the spread of epidemics in complex networks across communities. We treat subnets of the network as communities, with links that are dense within subnets but sparse between them. Using numerical simulation and theoretical analysis, we find that the efficiency of network malware propagation in this model depends on the mobility rate q of the nodes between subnets. We also find that there exists a mobility rate threshold qc: the network malware will spread in the SDN and survive when q > qc, and perish when q < qc. The simulation results show that the model is effective, and the results may help to decide the SDN control strategy to defend against network malware and provide a theoretical basis to reduce and prevent network security incidents.
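    A minimal simulation conveys the threshold behaviour the authors describe. The sketch below is not the paper's model: it uses an SIS-style infection/recovery rule with well-mixed subnets and illustrative rates, with a fraction q of nodes switching subnets at each step:

```python
import numpy as np

rng = np.random.default_rng(1)

def simulate_sis_communities(n_sub=4, n_nodes=200, beta=0.3, gamma=0.1,
                             q=0.05, steps=500):
    """Sketch of malware spread over subnets ("communities") with node
    mobility. Well-mixed subnets stand in for dense intra-subnet links;
    all rates are illustrative assumptions, not the paper's parameters.
    Returns the final infected fraction.
    """
    subnet = rng.integers(0, n_sub, n_nodes)  # community of each node
    infected = np.zeros(n_nodes, dtype=bool)
    infected[0] = True                        # seed one infected node
    for _ in range(steps):
        # infection: susceptibles are exposed to the infected fraction
        # within their own subnet
        for s in range(n_sub):
            members = subnet == s
            frac_inf = infected[members].mean() if members.any() else 0.0
            new_inf = members & ~infected & (rng.random(n_nodes) < beta * frac_inf)
            infected |= new_inf
        infected &= rng.random(n_nodes) >= gamma  # recovery back to susceptible
        movers = rng.random(n_nodes) < q          # mobility between subnets
        subnet[movers] = rng.integers(0, n_sub, movers.sum())
    return infected.mean()

# Sweeping q shows the threshold behaviour: below some q_c the malware
# dies out; above it a finite fraction of nodes stays infected.
for q in (0.001, 0.01, 0.1):
    print(q, simulate_sis_communities(q=q))
```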

  18. Modeling variability in porescale multiphase flow experiments

    Science.gov (United States)

    Ling, Bowen; Bao, Jie; Oostrom, Mart; Battiato, Ilenia; Tartakovsky, Alexandre M.

    2017-07-01

    Microfluidic devices and porescale numerical models are commonly used to study multiphase flow in biological, geological, and engineered porous materials. In this work, we perform a set of drainage and imbibition experiments in six identical microfluidic cells to study the reproducibility of multiphase flow experiments. We observe significant variations in the experimental results, which are smaller during the drainage stage and larger during the imbibition stage. We demonstrate that these variations are due to sub-porescale geometry differences in microcells (because of manufacturing defects) and variations in the boundary condition (i.e., fluctuations in the injection rate inherent to syringe pumps). Computational simulations are conducted using commercial software STAR-CCM+, both with constant and randomly varying injection rates. Stochastic simulations are able to capture variability in the experiments associated with the varying pump injection rate.

  19. Variable trajectory model for regional assessments of air pollution from sulfur compounds.

    Energy Technology Data Exchange (ETDEWEB)

    Powell, D.C.; McNaughton, D.J.; Wendell, L.L.; Drake, R.L.

    1979-02-01

    This report describes a sulfur oxides atmospheric pollution model that calculates trajectories using single-layer historical wind data, as well as chemical transformation and deposition following discrete contaminant air masses. Vertical diffusion under constraints is calculated, but all horizontal dispersion is a function of trajectory variation. The ground-level air concentrations and deposition are calculated in a rectangular area comprising the northeastern United States and southeastern Canada. Calculations for a 29-day assessment period in April 1974 are presented along with a limited verification. Results for the studies were calculated using a source inventory comprising 61% of the anthropogenic SO2 emissions. Using current model parameterization levels, predicted concentration values are most sensitive to variations in dry deposition of SO2, wet deposition of sulfate, and transformation of SO2 to sulfate. Replacing the variable mixed-layer depth and variable stability features of the model with constant definitions of each results in increased ground-level concentration predictions for SO2 and particularly for sulfate.

  20. Climate simulations for 1880-2003 with GISS modelE

    International Nuclear Information System (INIS)

    Hansen, J.; Lacis, A.; Miller, R.; Schmidt, G.A.; Russell, G.; Canuto, V.; Del Genio, A.; Hall, T.; Hansen, J.; Sato, M.; Kharecha, P.; Nazarenko, L.; Aleinov, I.; Bauer, S.; Chandler, M.; Faluvegi, G.; Jonas, J.; Ruedy, R.; Lo, K.; Cheng, Y.; Lacis, A.; Schmidt, G.A.; Del Genio, A.; Miller, R.; Cairns, B.; Hall, T.; Baum, E.; Cohen, A.; Fleming, E.; Jackman, C.; Friend, A.; Kelley, M.

    2007-01-01

    We carry out climate simulations for 1880-2003 with GISS modelE driven by ten measured or estimated climate forcings. An ensemble of climate model runs is carried out for each forcing acting individually and for all forcing mechanisms acting together. We compare side-by-side the simulated climate change for each forcing, all forcings, observations, unforced variability among model ensemble members, and, if available, observed variability. Discrepancies between observations and simulations with all forcings are due to model deficiencies, inaccurate or incomplete forcings, and imperfect observations. Although there are notable discrepancies between model and observations, the fidelity is sufficient to encourage use of the model for simulations of future climate change. By using a fixed, well-documented model and accurately defining the 1880-2003 forcings, we aim to provide a benchmark against which the effect of improvements in the model, climate forcings, and observations can be tested. Principal model deficiencies include unrealistically weak tropical El Nino-like variability and a poor distribution of sea ice, with too much sea ice in the Northern Hemisphere and too little in the Southern Hemisphere. The greatest uncertainties in the forcings are the temporal and spatial variations of anthropogenic aerosols and their indirect effects on clouds. (authors)

  1. A synapse memristor model with forgetting effect

    International Nuclear Information System (INIS)

    Chen, Ling; Li, Chuandong; Huang, Tingwen; Chen, Yiran; Wen, Shiping; Qi, Jiangtao

    2013-01-01

    In this Letter, we improve the ion diffusion term proposed in the literature and redesign the previous model as a dynamical model with two additional internal state variables, 'forgetting rate' and 'retention', besides the original variable 'conductance'. The new model can not only describe the basic memory ability of the memristor but is also able to capture the newly found forgetting behavior in memristors. In addition, unlike the previous model, the transition from short-term memory to long-term memory is also defined by the new model. Moreover, the new model is better matched to the physical memristor (Pd/WOx/W) than the previous one.
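    The qualitative behaviour can be sketched with a driven-decay state equation: conductance grows under an applied voltage but relaxes toward its baseline with a 'forgetting' time constant. The drive/decay form and all parameter values below are assumptions for illustration, not the Letter's equations for the Pd/WOx/W device:

```python
import numpy as np

def simulate_memristor(v_of_t, dt=1e-4, tau=0.5, eta=1.0,
                       g_min=1e-5, g_max=1e-3, t_end=2.0):
    """Illustrative sketch of a synapse memristor with forgetting: the
    conductance g is driven by the applied voltage but also decays back
    toward its baseline with time constant tau (the 'forgetting rate'),
    so part of the stored state is volatile. Parameter values and the
    drive/decay form are assumptions, not the Letter's exact model.
    """
    t = np.arange(0.0, t_end, dt)
    g = np.empty_like(t)
    g[0] = g_min
    for k in range(1, len(t)):
        drive = eta * v_of_t(t[k])        # ion drift pushed by the voltage
        decay = (g[k - 1] - g_min) / tau  # spontaneous ion diffusion
        g[k] = np.clip(g[k - 1] + dt * (drive - decay), g_min, g_max)
    return t, g

# A train of positive pulses potentiates the device; between pulses the
# conductance relaxes, i.e., short-term memory fades unless reinforced.
pulse = lambda t: 1e-3 if (t % 0.25) < 0.05 else 0.0
t, g = simulate_memristor(pulse)
```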

  2. [Correlation coefficient-based classification method of hydrological dependence variability: With auto-regression model as example].

    Science.gov (United States)

    Zhao, Yu Xi; Xie, Ping; Sang, Yan Fang; Wu, Zi Yi

    2018-04-01

    Hydrological processes are temporally dependent. Hydrological time series that include dependence components do not meet the data consistency assumption for hydrological computation. Both of these factors cause great difficulty for water research. Given the existence of hydrological dependence variability, we proposed a correlation-coefficient-based method for significance evaluation of hydrological dependence based on an auto-regression model. By calculating the correlation coefficient between the original series and its dependence component and selecting reasonable thresholds of the correlation coefficient, this method divides the significance degree of dependence into no variability, weak variability, mid variability, strong variability, and drastic variability. By deducing the relationship between the correlation coefficient and the auto-correlation coefficients at each order of the series, we found that the correlation coefficient is mainly determined by the magnitude of the auto-correlation coefficients from order 1 to order p, which clarifies the theoretical basis of this method. With the first-order and second-order auto-regression models as examples, the reasonability of the deduced formula was verified through Monte Carlo experiments classifying the relationship between the correlation coefficient and the auto-correlation coefficients. This method was used to analyze three observed hydrological time series. The results indicate the coexistence of stochastic and dependence characteristics in hydrological processes.
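    A minimal sketch of the idea for a first-order auto-regression, with placeholder thresholds (the paper derives its own): estimate the AR(1) coefficient, form the dependence component, correlate it with the original series, and grade the dependence by thresholding that correlation:

```python
import numpy as np

def dependence_grade(series, thresholds=(0.2, 0.4, 0.6, 0.8)):
    """Sketch of the proposed idea for an AR(1) series: correlate the
    series with its fitted dependence component and grade the dependence
    by thresholding the correlation coefficient. The threshold values
    here are placeholders, not the paper's.
    """
    x = np.asarray(series, dtype=float)
    x = x - x.mean()
    phi = np.dot(x[1:], x[:-1]) / np.dot(x[:-1], x[:-1])  # lag-1 AR coefficient
    dependence = phi * x[:-1]                             # fitted dependence component
    r = np.corrcoef(x[1:], dependence)[0, 1]
    grades = ["no", "weak", "mid", "strong", "drastic"]
    return r, grades[np.searchsorted(thresholds, abs(r))]

# AR(1) toy series with strong persistence:
rng = np.random.default_rng(2)
x = [0.0]
for _ in range(499):
    x.append(0.8 * x[-1] + rng.normal())
print(dependence_grade(x))
```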

  3. Dynamic modelling and analysis of a wind turbine with variable speed

    NARCIS (Netherlands)

    Steinbuch, M.

    1986-01-01

    On behalf of the operation of the Dutch National Wind Farm, which is now under construction, a study is being performed on the control system design of variable speed wind turbines. To realize this, a non-linear dynamic model of a wind turbine with synchronous generator and AC/DC/AC conversion has

  4. Electrical Activity in a Time-Delay Four-Variable Neuron Model under Electromagnetic Induction

    Directory of Open Access Journals (Sweden)

    Keming Tang

    2017-11-01

    Full Text Available To investigate the effect of electromagnetic induction on the electrical activity of a neuron, a variable for magnetic flux is used to improve the Hindmarsh–Rose neuron model. At the same time, because time delays exist when signals are propagated between neurons, or even within one neuron, it is important to study the role of time delay in regulating the electrical activity of the neuron. To this end, a four-variable neuron model is proposed to investigate the effects of electromagnetic induction and time delay. Simulation results suggest that the proposed neuron model can show multiple modes of electrical activity, depending on the time delay and the external forcing current. This means that a suitable discharge mode can be obtained by selecting the time delay or the external forcing current, which could be helpful for further investigation of electromagnetic radiation effects on biological neuronal systems.
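    A common way to write such a four-variable model is to add a magnetic-flux equation and a flux-controlled feedback term rho(phi) = alpha + 3*beta*phi^2 to the Hindmarsh-Rose equations. The sketch below follows that generic form with a delay applied to the membrane-potential feedback; the parameter values and the placement of the delay are illustrative assumptions, not necessarily this paper's formulation:

```python
import numpy as np

def simulate_hr_flux(I_ext=3.0, delay=1.0, dt=0.01, t_end=200.0,
                     k0=1.0, k1=0.4, k2=0.9, alpha=0.4, beta=0.02):
    """Sketch of a four-variable Hindmarsh-Rose-type neuron with a
    magnetic-flux variable phi and a transmission delay on the membrane
    potential feedback. Uses the common memory-conductance form
    rho(phi) = alpha + 3*beta*phi**2; parameters are illustrative.
    """
    n, d = int(t_end / dt), int(delay / dt)
    x = np.zeros(n); y = np.zeros(n); z = np.zeros(n); phi = np.zeros(n)
    x[0] = -1.0
    for k in range(n - 1):
        x_d = x[k - d] if k >= d else x[0]      # delayed membrane potential
        rho = alpha + 3.0 * beta * phi[k] ** 2  # memductance of the flux term
        dx = y[k] - x[k] ** 3 + 3.0 * x[k] ** 2 - z[k] + I_ext - k1 * rho * x_d
        dy = 1.0 - 5.0 * x[k] ** 2 - y[k]
        dz = 0.006 * (4.0 * (x[k] + 1.6) - z[k])
        dphi = k0 * x[k] - k2 * phi[k]          # flux driven by the potential
        x[k+1] = x[k] + dt*dx; y[k+1] = y[k] + dt*dy
        z[k+1] = z[k] + dt*dz; phi[k+1] = phi[k] + dt*dphi
    return x

# Different (I_ext, delay) pairs yield quiescent, spiking, or bursting traces.
trace = simulate_hr_flux(I_ext=3.0, delay=1.0)
```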

  5. On the intra-seasonal variability within the extratropics in a general circulation model and observational data

    International Nuclear Information System (INIS)

    May, W.; Bengtsson, L.

    1994-01-01

    There are various phenomena on different spatial and temporal scales contributing to the intra-seasonal variability within the extratropics. One may notice higher-frequency baroclinic disturbances affecting the day-to-day variability of the atmosphere, but one also finds low-frequency fluctuations on a typical time scale of a few weeks. Blocking anticyclones are probably the most prominent example of such features. These fluctuations on different scales, however, influence each other, in particular in their temporal evolution and spatial distribution. There has been observational work on the various phenomena contributing to the intra-seasonal variability for a long time. In the last decade or so, however, with the increasing importance of General Circulation Models, there have been some studies dealing with the intra-seasonal variability as simulated by these models.

  6. Utilising artificial intelligence in software defined wireless sensor network

    CSIR Research Space (South Africa)

    Matlou, OG

    2017-10-01

    Full Text Available A Software Defined Wireless Sensor Network (SDWSN) is realised by infusing the Software Defined Network (SDN) model into a Wireless Sensor Network (WSN); the reason for this is to overcome the challenges of WSN. Artificial Intelligence (AI) and machine learning...

  7. Risk methodology for geologic disposal of radioactive waste: asymptotic properties of the environmental transport model

    International Nuclear Information System (INIS)

    Helton, J.C.; Brown, J.B.; Iman, R.L.

    1981-02-01

    The Environmental Transport Model is a compartmental model developed to represent the surface movement of radionuclides. The purpose of the present study is to investigate the asymptotic behavior of the model and to acquire insight with respect to such behavior and the variables which influence it. For four variations of a hypothetical river receiving a radionuclide discharge, the following properties are considered: predicted asymptotic values for environmental radionuclide concentrations and time required for environmental radionuclide concentrations to reach 90% of their predicted asymptotic values. Independent variables of two types are used to define each variation of the river: variables which define physical properties of the river system (e.g., soil depth, river discharge and sediment resuspension) and variables which summarize radionuclide properties (i.e., distribution coefficients). Sensitivity analysis techniques based on stepwise regression are used to determine the dominant variables influencing the behavior of the model. This work constitutes part of a project at Sandia National Laboratories funded by the Nuclear Regulatory Commission to develop a methodology to assess the risk associated with geologic disposal of radioactive waste

  8. White dwarf models of supernovae and cataclysmic variables

    International Nuclear Information System (INIS)

    Nomoto, K.; Hashimoto, M.

    1986-01-01

    If the accreting white dwarf increases its mass to the Chandrasekhar mass, it will either explode as a Type I supernova or collapse to form a neutron star. In fact, there is good agreement between the exploding white dwarf model for Type I supernovae and observations. We describe various types of evolution of accreting white dwarfs as a function of binary parameters (i.e., composition, mass, and age of the white dwarf, its companion star, and mass accretion rate), and discuss the conditions for the precursors of exploding or collapsing white dwarfs and their relevance to cataclysmic variables. Particular attention is given to helium star cataclysmics, which might be the precursors of some Type I supernovae or ultrashort-period x-ray binaries. Finally, we present new evolutionary calculations using the updated nuclear reaction rates for the formation of O+Ne+Mg white dwarfs, and discuss the composition structure and its relevance to the model for neon novae. 61 refs., 14 figs

  9. Modeling and Design Optimization of Variable-Speed Wind Turbine Systems

    Directory of Open Access Journals (Sweden)

    Ulas Eminoglu

    2014-01-01

    Full Text Available As a result of the increase in energy demand and government subsidies, the usage of wind turbine systems (WTS) has increased dramatically. Due to the higher energy production of a variable-speed WTS compared to a fixed-speed WTS, the demand for this type of WTS has increased. In this study, a new method for the calculation of the power output of variable-speed WTSs is proposed. The proposed model is developed from the S-type curve used for population growth and is only a function of the rated power and rated (nominal) wind speed. It has the advantage of enabling the user to calculate power output without using the rotor power coefficient. Additionally, by using the developed model, a mathematical method to calculate the value of the rated wind speed in terms of the turbine capacity factor and the scale parameter of the Weibull distribution for a given wind site is also proposed. Design optimization studies are performed using the particle swarm optimization (PSO) and artificial bee colony (ABC) algorithms, which are applied to this type of problem for the first time. Different sites, such as Northern and Mediterranean sites of Europe, have been studied. Analyses of various parameters are also presented in order to evaluate the effect of rated wind speed on the design parameters and the cost of the energy produced. Results show that the proposed models are reliable and very useful for the modeling and optimization of WTS designs, taking into account the wind potential of the region. Results also show that the PSO algorithm performs better than the ABC algorithm for this type of problem.
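    The flavor of the S-type model can be sketched as a logistic power curve that needs only rated power and rated wind speed, averaged over a Weibull wind distribution to get the capacity factor. The curve midpoint and steepness below are illustrative assumptions, not the paper's fitted constants:

```python
import numpy as np
from scipy.stats import weibull_min

def power_output_scurve(v, p_rated, v_rated, k=1.0):
    """Logistic (S-type) power curve: output depends only on rated power
    and rated wind speed, in the spirit of the paper's model. The curve
    midpoint (0.7 * v_rated) and steepness k are illustrative
    assumptions; cut-in/cut-out behaviour is ignored.
    """
    v = np.asarray(v, dtype=float)
    p = p_rated / (1.0 + np.exp(-k * (v - 0.7 * v_rated)))
    return np.minimum(p, p_rated)

def capacity_factor(p_rated, v_rated, weibull_scale, weibull_shape=2.0):
    """Average the power curve over a Weibull wind-speed distribution,
    linking rated wind speed, capacity factor, and the Weibull scale
    parameter as discussed in the abstract."""
    v = np.linspace(0.0, 30.0, 3001)
    pdf = weibull_min.pdf(v, weibull_shape, scale=weibull_scale)
    dv = v[1] - v[0]
    return np.sum(power_output_scurve(v, p_rated, v_rated) * pdf) * dv / p_rated

# A 2 MW turbine, rated wind speed 12 m/s, at a site with Weibull scale 8 m/s:
print(capacity_factor(p_rated=2000.0, v_rated=12.0, weibull_scale=8.0))
```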

  10. Generalized Additive Models for Location Scale and Shape (GAMLSS) in R

    OpenAIRE

    D. Mikis Stasinopoulos; Robert A. Rigby

    2007-01-01

    GAMLSS is a general framework for fitting regression-type models where the distribution of the response variable does not have to belong to the exponential family and may be a highly skewed or kurtotic continuous or discrete distribution. GAMLSS allows all the parameters of the distribution of the response variable to be modelled as linear/non-linear or smooth functions of the explanatory variables. This paper starts by defining the statistical framework of GAMLSS, then describes the curren...

  11. About hidden influence of predictor variables: Suppressor and mediator variables

    Directory of Open Access Journals (Sweden)

    Milovanović Boško

    2013-01-01

    Full Text Available In this paper, a procedure for investigating the hidden influence of predictor variables in regression models and for detecting suppressor variables and mediator variables is shown. It is also shown that the detection of suppressor and mediator variables can provide refined information about the research problem. As an example of the application of this procedure, the relation between Atlantic atmospheric centers and air temperature and precipitation amounts in Serbia is chosen. [Project of the Ministry of Science of the Republic of Serbia, No. 47007]

  12. ltm: An R Package for Latent Variable Modeling and Item Response Analysis

    Directory of Open Access Journals (Sweden)

    Dimitris Rizopoulos

    2006-11-01

    Full Text Available The R package ltm has been developed for the analysis of multivariate dichotomous and polytomous data using latent variable models, under the Item Response Theory approach. For dichotomous data, the Rasch, the Two-Parameter Logistic, and Birnbaum's Three-Parameter models have been implemented, whereas for polytomous data Samejima's Graded Response model is available. Parameter estimates are obtained under marginal maximum likelihood using the Gauss-Hermite quadrature rule. The capabilities and features of the package are illustrated using two real data examples.
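    The dichotomous models named here have standard forms; for reference, the three-parameter model (with the 2PL and Rasch models as special cases) can be written as:

```latex
% Birnbaum's three-parameter logistic model for item i and person j:
% discrimination a_i, difficulty b_i, guessing c_i, latent trait theta_j.
\[
  P(x_{ij}=1 \mid \theta_j) \;=\; c_i + (1-c_i)\,
  \frac{1}{1+\exp\{-a_i(\theta_j-b_i)\}}
\]
% Setting c_i = 0 gives the Two-Parameter Logistic model;
% additionally fixing a_i = 1 gives the Rasch model.
```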

  13. Effects of short-term variability of meteorological variables on soil temperature in permafrost regions

    Science.gov (United States)

    Beer, Christian; Porada, Philipp; Ekici, Altug; Brakebusch, Matthias

    2018-03-01

    Effects of the short-term temporal variability of meteorological variables on soil temperature in northern high-latitude regions have been investigated. For this, a process-oriented land surface model has been driven using an artificially manipulated climate dataset. Short-term climate variability mainly impacts snow depth and the thermal diffusivity of lichens and bryophytes. These impacts of climate variability on insulating surface layers together substantially alter the heat exchange between atmosphere and soil. As a result, soil temperature is 0.1 to 0.8 °C higher when climate variability is reduced. Earth system models project warming of the Arctic region but also increasing variability of meteorological variables and more frequent extreme meteorological events. Therefore, our results show that projected future increases in permafrost temperature and active-layer thickness in response to climate change will be lower (i) when taking into account future changes in the short-term variability of meteorological variables and (ii) when representing dynamic snow and lichen and bryophyte functions in land surface models.

  14. Self-rated driving habits among older adults with clinically-defined mild cognitive impairment, clinically-defined dementia, and normal cognition.

    Science.gov (United States)

    O'Connor, Melissa L; Edwards, Jerri D; Bannon, Yvonne

    2013-12-01

    Older adults with clinically-defined dementia may report reducing their driving more than cognitively normal controls. However, it is unclear how these groups compare to individuals with clinically-defined mild cognitive impairment (MCI) in terms of driving behaviors. The current study investigated self-reported driving habits among adults age 60 and older with clinical MCI (n=41), clinical mild dementia (n=40), and normal cognition (n=43). Participants reported their driving status, driving frequency (days per week), and how often they avoided accessing the community, making left turns, driving at night, driving in unfamiliar areas, driving on high-traffic roads, and driving in bad weather. After adjusting for education, a MANCOVA revealed that participants with MCI and dementia avoided unfamiliar areas and high-traffic roads significantly more than cognitively normal participants. Participants with dementia also avoided left turns and accessing the community more than those with normal cognition and MCI (p < .05). Other driving variables did not significantly differ between groups. Thus, older adults with clinically-defined MCI, as well as those with dementia, avoided some complex driving situations more than cognitively intact adults. However, all diagnostic groups had similar rates of driving cessation and frequency. Future research should examine the safety implications of such findings. Copyright © 2013 Elsevier Ltd. All rights reserved.

  15. Valuation and Hedging of Variable Annuities in Pension Schemes

    NARCIS (Netherlands)

    Bovenberg, A.L.; van Bilsen, S.; Laeven, R.J.A.

    2018-01-01

    This paper explores defined ambition pension schemes that provide (deferred) variable annuities. These pension schemes allocate various risks (i.e., real interest rate, expected inflation and stock market risk) to the policyholders on the basis of complete contracts. We show how these variable

  16. Application of latent variable model in Rosenberg self-esteem scale.

    Science.gov (United States)

    Leung, Shing-On; Wu, Hui-Ping

    2013-01-01

    Latent Variable Models (LVM) are applied to the Rosenberg Self-Esteem Scale (RSES). Parameter estimates automatically take negative signs, hence no recoding is necessary for negatively scored items. Bad items can be located through parameter estimates, item characteristic curves, and other measures. Two factors are extracted, one on self-esteem and the other on the tendency to take moderate views, with the latter not often being covered in previous studies. A goodness-of-fit measure based on two-way margins is used, but more work is needed. Results show that the scaling provided by models with a more formal statistical grounding correlates highly with the conventional method, which may provide justification for the usual practice.

  17. An Investigation into the Relationship among Psychiatric, Demographic and Socio-Economic Variables with Bayesian Network Modeling

    Directory of Open Access Journals (Sweden)

    Gunal Bilek

    2018-03-01

    The aim of this paper is to investigate the factors influencing the Beck Depression Inventory score, the Beck Hopelessness Scale score and the Rosenberg Self-Esteem score, and the relationships among the psychiatric, demographic and socio-economic variables, with Bayesian network modeling. The dataset comprises 21 relevant continuous and discrete psychiatric, demographic and socio-economic variables for 823 university students. After discretizing the continuous variables by two approaches, two Bayesian network models are constructed using the bnlearn package in R, and the results are presented via figures and probabilities. One of the most significant results is that in the first Bayesian network model, the gender of the students influences the level of depression, with female students being more depressed. In the second model, social activity directly influences the level of depression. In each model, depression influences both the level of hopelessness and the level of self-esteem; additionally, as the level of depression increases, the level of hopelessness increases, but the level of self-esteem drops.
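
    The paper works in R with bnlearn; as a hedged Python sketch of the same pipeline (two discretization approaches, then score-based structure learning), pandas and pgmpy can stand in. The variables and data below are invented stand-ins for the study's 21 variables, and pgmpy's classic HillClimbSearch/BicScore API is assumed.

```python
import numpy as np
import pandas as pd
from pgmpy.estimators import HillClimbSearch, BicScore

rng = np.random.default_rng(7)
n = 823

# Invented stand-ins for the study's scores
depression = rng.normal(20, 8, n)
df = pd.DataFrame({
    "gender": rng.choice(["female", "male"], n),
    "depression": depression,
    "hopelessness": 0.5 * depression + rng.normal(0, 4, n),
    "self_esteem": -0.4 * depression + rng.normal(30, 4, n),
})
continuous = ["depression", "hopelessness", "self_esteem"]

# Discretization approach 1: equal-width bins
df_width = df.copy()
for c in continuous:
    df_width[c] = pd.cut(df[c], bins=3, labels=["low", "mid", "high"])

# Discretization approach 2: equal-frequency (quantile) bins
df_quant = df.copy()
for c in continuous:
    df_quant[c] = pd.qcut(df[c], q=3, labels=["low", "mid", "high"])

# Score-based structure learning for each discretized dataset
for name, data in [("equal-width", df_width), ("quantile", df_quant)]:
    dag = HillClimbSearch(data).estimate(scoring_method=BicScore(data))
    print(name, sorted(dag.edges()))
```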

  18. Human activity and climate variability project: annual report 2001

    International Nuclear Information System (INIS)

    Harle, K.J.; Heijnis, H.; Henderson-Sellers, A.; Sharmeen, S.; Zahorowski, W.

    2002-01-01

    Knowledge of the state of the Australian environment, including natural climate variability, prior to colonial settlement is vital if we are to define and understand the impact of over two hundred years of post-industrial human activity on our landscape. ANSTO, in conjunction with university partners, is leading a major research effort to provide natural archives of human activity and climate variability over the last 500 years in Australia. The project utilises a variety of techniques, including lead-210 and radiocarbon dating and analyses of proxy indicators (such as microfossils) as well as direct evidence (such as trace elements) of human activity and climate variability. The other major project objectives were to contribute to the understanding of the climatic impact of human-induced and natural aerosols in the East Asian region, through analysis and sourcing of fine particles and characterisation of air samples using radon concentrations, and to contribute to the improvement of land surface parameterisation schemes and investigate the potential of stable isotopes to improve global climate models and thus our understanding of future climate.

  19. A business planning model to identify new safety net clinic locations.

    Science.gov (United States)

    Langabeer, James; Helton, Jeffrey; DelliFraine, Jami; Dotson, Ebbin; Watts, Carolyn; Love, Karen

    2014-01-01

    Community health clinics serving the poor and underserved are geographically expanding due to changes in U.S. health care policy. This paper describes the experience of a collaborative alliance of health care providers in a large metropolitan area who developed a conceptual and mathematical decision model to guide decisions on expanding their network of community health clinics. Community stakeholders participated in a collaborative process that defined the constructs they deemed important in guiding decisions on the location of community health clinics. This collaboration also defined key variables within each construct. Scores for the variables within each construct were then totaled and weighted into a community-specific optimal space planning equation. The analysis relied entirely on secondary data available from published sources. The model built from this collaboration revolved around the constructs of demand, sustainability, and competition. It used publicly available data defining variables within each construct to arrive at an optimal location that maximized demand and sustainability and minimized competition. Safety net clinic planners and community stakeholders can use this model to analyze demographic and utilization data and optimize capacity expansion to serve uninsured and Medicaid populations. Communities can use this innovative model to develop a locally relevant clinic location-planning framework.
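
    The abstract describes a weighted scoring equation over three constructs; a minimal sketch of such an equation follows, with invented weights and candidate-site scores (the paper's actual weights and variables are not given in this record).

```python
# Weighted construct scoring for candidate clinic sites: maximize demand and
# sustainability, minimize competition. All weights and scores are illustrative.
weights = {"demand": 0.5, "sustainability": 0.3, "competition": 0.2}

candidate_sites = {
    # site: construct scores already totaled from their variables (0-100)
    "eastside": {"demand": 85, "sustainability": 60, "competition": 70},
    "downtown": {"demand": 90, "sustainability": 55, "competition": 95},
    "northgate": {"demand": 70, "sustainability": 80, "competition": 30},
}

def site_score(scores):
    # Competition enters negatively: strong nearby competition lowers the score
    return (weights["demand"] * scores["demand"]
            + weights["sustainability"] * scores["sustainability"]
            - weights["competition"] * scores["competition"])

for site, scores in candidate_sites.items():
    print(f"{site}: {site_score(scores):.1f}")
best = max(candidate_sites, key=lambda s: site_score(candidate_sites[s]))
print("optimal location:", best)
```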

  20. Introducing and modeling inefficiency contributions

    DEFF Research Database (Denmark)

    Asmild, Mette; Kronborg, Dorte; Matthews, Kent

    2016-01-01

    This paper introduces so-called inefficiency contributions, which are defined as the relative contributions from specific variables to the overall levels of inefficiency. A statistical model for distinguishing the inefficiency contributions between subgroups is proposed, and the method is illustrated on a data set on Chinese banks.
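
    Under the definition above, a variable's inefficiency contribution is its share of the overall inefficiency. A minimal numerical sketch with invented bank-input data follows (a simple additive decomposition, not the paper's statistical model).

```python
# Invented variable-specific inefficiencies (e.g., excess input usage) for
# one decision-making unit across three input variables
inefficiency = {"staff": 0.12, "deposits": 0.30, "branches": 0.18}

total = sum(inefficiency.values())
contributions = {var: x / total for var, x in inefficiency.items()}

for var, share in contributions.items():
    print(f"{var}: {share:.1%} of overall inefficiency")
# Contributions are relative shares, so they sum to 100% by construction
print("sum:", round(sum(contributions.values()), 10))
```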