WorldWideScience

Sample records for models variables defined

  1. Defining a Family of Cognitive Diagnosis Models Using Log-Linear Models with Latent Variables

    Science.gov (United States)

    Henson, Robert A.; Templin, Jonathan L.; Willse, John T.

    2009-01-01

    This paper uses log-linear models with latent variables (Hagenaars, in "Loglinear Models with Latent Variables," 1993) to define a family of cognitive diagnosis models. In doing so, the relationship between many common models is explicitly defined and discussed. In addition, because the log-linear model with latent variables is a general model for…
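    For orientation, a minimal sketch of how a log-linear model with latent variables defines item response probabilities in a cognitive diagnosis setting (notation is generic, not necessarily the paper's exact parameterization): for examinee attribute profile $\alpha_i$ and item $j$,

    $$P(X_{ij}=1 \mid \alpha_i) = \frac{\exp\big(\lambda_{j,0} + \boldsymbol{\lambda}_j^{\top}\mathbf{h}(\alpha_i, \mathbf{q}_j)\big)}{1 + \exp\big(\lambda_{j,0} + \boldsymbol{\lambda}_j^{\top}\mathbf{h}(\alpha_i, \mathbf{q}_j)\big)}$$

    where $\mathbf{h}(\cdot)$ collects main effects and interactions of the attributes required by item $j$ (its Q-matrix row $\mathbf{q}_j$). Constraining the $\lambda$ parameters recovers familiar special cases; for example, retaining only the highest-order interaction yields a DINA-type model.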

  2. Mathematical Model Defining Volumetric Losses of Hydraulic Oil Compression in a Variable Capacity Displacement Pump

    Directory of Open Access Journals (Sweden)

    Paszota Zygmunt

    2015-01-01

    Full Text Available The objective of the work is to develop the capability of evaluating the volumetric losses of hydraulic oil compression in the working chambers of a high pressure variable capacity displacement pump. Volumetric losses of oil compression must be determined as functions of the same parameters on which the volumetric losses due to leakage, resulting from the quality of the design solution of the pump, are evaluated as dependent, and also as a function of the oil aeration coefficient Ɛ. A mathematical model has been developed describing the hydraulic oil compressibility coefficient klc|ΔpPi;Ɛ;ν as a relation to the ratio ΔpPi/pn of the indicated increase ΔpPi of pressure in the working chambers and the nominal pressure pn, to the pump capacity coefficient bP, to the oil aeration coefficient Ɛ and to the ratio ν/νn of oil viscosity ν and reference viscosity νn. A mathematical model is presented of the volumetric losses qPvc|ΔpPi;bP;Ɛ;ν of hydraulic oil compression in the pump working chambers in a form allowing its use in the model of power losses and energy efficiency
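    As a point of orientation only (an illustration from standard fluid power theory, not the paper's model), the reason the aeration coefficient Ɛ dominates compression losses can be seen from the effective bulk modulus of an oil-air mixture:

    $$\frac{1}{K_{\mathrm{eff}}} \approx \frac{1-\varepsilon}{K_{\mathrm{oil}}} + \frac{\varepsilon}{p}$$

    (treating the entrained air isothermally at absolute pressure $p$). Even a small Ɛ sharply lowers $K_{\mathrm{eff}}$ at low pressure, so the volume lost to compression per working cycle grows with aeration.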

  3. The EUROclass trial: Defining subgroups in common variable immunodeficiency

    NARCIS (Netherlands)

    C. Wehr (Claudia); T. Kivioja (Teemu); C. Schmitt (Christian); B. Ferry (Berne); T. Witte (Torsten); E. Eren (Efrem); M. Vlkova (Marcela); M. Hernandez (Manuel); D. Detkova (Drahomira); P.R. Bos (Philip); G. Poerksen (Gonke); H. von Bernuth (Horst); U. Baumann (Ulrich); S. Goldacker (Sigune); S. Gutenberger (Sylvia); M. Schlesier (Michael); F. Bergeron-Van der Cruyssen (Florence); M. Le Garff (Magali); P. Debré (Patrice); R. Jacobs (Roland); J. Jones (John); E. Bateman (Elizabeth); J. Litzman (Jiri); P.M. van Hagen (Martin); A. Plebani (Alessandro); R. Schmidt (Reinhold); V. Thon (Vojtech); I. Quinti (Isabella); T. Espanol (Teresa); A.D. Webster (David); H. Chapel (Helen); M. Vihinen (Mauno); E. Oksenhendler (Eric); H.H. Peter; K. Warnatz (Klaus)

    2008-01-01

    The heterogeneity of common variable immunodeficiency (CVID) calls for a classification addressing pathogenic mechanisms as well as clinical relevance. This European multicenter trial was initiated to develop a consensus of 2 existing classification schemes based on flow cytometric B-cell…

  4. A Core Language for Separate Variability Modeling

    DEFF Research Database (Denmark)

    Iosif-Lazăr, Alexandru Florin; Wasowski, Andrzej; Schaefer, Ina

    2014-01-01

    Separate variability modeling adds variability to a modeling language without requiring modifications of the language or the supporting tools. We define a core language for separate variability modeling using a single kind of variation point to define transformations of software artifacts in object… hierarchical dependencies between variation points via copying and flattening. Thus, we reduce a model with intricate dependencies to a flat executable model transformation consisting of simple unconditional local variation points. The core semantics is extremely concise: it boils down to two operational rules…

  5. Defined solid-angle counter with variable geometry

    Energy Technology Data Exchange (ETDEWEB)

    Garcia-Torano, E. [Laboratorio de Metrologia de Radiaciones Ionizantes, CIEMAT, Avda. Complutense 22, 28040 Madrid (Spain)], E-mail: e.garciatorano@ciemat.es; Duran Ramiro, T. [Laboratorio de Metrologia de Radiaciones Ionizantes, CIEMAT, Avda. Complutense 22, 28040 Madrid (Spain); Burgos, C. [Division de Infraestrutura General Tecnica, CIEMAT, Madrid (Spain); Begona Ahedo, M. [Unidad de Ingenieria y Obras, CIEMAT, Madrid (Spain)

    2008-06-15

    We describe a defined solid-angle counter for the standardization of radioactive sources of alpha-particle emitters. It has been built with the aim of combining good counting efficiencies, low uncertainties and flexibility of operation. The distance between source and detector can be changed in a continuous way with a precision guide and a ball screw from 8 to 19 cm, which correspond to counting efficiencies between 0.023 and 0.004 for small size sources. A linear encoder allows the accurate determination of the source position. Alpha spectra of the sources are measured with an implanted silicon detector with an active area of 2000 mm². Uncertainties, excluding counting statistics, are below 0.1%.
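    The quoted efficiencies are consistent with the textbook on-axis solid-angle formula, which for a point source at distance $d$ from a circular detector of radius $r$ gives a geometric efficiency

    $$G = \frac{\Omega}{4\pi} = \frac{1}{2}\left(1 - \frac{d}{\sqrt{d^2 + r^2}}\right).$$

    With the 2000 mm² detector ($r \approx 25.2$ mm), $d = 80$ mm gives $G \approx 0.023$ and $d = 190$ mm gives $G \approx 0.004$, matching the stated range.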

  6. Individual Signatures Define Canine Skin Microbiota Composition and Variability

    Science.gov (United States)

    Cuscó, Anna; Sánchez, Armand; Altet, Laura; Ferrer, Lluís; Francino, Olga

    2017-01-01

    Dogs present almost all their skin sites covered by hair, but canine skin disorders are more common in certain skin sites and breeds. The goal of our study is to characterize the composition and variability of the skin microbiota in healthy dogs and to evaluate the effect of the breed, the skin site, and the individual. We have analyzed eight skin sites of nine healthy dogs from three different breeds by massive sequencing of 16S rRNA gene V1–V2 hypervariable regions. The main phyla inhabiting the skin microbiota in healthy dogs are Proteobacteria, Firmicutes, Fusobacteria, Actinobacteria, and Bacteroidetes. Our results suggest that skin microbiota composition pattern is individual specific, with some dogs presenting an even representation of the main phyla and other dogs with only a major phylum. The individual is the main force driving skin microbiota composition and diversity rather than the skin site or the breed. The individual is explaining 45% of the distances among samples, whereas skin site explains 19% and breed 9%. Moreover, analysis of similarities suggests a strong dissimilarity among individuals (R = 0.79, P = 0.001) that is mainly explained by low-abundant species in each dog. Skin site also plays a role: inner pinna presents the highest diversity value, whereas perianal region presents the lowest one and the most differentiated microbiota composition. PMID:28220148

  7. Modeling Pacific Decadal Variability

    Science.gov (United States)

    Schneider, N.

    2002-05-01

    Hypotheses for decadal variability rely on the large thermal inertia of the ocean to sequester heat and provide the long memory of the climate system. Understanding decadal variability requires the study of the generation of ocean anomalies at decadal frequencies, the evolution of oceanic signals, and the response of the atmosphere to oceanic perturbations. A sample of studies relevant for Pacific decadal variability will be reviewed in this presentation. The ocean integrates air-sea flux anomalies that result from internal atmospheric variability or broad-band coupled processes such as ENSO, or are an intrinsic part of the decadal feedback loop. Anomalies of Ekman pumping lead to deflections of the ocean thermocline and accompanying changes of the ocean circulation; perturbations of surface layer heat and fresh water budgets cause anomalies of T/S characteristics of water masses. The former process leads to decadal variability due to the dynamical adjustment of the mid latitude gyres or thermocline circulation; the latter accounts for the low frequency climate variations by the slow propagation of anomalies in the thermocline from the mid-latitude outcrops to the equatorial upwelling regions. Coupled modeling studies and ocean model hindcasts suggest that the adjustment of the North Pacific gyres to variation of Ekman pumping causes low frequency variations of surface temperature in the Kuroshio-Oyashio extension region. These changes appear predictable a few years in advance, and affect the local upper ocean heat budget and precipitation. The majority of low frequency variance is explained by the ocean's response to stochastic atmospheric forcing, the additional variance explained by mid-latitude ocean to atmosphere feedbacks appears to be small. The coupling of subtropical and tropical regions by the equator-ward motion in the thermocline can support decadal anomalies by changes of its speed and path, or by transporting water mass anomalies to the equatorial

  8. Modeling Shared Variables in VHDL

    DEFF Research Database (Denmark)

    Madsen, Jan; Brage, Jens P.

    1994-01-01

    A set of concurrent processes communicating through shared variables is an often used model for hardware systems. This paper presents three modeling techniques for representing such shared variables in VHDL, depending on the acceptable constraints on accesses to the variables. Also a set of guide…

  9. Defining Scenarios: Linking Integrated Models, Regional Concerns, and Stakeholders

    Science.gov (United States)

    Hartmann, H. C.; Stewart, S.; Liu, Y.; Mahmoud, M.

    2007-05-01

    Scenarios are important tools for long-term planning, and there is great interest in using integrated models in scenario studies. However, scenario definition and assessment are creative, as well as scientific, efforts. Using facilitated creative processes, we have worked with stakeholders to define regionally significant scenarios that encompass a broad range of hydroclimatic, socioeconomic, and institutional dimensions. The regional scenarios subsequently inform the definition of local scenarios that work with context-specific integrated models that, individually, can address only a subset of overall regional complexity. Based on concerns of stakeholders in the semi-arid US Southwest, we prioritized three dimensions that are especially important, yet highly uncertain, for long-term planning: hydroclimatic conditions (increased variability, persistent drought), development patterns (urban consolidation, distributed rural development), and the nature of public institutions (stressed, proactive). Linking across real-world decision contexts and integrated modeling efforts poses challenges of creatively connecting the conceptual models held by both the research and stakeholder communities.

  10. Defining Generic Architecture for Cloud Infrastructure as a Service model

    NARCIS (Netherlands)

    Demchenko, Y.; de Laat, C.

    2011-01-01

    Infrastructure as a Service (IaaS) is one of the provisioning models for Clouds as defined in the NIST Clouds definition. Although widely used, current IaaS implementations and solutions do not have a common and well-defined architecture model. The paper attempts to define a generic architecture for…

  11. Defining generic architecture for Cloud IaaS provisioning model

    NARCIS (Netherlands)

    Y. Demchenko; C. de Laat; A. Mavrin

    2011-01-01

    Infrastructure as a Service (IaaS) is one of the provisioning models for Clouds as defined in the NIST Clouds definition. Although widely used, current IaaS implementations and solutions do not have a common and well-defined architecture model. The paper attempts to define a generic architecture for…

  12. Bayesian modeling of measurement error in predictor variables

    NARCIS (Netherlands)

    Fox, Gerardus J.A.; Glas, Cornelis A.W.

    2003-01-01

    It is shown that measurement error in predictor variables can be modeled using item response theory (IRT). The predictor variables, which may be defined at any level of a hierarchical regression model, are treated as latent variables. The normal ogive model is used to describe the relation between…
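    A compact statement of the model structure described here (symbols are generic): the normal ogive measurement model links binary indicators $Y_{ik}$ to the latent predictor $\theta_i$, which then enters the regression,

    $$P(Y_{ik}=1 \mid \theta_i) = \Phi(a_k\theta_i - b_k), \qquad y_i = \beta_0 + \beta_1\theta_i + e_i,$$

    so that uncertainty about $\theta_i$ propagates into the regression coefficients instead of being ignored, as it would be if an error-prone sum score replaced $\theta_i$.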

  13. MODELING SUPPLY CHAIN PERFORMANCE VARIABLES

    Directory of Open Access Journals (Sweden)

    Ashish Agarwal

    2005-01-01

    Full Text Available In order to understand the dynamic behavior of the variables that can play a major role in performance improvement in a supply chain, a System Dynamics-based model is proposed. The model provides an effective framework for analyzing different variables affecting supply chain performance. A causal relationship among the different variables has been identified. Variables emanating from performance measures such as gaps in customer satisfaction, cost minimization, lead-time reduction, service level improvement and quality improvement have been identified as goal-seeking loops. The proposed System Dynamics-based model analyzes the effect of the dynamic behavior of these variables over a period of 10 years on the performance of a case supply chain in the auto business.
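    To make the goal-seeking loop idea concrete, here is a minimal sketch in Python (the stock, flow, and parameter values are illustrative assumptions, not the paper's calibrated model): a gap between a customer satisfaction goal and current performance drives corrective action that closes the gap over the simulated 10-year horizon.

    ```python
    # Minimal System Dynamics sketch: one goal-seeking loop, Euler integration.
    def simulate(years=10, dt=0.25, goal=1.0, adjust_time=2.0):
        satisfaction = 0.5                     # current performance level (stock)
        history = []
        for _ in range(int(years / dt)):
            gap = goal - satisfaction          # goal-seeking signal
            inflow = gap / adjust_time         # corrective action (flow)
            satisfaction += inflow * dt        # integrate the stock
            history.append(satisfaction)
        return history

    trace = simulate()
    print(f"satisfaction after 10 simulated years: {trace[-1]:.3f}")
    ```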

  14. Predicting Group-Level Outcome Variables from Variables Measured at the Individual Level: A Latent Variable Multilevel Model

    Science.gov (United States)

    Croon, Marcel A.; van Veldhoven, Marc J. P. M.

    2007-01-01

    In multilevel modeling, one often distinguishes between macro-micro and micro-macro situations. In a macro-micro multilevel situation, a dependent variable measured at the lower level is predicted or explained by variables measured at that lower or a higher level. In a micro-macro multilevel situation, a dependent variable defined at the higher…

  15. Defining Sentence Type: Further Evidence against Use of the Total Incarceration Variable

    Science.gov (United States)

    Harrington, Michael P.; Spohn, Cassia

    2007-01-01

    The effect of legal and extralegal factors on felony sentence outcomes has been widely studied, typically using a total incarceration variable that defines sentence outcomes as incarceration or probation. Research conducted by Holleran and Spohn has called this into question, revealing that factors that affected jail sentences were different than…

  16. Hybrid Unifying Variable Supernetwork Model

    Institute of Scientific and Technical Information of China (English)

    LIU; Qiang; FANG; Jin-qing; LI; Yong

    2015-01-01

    In order to compare new phenomena of topology change, evolution, hybrid ratio and network characteristics of the unified hybrid network theoretical model with the unified hybrid supernetwork model, this paper constructs a unified hybrid variable supernetwork model (HUVSM). The first layer introduces a hybrid ratio dr, the…

  17. Variable cluster analysis method for building neural network model

    Institute of Scientific and Technical Information of China (English)

    王海东; 刘元东

    2004-01-01

    To address the twin problems that input variables should be reduced as much as possible while still explaining the output variables fully when building a neural network model of a complicated system, a variable selection method based on cluster analysis was investigated. A similarity coefficient describing the mutual relation of variables was defined. The methods of highest contribution rate, part replacing whole, and variable replacement are put forward and derived from information theory. Software for neural networks based on cluster analysis, which provides several methods for defining the variable similarity coefficient, clustering system variables and evaluating variable clusters, was developed and applied to build a neural network forecast model of cement clinker quality. The results show that the network scale, training time and prediction accuracy are all satisfactory. The practical application demonstrates that this method of selecting variables for neural networks is feasible and effective.
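    A rough sketch of the approach in Python (assumptions: absolute correlation as the similarity coefficient, average-linkage clustering, and correlation with the target as the "contribution rate" rule; the paper's exact definitions may differ):

    ```python
    import numpy as np
    from scipy.cluster.hierarchy import linkage, fcluster

    def select_inputs(X, y, n_clusters=4):
        corr = np.abs(np.corrcoef(X, rowvar=False))    # variable similarity matrix
        dist = 1.0 - corr                              # similarity -> distance
        # condensed upper-triangular distances, as scipy's linkage expects
        Z = linkage(dist[np.triu_indices_from(dist, k=1)], method="average")
        labels = fcluster(Z, t=n_clusters, criterion="maxclust")
        keep = []
        for c in np.unique(labels):
            members = np.where(labels == c)[0]
            # keep the cluster member most correlated with the output variable
            scores = [abs(np.corrcoef(X[:, j], y)[0, 1]) for j in members]
            keep.append(members[int(np.argmax(scores))])
        return sorted(keep)

    rng = np.random.default_rng(0)
    X = rng.normal(size=(200, 10))
    y = X[:, 0] + 0.5 * X[:, 3] + rng.normal(scale=0.1, size=200)
    print(select_inputs(X, y))   # representative inputs, one per cluster
    ```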

  18. IN THE MAZE OF E-COMMERCE. ONLINE TRADE DEFINING VARIABLES IN ROMANIA

    Directory of Open Access Journals (Sweden)

    Erika KULCSÁR

    2017-05-01

    Full Text Available The number of articles dealing with the issue of online trade is significant at both the international and national levels. The main themes addressed in this article are the following: (a) the characteristics that define the segment of those who purchase via the Internet, (b) the influencing factors which play a crucial role in purchases made online, (c) the identification of those variables through which online consumer behavior can be studied, and (d) the advantages offered by the Internet, and therefore by online trade. The purpose of this article is to understand and know the buying habits of online customers. The main variables included in the analysis are the following: (1) type of customer, (2) customers' residency, (3) the day of the online order, (4) the time interval/time frame when the order was placed, (5) the brands ordered, and (6) the average value of orders.

  19. High Variability Is a Defining Component of Mediterranean-Climate Rivers and Their Biota

    Directory of Open Access Journals (Sweden)

    Núria Cid

    2017-01-01

    Full Text Available Variability in flow as a result of seasonal precipitation patterns is a defining element of streams and rivers in Mediterranean-climate regions of the world and strongly influences the biota of these unique systems. Mediterranean-climate areas include the Mediterranean Basin and parts of Australia, California, Chile, and South Africa. Mediterranean streams and rivers can range from wet winters with consequent floods to severe droughts, during which intermittency can occur in otherwise perennial systems. Inter-annual variation in precipitation can include multi-year droughts or consecutive wet years. Spatial variation in patterns of precipitation (rain vs. snow), combined with topographic variability, leads to spatial variability in hydrologic patterns that influence populations and communities. Mediterranean streams and rivers are global biodiversity hotspots and are particularly vulnerable to human impacts. Biomonitoring, conservation efforts, and management responses to climate change require approaches that account for spatial and temporal variability (including both intra- and inter-annual). The importance of long-term data sets for understanding and managing these systems highlights the need for sustained and coordinated research efforts in Mediterranean-climate streams and rivers.

  20. Process Model for Defining Space Sensing and Situational Awareness Requirements

    Science.gov (United States)

    2006-04-01

    A process model for defining systems for space sensing and space situational awareness is presented. The paper concentrates on eight steps for determining the requirements, including: decision maker needs, system requirements, exploitation methods and vulnerabilities, critical capabilities, and identification of attack scenarios. Utilization of the USAF anti-tamper (AT) implementation process as a process model departure point for the space sensing and situational awareness (SSSA…) is presented. The AT implementation process model, as an…

  1. Concomitant variables in finite mixture models

    NARCIS (Netherlands)

    Wedel, M

    The standard mixture model, the concomitant variable mixture model, the mixture regression model and the concomitant variable mixture regression model all enable simultaneous identification and description of groups of observations. This study reviews the different ways in which dependencies among
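    The concomitant variable idea can be written compactly (generic notation, not necessarily the paper's): the mixing proportions depend on concomitant covariates $z_i$ through a multinomial logit,

    $$f(y_i \mid x_i, z_i) = \sum_{k=1}^{K} \pi_k(z_i)\, f_k(y_i \mid x_i, \theta_k), \qquad \pi_k(z_i) = \frac{\exp(\gamma_k^{\top} z_i)}{\sum_{l=1}^{K}\exp(\gamma_l^{\top} z_i)},$$

    so that groups are simultaneously identified and described by the concomitant variables; taking $f_k$ to be a regression density gives the concomitant variable mixture regression model.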

  2. Optimization of the Actuarial Model of Defined Contribution Pension Plan

    Directory of Open Access Journals (Sweden)

    Yan Li

    2014-01-01

    Full Text Available The paper focuses on the actuarial models of defined contribution pension plans. Through assumptions and calculations, the expected replacement ratios of three different defined contribution pension plans are compared. In particular, the more significant factors are put forward in the further cost and risk analyses. In order to assess current status, the paper finds a relationship between the replacement ratio and the pension investment rate using an econometric method. Based on an appropriate investment rate of 6%, an expected replacement ratio of 20% is reached.

  3. Human vascular model with defined stimulation medium - a characterization study.

    Science.gov (United States)

    Huttala, Outi; Vuorenpää, Hanna; Toimela, Tarja; Uotila, Jukka; Kuokkanen, Hannu; Ylikomi, Timo; Sarkanen, Jertta-Riina; Heinonen, Tuula

    2015-01-01

    The formation of blood vessels is a vital process in embryonic development and in normal physiology. Current vascular modelling is mainly based on animal biology, leading to species-to-species variation when extrapolating the results to humans. Although a few human cell based vascular models are available, these assays are insufficiently characterized in terms of culture conditions and the developmental stage of vascular structures. Therefore, well characterized vascular models with human relevance are needed for basic research, embryotoxicity testing, development of therapeutic strategies and for tissue engineering. We have previously shown that the in vitro vascular model based on co-culture of human adipose stromal cells (hASC) and human umbilical vein endothelial cells (HUVEC) is able to induce an extensive vascular-like network with high reproducibility. In this work we developed a defined serum-free vascular stimulation medium (VSM) and performed further characterization in terms of cell identity, maturation and structure to obtain a thoroughly characterized in vitro vascular model to replace or reduce corresponding animal experiments. The results showed that the novel vascular stimulation medium induced an intact and evenly distributed vascular-like network with the morphology of mature vessels. Electron microscopic analysis confirmed the three-dimensional microstructure of the network containing lumen. Additionally, elevated expressions of the main human angiogenesis-related genes were detected. In conclusion, with the new defined medium the vascular model can be utilized as a characterized test system for chemical testing as well as in creating vascularized tissue models.

  4. Defining the Sudden Stratospheric Warming in Climate Models

    Science.gov (United States)

    Kim, J.; Son, S. W.; Gerber, E. P.; Park, H. S.

    2016-12-01

    A sudden stratospheric warming (SSW) is defined by the World Meteorological Organization (WMO) as a zonal-mean zonal wind reversal at 10 hPa and 60°N, associated with a reversal of the climatological temperature gradient at this elevation. This wind criterion in particular has been applied to reanalysis data and climate model output during the last few decades. In the present study, it is shown that the application of this definition to models can be affected by model mean biases; i.e., more frequent SSW appears to occur in models with a weaker climatological polar vortex. In order to overcome this deficiency, a tendency-based definition, which is not sensitive to the model mean bias, is proposed and applied to the multi-model data sets archived for the Coupled Model Intercomparison Project phase 5 (CMIP5). In this definition, SSW-like events are defined by sufficiently strong vortex deceleration. This approach removes a linear relationship between the SSW frequency and intensity of climatological polar vortex for both the low-top and high-top CMIP5 models. Instead, the resulting SSW frequency is strongly correlated with wave activity at 100 hPa. The two definitions detect quantitatively different SSW in terms of lower stratospheric wave activity and downward propagation of stratospheric anomalies to the troposphere. However, in both definitions, the high-top models generally exhibit more frequent SSW than the low-top models. Moreover, a hint of more frequent SSW in a warm climate is commonly found.
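    A rough sketch of the two detection criteria contrasted above (the thresholds and the synthetic wind series are illustrative assumptions; the paper's tendency criterion is calibrated differently):

    ```python
    import numpy as np

    def ssw_wmo(u10_60n):
        """WMO-style definition: zonal-mean zonal wind at 10 hPa, 60N
        turns easterly (u < 0); sensitive to the climatological mean bias."""
        return np.where(u10_60n < 0.0)[0]

    def ssw_tendency(u10_60n, decel=-5.0, window=5):
        """Tendency-based definition: the vortex decelerates by more than
        |decel| m/s over `window` days, regardless of the mean state."""
        du = u10_60n[window:] - u10_60n[:-window]
        return np.where(du < decel)[0] + window

    days = np.arange(120)
    u = 25.0 - 0.2 * days + 10.0 * np.sin(days / 7.0)   # synthetic winter wind
    print("reversal days:", ssw_wmo(u)[:5])
    print("rapid-deceleration days:", ssw_tendency(u)[:5])
    ```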

  5. Process analysis and optimization models defining recultivation surface mines

    Directory of Open Access Journals (Sweden)

    Dimitrijević Bojan V.

    2015-01-01

    Full Text Available Surface mines are generally open and very dynamic systems influenced by a large number of technical, economic, environmental and safety factors and limitations in all stages of the life cycle. In this paper, the dynamic alignment of surface mining phases with reclamation periods is considered. An analysis of the reclamation of surface mines and of the process flow of reclamation project management is used to determine a principled reclamation management model. The analysis of the planning process then defines an optimization model for surface mine recultivation.

  6. Phytoplankton community structure defined by key environmental variables in Tagus estuary, Portugal.

    Science.gov (United States)

    Brogueira, Maria José; Oliveira, Maria do Rosário; Cabeçadas, Graça

    2007-12-01

    In this work, we analyze environmental (physical and chemical) and biological (phytoplankton) data obtained along the Tagus estuary during three surveys carried out in the productive period (May/June/July) at ebb tide. The main objective of this study was to identify the key environmental factors affecting phytoplankton structure in the estuary. BIOENV analysis revealed that, in the study period, temperature, salinity, silicate and total phosphorus were the variables that best explained the phytoplankton spatial pattern in the estuary (Spearman correlation, rho=0.803). A generalized linear model (GLM) also identified salinity, silicate and phosphate as having a high explanatory power (63%) for phytoplankton abundance. These selected nutrients appear to be consistent with the requirements of the dominant phytoplankton group, Bacillariophyceae. Apparently, the phytoplankton community is adapted to fluctuations in light intensity, as suspended particulate matter did not come out as a key factor in shaping phytoplankton structure along the Tagus estuary.
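    The BIOENV procedure mentioned above can be sketched as an exhaustive subset search maximizing the Spearman rank correlation between environmental and community distance matrices (a simplified illustration on synthetic data; variable names follow the abstract):

    ```python
    import numpy as np
    from itertools import combinations
    from scipy.spatial.distance import pdist
    from scipy.stats import spearmanr

    def bioenv(env, community_dist, names):
        """Return the variable subset whose Euclidean distance matrix best
        rank-correlates with the community dissimilararity matrix."""
        best = (None, -1.0)
        for r in range(1, env.shape[1] + 1):
            for subset in combinations(range(env.shape[1]), r):
                rho, _ = spearmanr(pdist(env[:, subset]), community_dist)
                if rho > best[1]:
                    best = ([names[j] for j in subset], rho)
        return best

    rng = np.random.default_rng(1)
    env = rng.normal(size=(20, 4))
    community = pdist(env[:, :2] + 0.1 * rng.normal(size=(20, 2)))
    print(bioenv(env, community, ["temperature", "salinity", "silicate", "total_P"]))
    ```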

  7. Rainfall variability modelling in Rwanda

    Science.gov (United States)

    Nduwayezu, E.; Kanevski, M.; Jaboyedoff, M.

    2012-04-01

    Support to climate change adaptation is a priority in many International Organisations meetings. But is the international approach for adaptation appropriate with field reality in developing countries? In Rwanda, the main problems will be heavy rain and/or long dry season. Four rainfall seasons have been identified, corresponding to the four thermal Earth ones in the south hemisphere: the normal season (summer), the rainy season (autumn), the dry season (winter) and the normo-rainy season (spring). The spatial rainfall decreasing from West to East, especially in October (spring) and February (summer) suggests an «Atlantic monsoon influence» while the homogeneous spatial rainfall distribution suggests an «Inter-tropical front » mechanism. The torrential rainfall that occurs every year in Rwanda disturbs the circulation for many days, damages the houses and, more seriously, causes heavy losses of people. All districts are affected by bad weather (heavy rain) but the costs of such events are the highest in mountains districts. The objective of the current research is to proceed to an evaluation of the potential rainfall risk by applying advanced geospatial modelling tools in Rwanda: geostatistical predictions and simulations, machine learning algorithm (different types of neural networks) and GIS. The research will include rainfalls variability mapping and probabilistic analyses of extreme events.

  8. Limited dependent variable models for panel data

    NARCIS (Netherlands)

    Charlier, E.

    1997-01-01

    Many economic phenomena require limited dependent variable models for an appropriate treatment. In addition, panel data models allow the inclusion of unobserved individual-specific effects. These models are combined in this thesis. Distributional assumptions in the limited dependent variable models are…
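    A canonical example of the combination described (shown for orientation, not as the thesis's exact specification) is the panel Tobit form:

    $$y_{it}^{*} = x_{it}^{\top}\beta + \alpha_i + \varepsilon_{it}, \qquad y_{it} = \max(0,\, y_{it}^{*}),$$

    where $\alpha_i$ is the unobserved individual-specific effect and only the censored $y_{it}$ is observed; the distributional assumptions on $\varepsilon_{it}$ (and on $\alpha_i$) are what the abstract refers to.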

  9. BEYOND SEM: GENERAL LATENT VARIABLE MODELING

    National Research Council Canada - National Science Library

    Muthén, Bengt O

    2002-01-01

    This article gives an overview of statistical analysis with latent variables. Using traditional structural equation modeling as a starting point, it shows how the idea of latent variables captures a wide variety of statistical concepts...

  10. Cardinality-dependent Variability in Orthogonal Variability Models

    DEFF Research Database (Denmark)

    Mærsk-Møller, Hans Martin; Jørgensen, Bo Nørregaard

    2012-01-01

    During our work on developing and running a software product line for eco-sustainable greenhouse-production software tools, which currently has three product members, we have identified a need for extending the notation of the Orthogonal Variability Model (OVM) to support what we refer to as cardinality-dependent variability…

  11. Towards an ontological model defining the social engineering domain

    CSIR Research Space (South Africa)

    Mouton, F

    2014-08-01

    Full Text Available …information. Although Social Engineering is an important branch of Information Security, the discipline is not well defined; a number of different definitions appear in the literature. Several concepts in the domain of Social Engineering are defined…

  12. Variable Fidelity Aeroelastic Toolkit - Structural Model Project

    Data.gov (United States)

    National Aeronautics and Space Administration — The proposed innovation is a methodology to incorporate variable fidelity structural models into steady and unsteady aeroelastic and aeroservoelastic analyses in...

  13. Gaussian Process Structural Equation Models with Latent Variables

    CERN Document Server

    Silva, Ricardo

    2010-01-01

    In a variety of disciplines such as social sciences, psychology, medicine and economics, the recorded data are considered to be noisy measurements of latent variables connected by some causal structure. This corresponds to a family of graphical models known as the structural equation model with latent variables. While linear non-Gaussian variants have been well-studied, inference in nonparametric structural equation models is still underdeveloped. We introduce a sparse Gaussian process parameterization that defines a non-linear structure connecting latent variables, unlike common formulations of Gaussian process latent variable models. An efficient Markov chain Monte Carlo procedure is described. We evaluate the stability of the sampling procedure and the predictive ability of the model compared against the current practice.
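    The model family can be summarized as follows (generic notation, not the paper's exact formulation): each latent variable is a nonparametric function of its parents in the causal graph, with a GP prior on that function, and a linear measurement model ties latent variables to observed indicators,

    $$z_j = f_j\big(z_{\mathrm{pa}(j)}\big) + \xi_j, \quad f_j \sim \mathcal{GP}(0, k), \qquad x_i = \Lambda z_i + \epsilon_i;$$

    the sparse GP parameterization is what keeps Markov chain Monte Carlo over this structure tractable.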

  14. Handbook of latent variable and related models

    CERN Document Server

    Lee, Sik-Yum

    2011-01-01

    This Handbook covers latent variable models, which are a flexible class of models for modeling multivariate data to explore relationships among observed and latent variables.
    - Covers a wide class of important models
    - Models and statistical methods described provide tools for analyzing a wide spectrum of complicated data
    - Includes illustrative examples with real data sets from business, education, medicine, public health and sociology
    - Demonstrates the use of a wide variety of statistical, computational, and mathematical techniques.

  15. Optical Test of Local Hidden-Variable Model

    Institute of Scientific and Technical Information of China (English)

    WU XiaoHua; ZONG HongShi; PANG HouRong

    2001-01-01

    An inequality is deduced from local realism and a supplementary assumption. This inequality defines an experiment that can be actually performed with present technology to test local hidden-variable models. It is violated by quantum mechanics by a factor of 1.92, while it can be simplified into a form where just two measurements are required.

  16. Bayesian modeling of measurement error in predictor variables using item response theory

    NARCIS (Netherlands)

    Fox, Jean-Paul; Glas, Cees A.W.

    2003-01-01

    It is shown that measurement error in predictor variables can be modeled using item response theory (IRT). The predictor variables, which may be defined at any level of a hierarchical regression model, are treated as latent variables. The normal ogive model is used to describe the relation between…

  17. Efficient family-based model checking via variability abstractions

    DEFF Research Database (Denmark)

    Dimovski, Aleksandar; Al-Sibahi, Ahmad Salim; Brabrand, Claus

    2016-01-01

    Many software systems are variational: they can be configured to meet diverse sets of requirements. They can produce a (potentially huge) number of related systems, known as products or variants, by systematically reusing common parts. For variational models (variational systems or families of related systems), specialized family-based model checking algorithms allow efficient verification of multiple variants, simultaneously, in a single run. These algorithms, implemented in a tool Snip, scale much better than "the brute force" approach, where all individual systems are verified using… variational models using the standard version of (single-system) Spin. The variability abstractions are first defined as Galois connections on semantic domains. We then show how to use them for defining abstract family-based model checking, where a variability model is replaced with an abstract version of it…
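    For orientation, a Galois connection between concrete configuration sets $(C, \subseteq)$ and abstract ones $(A, \sqsubseteq)$ is a pair of monotone maps $\alpha : C \to A$ and $\gamma : A \to C$ such that

    $$\alpha(c) \sqsubseteq a \iff c \subseteq \gamma(a),$$

    which is what guarantees that properties verified on the abstracted variability model soundly transfer back to the concrete family (this is the standard abstract interpretation definition; the paper's specific semantic domains are not reproduced here).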

  18. Experimental falsification of Leggett's nonlocal variable model.

    Science.gov (United States)

    Branciard, Cyril; Ling, Alexander; Gisin, Nicolas; Kurtsiefer, Christian; Lamas-Linares, Antia; Scarani, Valerio

    2007-11-23

    Bell's theorem guarantees that no model based on local variables can reproduce quantum correlations. Also, some models based on nonlocal variables, if subject to apparently "reasonable" constraints, may fail to reproduce quantum physics. In this Letter, we introduce a family of inequalities, which use a finite number of measurement settings, and which therefore allow testing Leggett's nonlocal model versus quantum physics. Our experimental data falsify Leggett's model and are in agreement with quantum predictions.

  19. Variable impact on mortality of AIDS-defining events diagnosed during combination antiretroviral therapy

    DEFF Research Database (Denmark)

    Mocroft, Amanda; Sterne, Jonathan A C; Egger, Matthias

    2009-01-01

    BACKGROUND: The extent to which mortality differs following individual acquired immunodeficiency syndrome (AIDS)-defining events (ADEs) has not been assessed among patients initiating combination antiretroviral therapy. METHODS: We analyzed data from 31,620 patients with no prior ADEs who started...

  20. Defining Requirements and Applying Information Modeling for Protecting Enterprise Assets

    Science.gov (United States)

    Fortier, Stephen C.; Volk, Jennifer H.

    The advent of terrorist threats has heightened local, regional, and national governments' interest in emergency response and disaster preparedness. The threat of natural disasters also challenges emergency responders to act swiftly and in a coordinated fashion. When a disaster occurs, an ad hoc coalition of pre-planned groups usually forms to respond to the incident. History has shown that these “system of systems” do not interoperate very well. Communications between fire, police and rescue components either do not work or are inefficient. Government agencies, non-governmental organizations (NGOs), and private industry use a wide array of software platforms for managing data about emergency conditions, resources and response activities. Most of these are stand-alone systems with very limited capability for data sharing with other agencies or other levels of government. Information technology advances have facilitated the movement towards an integrated and coordinated approach to emergency management. Other communication mechanisms, such as video teleconferencing, digital television and radio broadcasting, are being utilized to combat the challenges of emergency information exchange. Recent disasters, such as Hurricane Katrina and the tsunami in Indonesia, have illuminated the weaknesses in emergency response. This paper will discuss the need for defining requirements for components of ad hoc coalitions which are formed to respond to disasters. A goal of our effort was to develop a proof of concept that applying information modeling to the business processes used to protect and mitigate potential loss of an enterprise was feasible. These activities would be modeled both pre- and post-incident.

  1. Decision variables analysis for structured modeling

    Institute of Scientific and Technical Information of China (English)

    潘启树; 赫东波; 张洁; 胡运权

    2002-01-01

    Structured modeling is the most commonly used modeling method, but it is not very adaptive to significant changes in environmental conditions. Therefore, Decision Variables Analysis (DVA), a new modeling method, is proposed to deal with linear programming modeling and changing environments. In variant linear programming, the most complicated relationships are those among decision variables. DVA classifies the decision variables into different levels using different index sets, and divides a model into different elements so that any change can only have its effect on part of the whole model. DVA takes into consideration the complicated relationships among decision variables at different levels, and can therefore successfully solve any modeling problem in dramatically changing environments.

  2. Generalized latent variable modeling multilevel, longitudinal, and structural equation models

    CERN Document Server

    Skrondal, Anders

    2004-01-01

    METHODOLOGY
    THE OMNI-PRESENCE OF LATENT VARIABLES: Introduction; 'True' variable measured with error; Hypothetical constructs; Unobserved heterogeneity; Missing values and counterfactuals; Latent responses; Generating flexible distributions; Combining information; Summary
    MODELING DIFFERENT RESPONSE PROCESSES: Introduction; Generalized linear models; Extensions of generalized linear models; Latent response formulation; Modeling durations or survival; Summary and further reading
    CLASSICAL LATENT VARIABLE MODELS: Introduction; Multilevel regression models; Factor models and item respons…

  3. Regions of variability for a class of analytic and locally univalent functions defined by subordination

    Indian Academy of Sciences (India)

    Bappaditya Bhowmik

    2015-11-01

    In this article, we consider a family $\mathcal{C}(A,B)$ of analytic and locally univalent functions on the open unit disc $\mathbb{D} = \{z : |z| < 1\}$ in the complex plane that properly contains the well-known Janowski class of convex univalent functions. We determine the exact set of variability of $\log(f'(z_0))$ with fixed $z_0 \in \mathbb{D}$ and $f''(0)$ whenever $f$ varies over the class $\mathcal{C}(A,B)$.
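    For context, Janowski-type classes of this kind are commonly defined by subordination (this is the standard form; the paper's exact normalization may differ): for $-1 \le B < A \le 1$, $f \in \mathcal{C}(A,B)$ when

    $$1 + \frac{z f''(z)}{f'(z)} \prec \frac{1+Az}{1+Bz}, \qquad z \in \mathbb{D},$$

    where $\prec$ denotes subordination; the region-of-variability problem then asks for the exact image of $\log(f'(z_0))$ under these constraints.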

  4. Defining autism: variability in state education agency definitions of and evaluations for autism spectrum disorders.

    Science.gov (United States)

    Pennington, Malinda L; Cullinan, Douglas; Southern, Louise B

    2014-01-01

    In light of the steady rise in the prevalence of students with autism, this study examined the definition of autism published by state education agencies (SEAs), as well as SEA-indicated evaluation procedures for determining student qualification for autism. We compared components of each SEA definition to aspects of autism from two authoritative sources: Diagnostic and Statistical Manual of Mental Disorders (DSM-IV-TR) and Individuals with Disabilities Education Improvement Act (IDEA-2004). We also compared SEA-indicated evaluation procedures across SEAs to evaluation procedures noted in IDEA-2004. Results indicated that many more SEA definitions incorporate IDEA-2004 features than DSM-IV-TR features. However, despite similar foundations, SEA definitions of autism displayed considerable variability. Evaluation procedures were found to vary even more across SEAs. Moreover, within any particular SEA there often was little concordance between the definition (what autism is) and evaluation procedures (how autism is recognized). Recommendations for state and federal policy changes are discussed.

  5. Gas permeation measurement under defined humidity via constant volume/variable pressure method

    KAUST Repository

    Jan Roman, Pauls

    2012-02-01

    Many industrial gas separations in which membrane processes are feasible entail high water vapour contents, as in CO2 separation from flue gas in carbon capture and storage (CCS), or in biogas/natural gas processing. Studying the effect of water vapour on gas permeability through polymeric membranes is essential for materials design and optimization of these membrane applications. In particular, for amine-based CO2-selective facilitated transport membranes, water vapour is necessary for carrier-complex formation (Matsuyama et al., 1996; Deng and Hägg, 2010; Liu et al., 2008; Shishatskiy et al., 2010) [1-4]. Conventional polymeric membrane materials can also vary in their permeation behaviour due to water-induced swelling (Potreck, 2009) [5]. Here we describe a simple approach to gas permeability measurement in the presence of water vapour, in the form of a modified constant volume/variable pressure method (pressure increase method). © 2011 Elsevier B.V.
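    The working equation of the constant volume/variable pressure method is worth recalling (standard form, with generic symbols; not the paper's modified apparatus equations): permeability follows from the steady-state pressure-increase slope in the fixed permeate volume,

    $$P = \frac{V_p\, l}{A\, R\, T\, \Delta p} \cdot \frac{dp_p}{dt},$$

    with permeate volume $V_p$, membrane thickness $l$ and area $A$, temperature $T$, and partial-pressure difference $\Delta p$ of the penetrant; running the feed at a defined relative humidity leaves this relation unchanged provided $\Delta p$ is computed for the gas of interest.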

  6. Random Effect and Latent Variable Model Selection

    CERN Document Server

    Dunson, David B

    2008-01-01

    Presents various methods for accommodating model uncertainty in random effects and latent variable models. This book focuses on frequentist likelihood ratio and score tests for zero variance components. It also focuses on Bayesian methods for random effects selection in linear mixed effects and generalized linear mixed models

  7. Sampling Weights in Latent Variable Modeling

    Science.gov (United States)

    Asparouhov, Tihomir

    2005-01-01

    This article reviews several basic statistical tools needed for modeling data with sampling weights that are implemented in Mplus Version 3. These tools are illustrated in simulation studies for several latent variable models including factor analysis with continuous and categorical indicators, latent class analysis, and growth models. The…

  8. Mapping and defining sources of variability in bioavailable strontium isotope ratios in the Eastern Mediterranean

    Science.gov (United States)

    Hartman, Gideon; Richards, Mike

    2014-02-01

    The relative contributions of bedrock and atmospheric sources to bioavailable strontium (Sr) pools in local soils were studied in Northern Israel and the Golan regions through intensive systematic sampling of modern plants and invertebrates, to produce a map of modern bioavailable strontium isotope ratios (87Sr/86Sr) for regional reconstructions of human and animal mobility patterns. The study investigates sources of variability in bioavailable 87Sr/86Sr ratios, in particular the intra- and inter-site range of variation in plant 87Sr/86Sr ratios, the range of 87Sr/86Sr ratios of plants growing on marine sedimentary versus volcanic geologies, the differences between ligneous and non-ligneous plants with varying growth and water utilization strategies, and the relative contribution of atmospheric Sr sources from different soil and vegetation types and climatic zones. Results indicate predictable variation in 87Sr/86Sr ratios. Inter- and intra-site differences in bioavailable 87Sr/86Sr ratios average 0.00025, while the 87Sr/86Sr ratios measured regionally in plants and invertebrates range from 0.7090 in Pleistocene calcareous sandstone to 0.7074 in mid-Pleistocene volcanic pyroclast. The 87Sr/86Sr ratios measured in plants growing on volcanic bedrock show time-dependent increases in atmospheric deposition relative to bedrock weathering. The 87Sr/86Sr ratios measured in plants growing on rendzina soils depend on precipitation. The spacing between bedrock 87Sr/86Sr ratios and those of plants is largest in wet conditions and decreases in dry conditions. The 87Sr/86Sr ratios measured in plants growing on terra rossa soils are relatively constant (0.7085) regardless of precipitation. Ligneous plants are typically closer to bedrock 87Sr/86Sr ratios than non-ligneous plants. Since the bioavailable 87Sr/86Sr ratios currently measured in the region reflect a mix of both exogenous and endogenous sources, changes in the relative contribution of exogenous sources can cause variation
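    The endmember logic behind such maps can be made explicit with the standard concentration-weighted mixing relation (a general identity, not a result specific to this study):

    $$\left(\frac{^{87}\mathrm{Sr}}{^{86}\mathrm{Sr}}\right)_{\mathrm{mix}} = \frac{\sum_i f_i\, C_i\, R_i}{\sum_i f_i\, C_i},$$

    where each source $i$ contributes a mass fraction $f_i$ with Sr concentration $C_i$ and isotope ratio $R_i$. Because the ratio is weighted by Sr concentration, a Sr-poor bedrock (such as the volcanics) is more easily overprinted by atmospheric inputs, consistent with the abstract's findings.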

  9. A Model for Positively Correlated Count Variables

    DEFF Research Database (Denmark)

    Møller, Jesper; Rubak, Ege Holger

    2010-01-01

    An α-permanental random field is, briefly speaking, a model for a collection of non-negative integer valued random variables with positive associations. Though such models possess many appealing probabilistic properties, many statisticians seem unaware of α-permanental random fields and their poten…

  10. Defining Autism: Variability in State Education Agency Definitions of and Evaluations for Autism Spectrum Disorders

    Directory of Open Access Journals (Sweden)

    Malinda L. Pennington

    2014-01-01

    Full Text Available In light of the steady rise in the prevalence of students with autism, this study examined the definition of autism published by state education agencies (SEAs), as well as SEA-indicated evaluation procedures for determining student qualification for autism. We compared components of each SEA definition to aspects of autism from two authoritative sources: Diagnostic and Statistical Manual of Mental Disorders (DSM-IV-TR) and Individuals with Disabilities Education Improvement Act (IDEA-2004). We also compared SEA-indicated evaluation procedures across SEAs to evaluation procedures noted in IDEA-2004. Results indicated that many more SEA definitions incorporate IDEA-2004 features than DSM-IV-TR features. However, despite similar foundations, SEA definitions of autism displayed considerable variability. Evaluation procedures were found to vary even more across SEAs. Moreover, within any particular SEA there often was little concordance between the definition (what autism is) and evaluation procedures (how autism is recognized). Recommendations for state and federal policy changes are discussed.

  11. 47 CFR 76.1904 - Encoding rules for defined business models.

    Science.gov (United States)

    2010-10-01

    ... 47 Telecommunication 4 2010-10-01 2010-10-01 false Encoding rules for defined business models. 76... defined business models. (a) Commercial audiovisual content delivered as unencrypted broadcast television... the Commission pursuant to a petition with respect to a defined business model other than unencrypted...

  12. Integrating models that depend on variable data

    Science.gov (United States)

    Banks, A. T.; Hill, M. C.

    2016-12-01

    Models of human-Earth systems are often developed with the goal of predicting the behavior of one or more dependent variables from multiple independent variables, processes, and parameters. Often dependent variable values range over many orders of magnitude, which complicates evaluation of the fit of the dependent variable values to observations. Many metrics and optimization methods have been proposed to address dependent variable variability, with little consensus being achieved. In this work, we evaluate two such methods: log transformation (based on the dependent variable being log-normally distributed with a constant variance) and error-based weighting (based on a multi-normal distribution with variances that tend to increase as the dependent variable value increases). Error-based weighting has the advantage of encouraging model users to carefully consider data errors, such as measurement and epistemic errors, while log-transformations can be a black box for typical users. Placing the log-transformation into the statistical perspective of error-based weighting has not formerly been considered, to the best of our knowledge. To make the evaluation as clear and reproducible as possible, we use multiple linear regression (MLR). Simulations are conducted with MATLAB. The example represents stream transport of nitrogen with up to eight independent variables. The single dependent variable in our example has values that range over 4 orders of magnitude. Results are applicable to any problem for which individual or multiple data types produce a large range of dependent variable values. For this problem, the log transformation produced good model fit, while some formulations of error-based weighting worked poorly. Results support previous suggestions that error-based weighting derived from a constant coefficient of variation overemphasizes low values and degrades model fit to high values. Applying larger weights to the high values is inconsistent with the log…
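    A sketch comparing the two treatments discussed above on data whose dependent variable spans several orders of magnitude (synthetic data and a simple linear design are assumptions here; the nitrogen-transport example itself is not reproduced):

    ```python
    import numpy as np

    rng = np.random.default_rng(2)
    x = rng.uniform(0, 4, size=200)
    y_true = 10.0 ** x                              # spans 4 orders of magnitude
    y = y_true * (1 + 0.1 * rng.normal(size=200))   # ~constant coefficient of variation
    X = np.column_stack([np.ones_like(x), x])

    # 1) Log transformation: ordinary least squares on log10(y).
    beta_log, *_ = np.linalg.lstsq(X, np.log10(y), rcond=None)

    # 2) Error-based weighting: weights 1/sigma_i^2 with sigma_i = 0.1 * y_i,
    #    i.e. the constant-coefficient-of-variation case the text cautions about.
    w = 1.0 / (0.1 * y) ** 2
    sw = np.sqrt(w)
    beta_wls, *_ = np.linalg.lstsq(sw[:, None] * X, sw * y, rcond=None)

    print("log-OLS slope (log scale):", beta_log[1])      # recovers ~1.0
    print("CV-weighted slope (linear scale):", beta_wls[1])
    ```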

  13. Using structural equation modeling to investigate relationships among ecological variables

    Science.gov (United States)

    Malaeb, Z.A.; Kevin, Summers J.; Pugesek, B.H.

    2000-01-01

    Structural equation modeling is an advanced multivariate statistical process with which a researcher can construct theoretical concepts, test their measurement reliability, hypothesize and test a theory about their relationships, take into account measurement errors, and consider both direct and indirect effects of variables on one another. Latent variables are theoretical concepts that unite phenomena under a single term, e.g., ecosystem health, environmental condition, and pollution (Bollen, 1989). Latent variables are not measured directly but can be expressed in terms of one or more directly measurable variables called indicators. For some researchers, defining, constructing, and examining the validity of latent variables may be an end in itself. For others, testing hypothesized relationships among latent variables may be of interest. We analyzed the correlation matrix of eleven environmental variables from the U.S. Environmental Protection Agency's (USEPA) Environmental Monitoring and Assessment Program for Estuaries (EMAP-E) using methods of structural equation modeling. We hypothesized and tested a conceptual model to characterize the interdependencies between four latent variables: sediment contamination, natural variability, biodiversity, and growth potential. In particular, we were interested in measuring the direct, indirect, and total effects of sediment contamination and natural variability on biodiversity and growth potential. The model fit the data well and accounted for 81% of the variability in biodiversity and 69% of the variability in growth potential. It revealed a positive total effect of natural variability on growth potential that otherwise would have been judged negative had we not considered indirect effects. That is, natural variability had a negative direct effect on growth potential of magnitude -0.3251 and a positive indirect effect mediated through biodiversity of magnitude 0.4509, yielding a net positive total effect of 0…
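    The decomposition quoted here is the standard path-analysis identity, total effect = direct effect + indirect effect, which with the reported values gives

    $$-0.3251 + 0.4509 = 0.1258,$$

    i.e., the net positive total effect of natural variability on growth potential.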

  14. Conceptual model for assessment of inhalation exposure: Defining modifying factors

    NARCIS (Netherlands)

    Tielemans, E.; Schneider, T.; Goede, H.; Tischer, M.; Warren, N.; Kromhout, H.; Tongeren, M. van; Hemmen, J. van; Cherrie, J.W.

    2008-01-01

    The present paper proposes a source-receptor model to schematically describe inhalation exposure to help understand the complex processes leading to inhalation of hazardous substances. The model considers a stepwise transfer of a contaminant from the source to the receptor. The conceptual model is c

  15. Nutritional models for space travel from chemically defined diets

    Science.gov (United States)

    Dufour, P. A.

    1984-01-01

    Human nutritional requirements are summarized, including recommended daily intake and maximum safe chronic intake of nutrients. The biomedical literature on various types of chemically defined diets (CDD's), which are liquid, formulated diets for enteral and total parenteral nutrition, is reviewed. The chemical forms of the nutrients in CDD's are detailed, and the compositions and sources of representative commercial CDD's are tabulated. Reported effects of CDD's in medical patients, healthy volunteers, and laboratory animals are discussed. The effects include gastrointestinal side effects, metabolic imbalances, nutrient deficiencies and excesses, and psychological problems. Dietary factors contributing to the side effects are examined. Certain human nutrient requirements have been specified more precisely as a result of long-term use of CDD's, and related studies are included. CDD's are the most restricted yet nutritionally complete diets available.

  16. Gait variability: methods, modeling and meaning

    Directory of Open Access Journals (Sweden)

    Hausdorff Jeffrey M

    2005-07-01

    Full Text Available The study of gait variability, the stride-to-stride fluctuations in walking, offers a complementary way of quantifying locomotion and its changes with aging and disease as well as a means of monitoring the effects of therapeutic interventions and rehabilitation. Previous work has suggested that measures of gait variability may be more closely related to falls, a serious consequence of many gait disorders, than are measures based on the mean values of other walking parameters. The current JNER series presents nine reports on the results of recent investigations into gait variability. One novel method for collecting unconstrained, ambulatory data is reviewed, and a primer on analysis methods is presented along with a heuristic approach to summarizing variability measures. In addition, the first studies of gait variability in animal models of neurodegenerative disease are described, as is a mathematical model of human walking that characterizes certain complex (multifractal) features of the motor control's pattern generator. Another investigation demonstrates that, whereas both healthy older controls and patients with a higher-level gait disorder walk more slowly in reduced lighting, only the latter's stride variability increases. Studies of the effects of dual tasks suggest that the regulation of the stride-to-stride fluctuations in stride width and stride time may be influenced by attention loading and may require cognitive input. Finally, a report of gait variability in over 500 subjects, probably the largest study of this kind, suggests how step width variability may relate to fall risk. Together, these studies provide new insights into the factors that regulate the stride-to-stride fluctuations in walking and pave the way for expanded research into the control of gait and the practical application of measures of gait variability in the clinical setting.

  17. Cartographic Modeling: Computer-assisted Analysis of Spatially Defined Neighborhoods

    Science.gov (United States)

    Berry, J. K.; Tomlin, C. D.

    1982-01-01

    Cartographic models addressing a wide variety of applications are composed of fundamental map processing operations. These primitive operations are neither data base nor application-specific. By organizing the set of operations into a mathematical-like structure, the basis for a generalized cartographic modeling framework can be developed. Among the major classes of primitive operations are those associated with reclassifying map categories, overlaying maps, determining distance and connectivity, and characterizing cartographic neighborhoods. The conceptual framework of cartographic modeling is established and techniques for characterizing neighborhoods are used as a means of demonstrating some of the more sophisticated procedures of computer-assisted map analysis. A cartographic model for assessing effective roundwood supply is briefly described as an example of a computer analysis. Most of the techniques described have been implemented as part of the map analysis package developed at the Yale School of Forestry and Environmental Studies.

  18. Gaussian mixture model of heart rate variability.

    Directory of Open Access Journals (Sweden)

    Tommaso Costa

    Heart rate variability (HRV) is an important measure of the sympathetic and parasympathetic functions of the autonomic nervous system and a key indicator of cardiovascular condition. This paper proposes a novel method to investigate HRV, namely by modelling it as a linear combination of Gaussians. Results show that three Gaussians are enough to describe the stationary statistics of heart rate variability and to provide a straightforward interpretation of the HRV power spectrum. Comparisons were also made with synthetic data generated from different physiologically based models, supporting the plausibility of the Gaussian mixture parameters.
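    As an illustration of the record's approach, the sketch below fits a three-component Gaussian mixture to synthetic RR-interval data with scikit-learn; the data, component count, and regime labels are assumptions for the example, not values from the paper.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)
# Synthetic stand-in for RR-interval data (seconds): three overlapping regimes.
rr = np.concatenate([
    rng.normal(0.80, 0.03, 2000),   # resting regime
    rng.normal(0.70, 0.05, 1000),   # mild load
    rng.normal(0.95, 0.04, 500),    # deep-breathing epochs
]).reshape(-1, 1)

gmm = GaussianMixture(n_components=3, random_state=0).fit(rr)
for w, mu, var in zip(gmm.weights_, gmm.means_.ravel(), gmm.covariances_.ravel()):
    print(f"weight={w:.2f}  mean={mu * 1000:.0f} ms  sd={np.sqrt(var) * 1000:.0f} ms")
```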

  19. Bayesian variable selection for latent class models.

    Science.gov (United States)

    Ghosh, Joyee; Herring, Amy H; Siega-Riz, Anna Maria

    2011-09-01

    In this article, we develop a latent class model with class probabilities that depend on subject-specific covariates. One of our major goals is to identify important predictors of latent classes. We consider methodology that allows estimation of latent classes while allowing for variable selection uncertainty. We propose a Bayesian variable selection approach and implement a stochastic search Gibbs sampler for posterior computation to obtain model-averaged estimates of quantities of interest such as marginal inclusion probabilities of predictors. Our methods are illustrated through simulation studies and application to data on weight gain during pregnancy, where it is of interest to identify important predictors of latent weight gain classes.

  20. Defining and implementing a model for pharmacy resident research projects

    Directory of Open Access Journals (Sweden)

    Dick TB

    2015-09-01

    Objective: To describe a standard approach to provide a support structure for pharmacy resident research that emphasizes self-identification of a residency research project. Methods: A subcommittee of the residency advisory committee was formed at our institution. The committee initially comprised 2 clinical pharmacy specialists, 1 drug information pharmacist, and 2 pharmacy administrators. The committee developed research guidelines that are distributed to residents prior to the start of the residency and that detail the research process, important deadlines, and available resources. Instructions for institutional review board (IRB) training and deadlines for various assignments and presentations throughout the residency year are clearly defined. Residents conceive their own research projects, and emphasis is placed on completing assignments early in the residency year. Results: In the 4 years this research process has been in place, 15 of 16 (94%) residents successfully identified their own research question. All 15 residents submitted a complete research protocol to the IRB by the August deadline. Four residents have presented the results of their research at multi-disciplinary national professional meetings, and 1 has published a manuscript. Feedback from outgoing residents has been positive overall, and their perceptions of their research projects and the process are positive. Conclusion: Pharmacy residents selecting their own research projects for their residency year is a feasible alternative to assigning research projects or providing lists of projects from which to select.

  1. Racing to define pharmaceutical R&D external innovation models.

    Science.gov (United States)

    Wang, Liangsu; Plump, Andrew; Ringel, Michael

    2015-03-01

    The pharmaceutical industry continues to face fundamental challenges because of issues with research and development (R&D) productivity and rising customer expectations. To lower R&D costs, move beyond me-too therapies, and create more transformative portfolios, pharmaceutical companies are actively capitalizing on external innovation through precompetitive collaboration with academia, cultivation of biotech start-ups, and proactive licensing and acquisitions. Here, we review the varying innovation strategies used by pharmaceutical companies, compare and contrast these models, and identify the trends in external innovation. We also discuss factors that influence these external innovation models and propose a preliminary set of metrics that could be used as leading indicators of success.

  2. On the Use of Variability Operations in the V-Modell XT Software Process Line

    DEFF Research Database (Denmark)

    Kuhrmann, Marco; Méndez Fernández, Daniel; Ternité, Thomas

    2016-01-01

    Software process lines provide a systematic approach to develop and manage software processes. A software process line defines a reference process containing general process assets, whereas a well-defined customization approach allows process engineers to create new process variants, e.g., by extending or modifying... In this article, we present a study on the feasibility of variability operations to support the development of software process lines in the context of the V-Modell XT. We analyze which variability operations are defined and practically used. We provide an initial catalog of variability operations as an improvement proposal for other process models. Our findings show that 69 variability operation types are defined across several metamodel versions, of which, however, 25 remain unused. The found variability operations allow for systematically modifying the content of process model elements and the process...

  3. Modeling stakeholder-defined climate risk on the Upper Great Lakes

    Science.gov (United States)

    Moody, Paul; Brown, Casey

    2012-10-01

    Climate change is believed to pose potential risks to the stakeholders of the Great Lakes due to changes in lake levels. This paper presents a model of stakeholder-defined risk as a function of climate change. It describes the development of a statistical model that links water resources system performance and climate changes developed for the Great Lakes of North America. The function is used in a process that links bottom-up water system vulnerability assessment to top-down climate change information. Vulnerabilities are defined based on input from stakeholders and resource experts and are used to determine system performance thresholds. These thresholds are used to measure performance over a wide range of climate changes mined from a large (55,590 year) stochastic data set. The performance and climate conditions are used to create a climate response function, a statistical model to predict lake performance based on climate statistics. This function facilitates exploration and analysis of performance over a wide range of climate conditions. It can also be used to estimate risk associated with change in climate mean and variability resulting from climate change. Problematic changes in climate can be identified and the probability of those conditions estimated using climate projections or other sources of climate information. The function can also be used to evaluate the robustness of a regulation plan and to compare performance of alternate plans. This paper demonstrates the utility of the climate response function as applied within the context of the International Upper Great Lakes Study.
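    A minimal sketch of a climate response function in the spirit of this record: a polynomial regression is trained on hypothetical stress-test results (climate perturbations paired with simulated system performance) and then queried against a stakeholder-defined threshold. All variable names, ranges, and coefficients below are invented for illustration.

```python
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.preprocessing import PolynomialFeatures
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(1)
# Hypothetical stress-test results: each row is (warming in C, precipitation
# scaling, interannual variability multiplier) with a simulated performance score.
X = rng.uniform([-1.0, 0.7, 0.8], [5.0, 1.3, 1.5], size=(500, 3))
perf = (1.0 - 0.08 * X[:, 0] + 0.9 * (X[:, 1] - 1.0) - 0.3 * (X[:, 2] - 1.0)
        + rng.normal(0, 0.02, 500))            # toy response surface

# The "climate response function": performance predicted from climate statistics.
crf = make_pipeline(PolynomialFeatures(degree=2), LinearRegression()).fit(X, perf)

threshold = 0.85                               # stakeholder-defined performance floor
query = np.array([[2.5, 0.95, 1.2]])           # +2.5 C, -5% precip, +20% variability
p = crf.predict(query)[0]
print(f"predicted performance {p:.2f}; fails threshold: {p < threshold}")
```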

  4. Prognostic significance of novel {sup 18}F-FDG PET/CT defined tumour variables in patients with oesophageal cancer

    Energy Technology Data Exchange (ETDEWEB)

    Foley, Kieran G., E-mail: kfoley@doctors.org.uk [Department of Radiology, University Hospital of Wales, Cardiff (United Kingdom); Fielding, Patrick, E-mail: Patrick.Fielding@wales.nhs.uk [Department of Wales Research and Diagnostic Positron Emission Tomography Imaging Centre (PETIC), University Hospital of Wales, Cardiff (United Kingdom); Lewis, Wyn G., E-mail: Wyn.Lewis4@wales.nhs.uk [Department of Surgery, University Hospital of Wales, Cardiff (United Kingdom); Karran, Alex, E-mail: alex_karran@hotmail.co.uk [Department of Surgery, University Hospital of Wales, Cardiff (United Kingdom); Chan, David, E-mail: dcsy23@gmail.com [Department of Surgery, University Hospital of Wales, Cardiff (United Kingdom); Blake, Paul, E-mail: pblake76@yahoo.co.uk [Department of Surgery, University Hospital of Wales, Cardiff (United Kingdom); Roberts, S. Ashley, E-mail: Ashley.Roberts@wales.nhs.uk [Department of Radiology, University Hospital of Wales, Cardiff (United Kingdom)

    2014-07-15

    Purpose: {sup 18}F-fluorodeoxyglucose ({sup 18}F-FDG) positron emission tomography (PET) combined with computed tomography (PET/CT) is now established as a routine staging investigation of oesophageal cancer (OC). The aim of the study was to determine the prognostic significance of PET/CT defined tumour variables including maximum standardised uptake value (SUVmax), tumour length (TL), metastatic length of disease (MLoD), metabolic tumour volume (MTV), total lesion glycolysis (TLG) and total local nodal metastasis count (PET/CT LNMC). Materials and methods: 103 pre-treatment OC patients (76 adenocarcinoma, 25 squamous cell carcinoma, 1 poorly differentiated and 1 neuroendocrine tumour) were staged using PET/CT. The prognostic value of the measured tumour variables was tested using log-rank analysis of the Kaplan–Meier method and Cox's proportional hazards method. The primary outcome measure was survival from diagnosis. Results: Univariate analysis showed all variables to have strong statistical significance in relation to survival. Multivariate analysis demonstrated three variables that were significantly and independently associated with survival: MLoD (HR 1.035, 95% CI 1.008–1.064, p = 0.011), TLG (HR 1.002, 95% CI 1.000–1.003, p = 0.018) and PET/CT LNMC (HR 0.048–0.633, 95% CI 0.005–2.725, p = 0.015). Conclusion: MLoD, TLG, and PET/CT LNMC are important prognostic indicators in OC. This is the first study to demonstrate an independent statistical association between TLG, MLoD and survival by multivariable analysis, and it highlights the value of staging OC patients with PET/CT using functional tumour variables.

  5. Defining a 21st Century Air Force (Services) Business Model

    Science.gov (United States)

    2014-05-10

    recognize purchasing habits and preferences of millennials from a marketing perspective in order to develop a relevant services model. Based on... Millennial shopping habits indicate that youthful and future patrons want more on-line and interactive programs. In a recent world-wide survey... conducted by the company eMarketer, 40 percent of male millennial respondents indicated they would buy everything online if they could.

  6. Modelling variability in hospital bed occupancy.

    Science.gov (United States)

    Harrison, Gary W; Shafer, Andrea; Mackay, Mark

    2005-11-01

    A stochastic version of the Harrison-Millard multistage model of the flow of patients through a hospital division is developed in order to model correctly not only the average but also the variability in occupancy levels, since it is the variability that makes planning difficult and high percent occupancy levels increase the risk of frequent overflows. The model is fit to one year of data from the medical division of an acute care hospital in Adelaide, Australia. Admissions can be modeled as a Poisson process with rates varying by day of the week and by season. Methods are developed to use the entire annual occupancy profile to estimate transition rate parameters when admission rates are not constant and to estimate rate parameters that vary by day of the week and by season, which are necessary for the model variability to be as large as in the data. The final model matches well the mean, standard deviation and autocorrelation function of the occupancy data and also six months of data not used to estimate the parameters. Repeated simulations are used to construct percentiles of the daily occupancy distributions and thus identify ranges of normal fluctuations and those that are substantive deviations from the past, and also to investigate the trade-offs between frequency of overflows and the percent occupancy for both fixed and flexible bed allocations. Larger divisions can achieve more efficient occupancy levels than smaller ones with the same frequency of overflows. Seasonal variations are more significant than day-of-the-week variations and variable discharge rates are more significant than variable admission rates in contributing to overflows.
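    The flavor of the model can be sketched as a stochastic occupancy simulation: Poisson admissions whose rate varies by day of week and season, exponential lengths of stay, and repeated runs to build daily occupancy percentiles. The rates, mean stay, and bed count below are illustrative assumptions, not the fitted Adelaide parameters.

```python
import numpy as np

rng = np.random.default_rng(2)
DAYS, REPS, BEDS = 365, 200, 50

# Hypothetical admission rates by day of week (Mon..Sun) and a winter-peaking
# seasonal factor; both are illustrative, not the estimated hospital values.
dow_rate = np.array([9.0, 8.5, 8.5, 8.0, 8.0, 6.0, 5.5])
season = 1.0 + 0.15 * np.cos(2 * np.pi * np.arange(DAYS) / 365)

occ = np.zeros((REPS, DAYS))
for r in range(REPS):
    stays = np.empty(0)                      # remaining length of stay per patient
    for d in range(DAYS):
        n_adm = rng.poisson(dow_rate[d % 7] * season[d])
        stays = np.concatenate([stays, rng.exponential(5.0, n_adm)])
        occ[r, d] = stays.size               # patients occupying beds today
        stays = stays[stays > 1.0] - 1.0     # discharge completed stays

print("mean occupancy:", occ.mean().round(1))
print("95th percentile of daily occupancy:", np.percentile(occ, 95).round(1))
print("fraction of days over", BEDS, "beds:", (occ > BEDS).mean().round(3))
```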

  7. Interpolation of climate variables and temperature modeling

    Science.gov (United States)

    Samanta, Sailesh; Pal, Dilip Kumar; Lohar, Debasish; Pal, Babita

    2012-01-01

    Geographic Information Systems (GIS) and modeling are becoming powerful tools in agricultural research and natural resource management. This study proposes an empirical methodology for modeling and mapping the monthly and annual air temperature using remote sensing and GIS techniques. The study area is Gangetic West Bengal and its neighborhood in eastern India, where a number of weather systems occur throughout the year. Gangetic West Bengal is a region of strong surface heterogeneity with several weather disturbances. This paper also examines statistical approaches for interpolating climatic data over large regions, providing different interpolation techniques for climate variables' use in agricultural research. Three interpolation approaches (inverse distance weighted averaging, thin-plate smoothing splines, and co-kriging) are evaluated over a 4° × 4° area covering the eastern part of India. The land use/land cover, soil texture, and digital elevation model are used as the independent variables for temperature modeling. Multiple regression analysis with the standard method is used to enter the independent variables into the regression equation. Prediction of mean temperature for the monsoon season is better than for the winter season. Finally, standard deviation errors are evaluated by comparing the predicted and observed temperatures for the area. For further improvement, it is recommended that distance from the coastline and seasonal wind pattern be included as additional independent variables.
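    Of the three interpolation approaches evaluated, inverse distance weighted averaging is the simplest to sketch; the toy station coordinates and temperatures below are assumptions for illustration.

```python
import numpy as np

def idw(xy_obs, values, xy_grid, power=2.0, eps=1e-12):
    """Inverse-distance-weighted interpolation of scattered observations.

    xy_obs : (n, 2) station coordinates; values : (n,) observed temperatures;
    xy_grid : (m, 2) prediction points. Returns (m,) interpolated values.
    """
    d = np.linalg.norm(xy_grid[:, None, :] - xy_obs[None, :, :], axis=2)
    w = 1.0 / (d ** power + eps)            # nearby stations dominate
    return (w * values).sum(axis=1) / w.sum(axis=1)

# Toy example: five stations, interpolate onto two target points.
stations = np.array([[0, 0], [1, 0], [0, 1], [1, 1], [0.5, 0.5]])
temps = np.array([24.1, 25.0, 23.4, 24.7, 24.3])
targets = np.array([[0.25, 0.25], [0.9, 0.1]])
print(idw(stations, temps, targets))
```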

  8. Modeling Variability in Immunocompetence and Immunoresponsiveness

    NARCIS (Netherlands)

    Ask, B.; van der Waaij, E.H.; Bishop, S.C.

    2008-01-01

    The purposes of this paper were to 1) develop a stochastic model that would reflect observed variation between animals and across ages in immunocompetence and responsiveness; and 2) illustrate consequences of this variability for the statistical power of genotype comparisons and selection. A stochas

  9. Automated EEG monitoring in defining a chronic epilepsy model.

    Science.gov (United States)

    Mascott, C R; Gotman, J; Beaudet, A

    1994-01-01

    There has been a recent surge of interest in chronic animal models of epilepsy. Proper assessment of these models requires documentation of spontaneous seizures by EEG, observation, or both in each individual animal to confirm the presumed epileptic condition. We used the same automatic seizure detection system as that currently used for patients in our institution and many others. Electrodes were implanted in 43 rats before intraamygdalar administration of kainic acid (KA). Animals were monitored intermittently for 3 months. Nine of the rats were protected by anticonvulsants [pentobarbital (PB) and diazepam (DZP)] at the time of KA injection. Between 1 and 3 months after KA injection, spontaneous seizures were detected in 20 of the 34 unprotected animals (59%). Surprisingly, spontaneous seizures were also detected during the same period in 2 of the 9 protected animals that were intended to serve as nonepileptic controls. Although the absence of confirmed spontaneous seizures in the remaining animals cannot exclude their occurrence, it indicates that, if present, they are at least rare. On the other hand, definitive proof of epilepsy is invaluable in the attempt to interpret pathologic data from experimental brains.

  10. Maximum Likelihood Analysis of Nonlinear Structural Equation Models with Dichotomous Variables

    Science.gov (United States)

    Song, Xin-Yuan; Lee, Sik-Yum

    2005-01-01

    In this article, a maximum likelihood approach is developed to analyze structural equation models with dichotomous variables that are common in behavioral, psychological and social research. To assess nonlinear causal effects among the latent variables, the structural equation in the model is defined by a nonlinear function. The basic idea of the…

  11. Model for defining the level of implementation of the management functions in small enterprises

    Directory of Open Access Journals (Sweden)

    Dragan Mišetić

    2001-01-01

    Small enterprises, based on private ownership and entrepreneurial capability, represent, for the majority of the scientific and professional public, the prime movers of economic growth, both in developed market economies and in the economies of countries in transition. At the same time, various studies show that the main reason for the bankruptcy of many small enterprises (more than 90%) can be found in weak management, i.e. unfamiliarity with the management functions (planning, organization, human resources management, leading and control) and with the need to implement those functions in practice. Although it is not easy to define the ingredients of the recipe for success or to define precisely the importance of different elements, and regardless of the fact that many authors think that the management theory for large enterprises is inapplicable to small ones, we all agree that the owner/manager and his implementation of management theory have a decisive influence on small enterprises in modern economic circumstances. Therefore, the author presents a model that defines the level of implementation of management functions in small enterprises, as well as three systems/levels (danger, risk, progress) in which small enterprises may find themselves. After the level of implementation of the management functions is identified, it is possible to undertake corrective actions to remove the identified failures. While choosing the variables of the model, the author took into consideration specific features of a small enterprise, as well as specific features of its owner/manager.

  12. Pheromones and signature mixtures: defining species-wide signals and variable cues for identity in both invertebrates and vertebrates.

    Science.gov (United States)

    Wyatt, Tristram D

    2010-10-01

    Pheromones have been found in species in almost every part of the animal kingdom, including mammals. Pheromones (a molecule or defined combination of molecules) are species-wide signals which elicit innate responses (though responses can be conditional on development as well as context, experience, and internal state). In contrast, signature mixtures, in invertebrates and vertebrates, are variable subsets of molecules of an animal's chemical profile which are learnt by other animals, allowing them to distinguish individuals or colonies. All signature mixtures, and almost all pheromones, whatever the size of molecules, are detected by olfaction (as defined by receptor families and glomerular processing), in mammals by the main olfactory system or vomeronasal system or both. There is convergence on a glomerular organization of olfaction. The processing of all signature mixtures, and most pheromones, is combinatorial across a number of glomeruli, even for some sex pheromones which appear to have 'labeled lines'. Narrowly specific pheromone receptors are found, but are not a prerequisite for a molecule to be a pheromone. A small minority of pheromones act directly on target tissues (allohormone pheromones) or are detected by non-glomerular chemoreceptors, such as taste. The proposed definitions for pheromone and signature mixture are based on the heuristic value of separating these kinds of chemical information. In contrast to a species-wide pheromone, there is no single signature mixture to find, as signature mixtures are a 'receiver-side' phenomenon and it is the differences in signature mixtures which allow animals to distinguish each other.

  13. Variable impact on mortality of AIDS-defining events diagnosed during combination antiretroviral therapy : not all AIDS-defining conditions are created equal

    NARCIS (Netherlands)

    Mocroft, Amanda; Sterne, Jonathan A C; Egger, Matthias; May, Margaret; Grabar, Sophie; Furrer, Hansjakob; Sabin, Caroline; Fatkenheuer, Gerd; Justice, Amy; Reiss, Peter; d'Arminio Monforte, Antonella; Gill, John; Hogg, Robert; Bonnet, Fabrice; Kitahata, Mari; Staszewski, Schlomo; Casabona, Jordi; Harris, Ross; Saag, Michael; Niesters, Bert

    2009-01-01

    BACKGROUND: The extent to which mortality differs following individual acquired immunodeficiency syndrome (AIDS)-defining events (ADEs) has not been assessed among patients initiating combination antiretroviral therapy. METHODS: We analyzed data from 31,620 patients with no prior ADEs who started co

  14. Simple nonlinear models suggest variable star universality

    CERN Document Server

    Lindner, John F; Kia, Behnam; Hippke, Michael; Learned, John G; Ditto, William L

    2015-01-01

    Dramatically improved data from observatories like the CoRoT and Kepler spacecraft have recently facilitated nonlinear time series analysis and phenomenological modeling of variable stars, including the search for strange (aka fractal) or chaotic dynamics. We recently argued [Lindner et al., Phys. Rev. Lett. 114 (2015) 054101] that the Kepler data includes "golden" stars, whose luminosities vary quasiperiodically with two frequencies nearly in the golden ratio, and whose secondary frequencies exhibit power-law scaling with exponent near -1.5, suggesting strange nonchaotic dynamics and singular spectra. Here we use a series of phenomenological models to make plausible the connection between golden stars and fractal spectra. We thereby suggest that at least some features of variable star dynamics reflect universal nonlinear phenomena common to even simple systems.
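    A toy version of the "golden star" phenomenology is easy to generate: two tones with frequency ratio the golden ratio plus a weak nonlinear coupling produce combination tones that are themselves golden-ratio related. The amplitudes, coupling, and sampling below are arbitrary choices for illustration.

```python
import numpy as np

phi = (1 + np.sqrt(5)) / 2                   # golden ratio
f1, f2 = 1.0, 1.0 / phi                      # frequency ratio phi, as in golden stars
N, dt = 16384, 0.25
t = np.arange(N) * dt

# A weak nonlinear coupling creates combination tones n*f1 + m*f2; because
# f1/f2 = phi, these tones are themselves golden-ratio related (f1 - f2 = 1/phi**2).
flux = (np.sin(2 * np.pi * f1 * t) + 0.6 * np.sin(2 * np.pi * f2 * t)
        + 0.3 * np.sin(2 * np.pi * f1 * t) * np.sin(2 * np.pi * f2 * t))

spec = np.abs(np.fft.rfft(flux)) ** 2
df = 1.0 / (N * dt)                          # frequency resolution
for f in (f1 - f2, f2, f1, f1 + f2):
    k = int(round(f / df))                   # nearest bin (leakage smears a bit)
    print(f"power near f = {f:.3f}: {spec[k]:.3g}")
```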

  15. Dissecting magnetar variability with Bayesian hierarchical models

    CERN Document Server

    Huppenkothen, D; Hogg, D W; Murray, I; Frean, M; Elenbaas, C; Watts, A L; Levin, Y; van der Horst, A J; Kouveliotou, C

    2015-01-01

    Neutron stars are a prime laboratory for testing physical processes under conditions of strong gravity, high density, and extreme magnetic fields. Among the zoo of neutron star phenomena, magnetars stand out for their bursting behaviour, ranging from extremely bright, rare giant flares to numerous, less energetic recurrent bursts. The exact trigger and emission mechanisms for these bursts are not known; favoured models involve either a crust fracture and subsequent energy release into the magnetosphere, or explosive reconnection of magnetic field lines. In the absence of a predictive model, understanding the physical processes responsible for magnetar burst variability is difficult. Here, we develop an empirical model that decomposes magnetar bursts into a superposition of small spike-like features with a simple functional form, where the number of model components is itself part of the inference problem. The cascades of spikes that we model might be formed by avalanches of reconnection, or crust rupture afte...
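    A minimal sketch of the decomposition idea described here: a burst is modeled as a background plus a random number of spike-like components with a simple rise/decay shape, observed through Poisson counts. The functional form and parameter ranges are assumptions for illustration, not the paper's fitted model.

```python
import numpy as np

def spike(t, t0, amp, rise, decay):
    """One burst component: exponential rise up to t0, exponential decay after."""
    out = np.empty_like(t)
    m = t < t0
    out[m] = amp * np.exp((t[m] - t0) / rise)
    out[~m] = amp * np.exp(-(t[~m] - t0) / decay)
    return out

rng = np.random.default_rng(3)
t = np.linspace(0.0, 0.5, 5000)              # a 0.5 s burst
n_spikes = rng.poisson(8)                    # the component count is itself random
model = 0.2 * np.ones_like(t)                # constant background level
for _ in range(n_spikes):
    model += spike(t, rng.uniform(0.05, 0.45), rng.lognormal(0.0, 0.5),
                   rng.uniform(0.002, 0.01), rng.uniform(0.01, 0.05))

counts = rng.poisson(model * 200)            # Poisson photon counts per bin
print(n_spikes, "spikes; peak counts per bin:", counts.max())
```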

  16. Generalized linear models for categorical and continuous limited dependent variables

    CERN Document Server

    Smithson, Michael

    2013-01-01

    Introduction and OverviewThe Nature of Limited Dependent VariablesOverview of GLMsEstimation Methods and Model EvaluationOrganization of This BookDiscrete VariablesBinary VariablesLogistic RegressionThe Binomial GLMEstimation Methods and IssuesAnalyses in R and StataExercisesNominal Polytomous VariablesMultinomial Logit ModelConditional Logit and Choice ModelsMultinomial Processing Tree ModelsEstimation Methods and Model EvaluationAnalyses in R and StataExercisesOrdinal Categorical VariablesModeling Ordinal Variables: Common Practice versus Best PracticeOrdinal Model AlternativesCumulative Mod

  17. Quantifying Numerical Model Accuracy and Variability

    Science.gov (United States)

    Montoya, L. H.; Lynett, P. J.

    2015-12-01

    The 2011 Tohoku tsunami event has changed the logic on how to evaluate tsunami hazard on coastal communities. Numerical models are a key component for methodologies used to estimate tsunami risk. Model predictions are essential for the development of Tsunami Hazard Assessments (THA). By better understanding model bias and uncertainties and if possible minimizing them, a more accurate and reliable THA will result. In this study we compare runup height, inundation lines and flow velocity field measurements between GeoClaw and the Method Of Splitting Tsunami (MOST) predictions in the Sendai plain. Runup elevation and average inundation distance was in general overpredicted by the models. However, both models agree relatively well with each other when predicting maximum sea surface elevation and maximum flow velocities. Furthermore, to explore the variability and uncertainties in numerical models, MOST is used to compare predictions from 4 different grid resolutions (30m, 20m, 15m and 12m). Our work shows that predictions of particular products (runup and inundation lines) do not require the use of high resolution (less than 30m) Digital Elevation Maps (DEMs). When predicting runup heights and inundation lines, numerical convergence was achieved using the 30m resolution grid. On the contrary, poor convergence was found in the flow velocity predictions, particularly the 1 meter depth maximum flow velocities. Also, runup height measurements and elevations from the DEM were used to estimate model bias. The results provided in this presentation will help understand the uncertainties in model predictions and locate possible sources of errors within a model.

  18. A MAD model for gamma-ray burst variability

    Science.gov (United States)

    Lloyd-Ronning, Nicole M.; Dolence, Joshua C.; Fryer, Christopher L.

    2016-09-01

    We present a model for the temporal variability of long gamma-ray bursts (GRBs) during the prompt phase (the highly variable first 100 s or so), in the context of a magnetically arrested disc (MAD) around a black hole. In this state, sufficient magnetic flux is held on to the black hole such that it stalls the accretion near the inner region of the disc. The system transitions in and out of the MAD state, which we relate to the variable luminosity of the GRB during the prompt phase, with a characteristic time-scale defined by the free-fall time in the region over which the accretion is arrested. We present simple analytic estimates of the relevant energetics and time-scales, and compare them to GRB observations. In particular, we show how this model can reproduce the characteristic one second time-scale that emerges from various analyses of the prompt emission light curve. We also discuss how our model can accommodate the potentially physically important correlation between a burst quiescent time and the duration of its subsequent pulse.
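    The order-of-magnitude argument can be checked directly: with t_ff ~ sqrt(r^3/GM), a stellar-mass black hole with accretion arrested out to roughly 10^7 m yields a free-fall time of order one second. The mass and radius below are illustrative assumptions, not values taken from the paper.

```python
import numpy as np

G = 6.674e-11          # gravitational constant, m^3 kg^-1 s^-2
M_SUN = 1.989e30       # solar mass, kg

def t_freefall(r_m, m_bh_solar):
    """Dynamical (free-fall) time t_ff ~ sqrt(r^3 / GM) at radius r."""
    return np.sqrt(r_m**3 / (G * m_bh_solar * M_SUN))

# Illustrative numbers: a ~3 solar-mass black hole with accretion arrested
# out to r ~ 1e7 m gives t_ff of order 1 s, matching the characteristic
# variability time-scale discussed above.
print(t_freefall(1.0e7, 3.0))   # ~1.6 s
```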

  19. A MAD Model for Gamma-Ray Burst Variability

    CERN Document Server


    2016-01-01

    We present a model for the temporal variability of long gamma-ray bursts during the prompt phase (the highly variable first 100 seconds or so), in the context of a magnetically arrested disk (MAD) around a black hole. In this state, sufficient magnetic flux is held on to the black hole such that it stalls the accretion near the inner region of the disk. The system transitions in and out of the MAD state, which we relate to the variable luminosity of the GRB during the prompt phase, with a characteristic timescale defined by the free fall time in the region over which the accretion is arrested. We present simple analytic estimates of the relevant energetics and timescales, and compare them to gamma-ray burst observations. In particular, we show how this model can reproduce the characteristic one second time scale that emerges from various analyses of the prompt emission light curve. We also discuss how our model can accommodate the potentially physically important correlation between a burst quiescent time and...

  20. Nonlocal elasticity defined by Eringen's integral model: Introduction of a boundary layer method

    National Research Council Canada - National Science Library

    Abdollahi, R; Boroomand, B

    2014-01-01

    In this paper we consider a nonlocal elasticity theory defined by Eringen's integral model and introduce, for the first time, a boundary layer method by presenting the exponential basis functions (EBFs...

  1. Modeling variability in porescale multiphase flow experiments

    Energy Technology Data Exchange (ETDEWEB)

    Ling, Bowen; Bao, Jie; Oostrom, Mart; Battiato, Ilenia; Tartakovsky, Alexandre M.

    2017-07-01

    Microfluidic devices and porescale numerical models are commonly used to study multiphase flow in biological, geological, and engineered porous materials. In this work, we perform a set of drainage and imbibition experiments in six identical microfluidic cells to study the reproducibility of multiphase flow experiments. We observe significant variations in the experimental results, which are smaller during the drainage stage and larger during the imbibition stage. We demonstrate that these variations are due to sub-porescale geometry differences in microcells (because of manufacturing defects) and variations in the boundary condition (i.e., fluctuations in the injection rate inherent to syringe pumps). Computational simulations are conducted using commercial software STAR-CCM+, both with constant and randomly varying injection rates. Stochastic simulations are able to capture variability in the experiments associated with the varying pump injection rate.

  2. Modeling variability in porescale multiphase flow experiments

    Science.gov (United States)

    Ling, Bowen; Bao, Jie; Oostrom, Mart; Battiato, Ilenia; Tartakovsky, Alexandre M.

    2017-07-01

    Microfluidic devices and porescale numerical models are commonly used to study multiphase flow in biological, geological, and engineered porous materials. In this work, we perform a set of drainage and imbibition experiments in six identical microfluidic cells to study the reproducibility of multiphase flow experiments. We observe significant variations in the experimental results, which are smaller during the drainage stage and larger during the imbibition stage. We demonstrate that these variations are due to sub-porescale geometry differences in microcells (because of manufacturing defects) and variations in the boundary condition (i.e., fluctuations in the injection rate inherent to syringe pumps). Computational simulations are conducted using commercial software STAR-CCM+, both with constant and randomly varying injection rates. Stochastic simulations are able to capture variability in the experiments associated with the varying pump injection rate.

  3. Conservation priorities for Prunus africana defined with the aid of spatial analysis of genetic data and climatic variables.

    Directory of Open Access Journals (Sweden)

    Barbara Vinceti

    Conservation priorities for Prunus africana, a tree species found across Afromontane regions, which is of great commercial interest internationally and of local value for rural communities, were defined with the aid of spatial analyses applied to a set of georeferenced molecular marker data (chloroplast and nuclear microsatellites) from 32 populations in 9 African countries. Two approaches for the selection of priority populations for conservation were used, differing in the way they optimize representation of intra-specific diversity of P. africana across a minimum number of populations. The first method (S1) was aimed at maximizing genetic diversity of the conservation units and their distinctiveness with regard to climatic conditions, the second method (S2) at optimizing representativeness of the genetic diversity found throughout the species' range. Populations in East African countries (especially Kenya and Tanzania) were found to be of great conservation value, as suggested by previous findings. These populations are complemented by those in Madagascar and Cameroon. The combination of the two methods for prioritization led to the identification of a set of 6 priority populations. The potential distribution of P. africana was then modeled based on a dataset of 1,500 georeferenced observations. This enabled an assessment of whether the priority populations identified are exposed to threats from agricultural expansion and climate change, and whether they are located within the boundaries of protected areas. The range of the species has been affected by past climate change, and the modeled distribution of P. africana indicates that the species is likely to be negatively affected in future, with an expected decrease in distribution by 2050. Based on these insights, further research at the regional and national scale is recommended, in order to strengthen P. africana conservation efforts.

  4. Conservation priorities for Prunus africana defined with the aid of spatial analysis of genetic data and climatic variables.

    Science.gov (United States)

    Vinceti, Barbara; Loo, Judy; Gaisberger, Hannes; van Zonneveld, Maarten J; Schueler, Silvio; Konrad, Heino; Kadu, Caroline A C; Geburek, Thomas

    2013-01-01

    Conservation priorities for Prunus africana, a tree species found across Afromontane regions, which is of great commercial interest internationally and of local value for rural communities, were defined with the aid of spatial analyses applied to a set of georeferenced molecular marker data (chloroplast and nuclear microsatellites) from 32 populations in 9 African countries. Two approaches for the selection of priority populations for conservation were used, differing in the way they optimize representation of intra-specific diversity of P. africana across a minimum number of populations. The first method (S1) was aimed at maximizing genetic diversity of the conservation units and their distinctiveness with regard to climatic conditions, the second method (S2) at optimizing representativeness of the genetic diversity found throughout the species' range. Populations in East African countries (especially Kenya and Tanzania) were found to be of great conservation value, as suggested by previous findings. These populations are complemented by those in Madagascar and Cameroon. The combination of the two methods for prioritization led to the identification of a set of 6 priority populations. The potential distribution of P. africana was then modeled based on a dataset of 1,500 georeferenced observations. This enabled an assessment of whether the priority populations identified are exposed to threats from agricultural expansion and climate change, and whether they are located within the boundaries of protected areas. The range of the species has been affected by past climate change and the modeled distribution of P. africana indicates that the species is likely to be negatively affected in future, with an expected decrease in distribution by 2050. Based on these insights, further research at the regional and national scale is recommended, in order to strengthen P. africana conservation efforts.

  5. Latent variable modeling

    Institute of Scientific and Technical Information of China (English)

    蔡力

    2012-01-01

    A latent variable model, as the name suggests, is a statistical model that contains latent, that is, unobserved, variables. The roots of such models go back to Spearman's 1904 seminal work[1] on factor analysis, which is arguably the first well-articulated latent variable model to be widely used in psychology, mental health research, and allied disciplines. Because of the association of factor analysis with early studies of human intelligence, the fact that key variables in a statistical model are, on occasion, unobserved has been a point of lingering contention and controversy. The reader is assured, however, that a latent variable, defined in the broadest manner, is no more mysterious than an error term in a normal theory linear regression model or a random effect in a mixed model.

  6. A model of cloud application assignments in software-defined storages

    Science.gov (United States)

    Bolodurina, Irina P.; Parfenov, Denis I.; Polezhaev, Petr N.; Shukhman, Alexander E.

    2017-01-01

    The aim of this study is to analyze the structure and interaction mechanisms of typical cloud applications and to suggest approaches to optimizing their placement in storage systems. In this paper, we describe a generalized model of cloud applications including three basic layers: a model of the application, a model of the service, and a model of the resource. The distinctive feature of the suggested model is that it analyzes cloud resources both from the user's point of view and from the point of view of the software-defined infrastructure of the virtual data center (DC). The innovative character of this model lies in describing the application data placements together with the state of the virtual environment, taking the network topology into account. The model of software-defined storage has been developed as a submodel within the resource model. This model allows implementing an algorithm for controlling cloud application assignments in software-defined storages. Experiments showed that this algorithm decreases cloud application response times and increases performance in processing user requests. The use of software-defined data storages allows a decrease in the number of physical storage devices, which demonstrates the efficiency of our algorithm.

  7. The Approximative Hamiltonian for the Dicke model defined in terms of a one-zone potential

    CERN Document Server

    Rasulova, M Yu

    2002-01-01

    The Approximative Hamiltonian Method (AHM) for the Dicke model is defined in terms of a one-zone potential. We investigate the Dicke model on the basis of the Petrina-Belokolos method. This method offers the following advantages: it makes it possible to simplify the construction of the self-consistent equation and the structure of the approximative Hamiltonians. In addition, the AHM allows the exact solution of the self-consistent equation to be found and, thus, the approximative Hamiltonian for the Dicke model to be defined in terms of a one-zone potential.

  8. Numerical simulation for SI model with variable-order fractional

    Directory of Open Access Journals (Sweden)

    Mohamed Mohamed

    2016-04-01

    In this paper, numerical studies of variable-order fractional delay differential equations are presented. The Adams-Bashforth-Moulton algorithm has been extended to study this problem, where the derivative is defined in the Caputo variable-order fractional sense. Special attention is given to proving the error estimate of the proposed method. Numerical test examples are presented to demonstrate the utility of the method. Chaotic behaviors are observed in variable-order one-dimensional delayed systems.
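    For orientation, the sketch below implements the classic Diethelm-Ford-Freed fractional Adams-Bashforth-Moulton predictor-corrector for a Caputo derivative, naively re-evaluating the order alpha(t) at each step; the paper's scheme additionally handles the delay term, which this sketch omits. The test right-hand side and order profile are invented, and this is an illustration rather than the authors' exact algorithm.

```python
import numpy as np
from math import gamma

def fabm(f, y0, alpha_fun, T, n):
    """Caputo fractional Adams-Bashforth-Moulton (Diethelm-Ford-Freed type),
    naively re-evaluating the local order alpha(t) at every step."""
    h = T / n
    t = np.linspace(0.0, T, n + 1)
    y = np.zeros(n + 1)
    y[0] = y0
    fv = np.zeros(n + 1)                   # stored history of f(t_j, y_j)
    fv[0] = f(t[0], y[0])
    for k in range(n):
        a = alpha_fun(t[k + 1])            # local order used for this step
        j = np.arange(k + 1)
        # Predictor (fractional Adams-Bashforth) weights
        b = (h**a / a) * ((k + 1 - j)**a - (k - j)**a)
        yp = y0 + np.dot(b, fv[:k + 1]) / gamma(a)
        # Corrector (fractional Adams-Moulton) weights
        c = np.empty(k + 1)
        c[0] = k**(a + 1) - (k - a) * (k + 1)**a
        if k >= 1:
            jj = np.arange(1, k + 1)
            c[jj] = ((k - jj + 2)**(a + 1) + (k - jj)**(a + 1)
                     - 2.0 * (k - jj + 1)**(a + 1))
        y[k + 1] = y0 + (h**a / gamma(a + 2)) * (f(t[k + 1], yp)
                                                 + np.dot(c, fv[:k + 1]))
        fv[k + 1] = f(t[k + 1], y[k + 1])
    return t, y

# Toy logistic-type right-hand side with a slowly varying order in [0.8, 1.0]
t, y = fabm(lambda t, y: 0.5 * y * (1.0 - y), y0=0.1,
            alpha_fun=lambda t: 0.9 + 0.1 * np.sin(t), T=10.0, n=400)
print(y[-1])                               # grows toward the carrying capacity 1
```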

  9. A Semantic Model to Define Indoor Space in Context of Emergency Evacuation

    Science.gov (United States)

    Maheshwari, Nishith; Rajan, K. S.

    2016-06-01

    There have been various ways in which the indoor space of a building has been defined. In most of the cases the models have specific purpose on which they focus such as facility management, visualisation or navigation. The focus of our work is to define semantics of a model which can incorporate different aspects of the space within a building without losing any information provided by the data model. In this paper we have suggested a model which defines indoor space in terms of semantic and syntactic features. Each feature belongs to a particular class and based on the class, has a set of properties associated with it. The purpose is to capture properties like geometry, topology and semantic information like name, function and capacity of the space from a real world data model. The features which define the space are determined using the geometric information and the classes are assigned based on the relationships like connectivity, openings and function of the space. The ontology of the classes of the feature set defined will be discussed in the paper.

  10. Modeling Interconnect Variability Using Efficient Parametric Model Order Reduction

    CERN Document Server

    Li, Peng; Li, Xin; Pileggi, Lawrence T; Nassif, Sani R

    2011-01-01

    Assessing IC manufacturing process fluctuations and their impacts on IC interconnect performance has become unavoidable for modern DSM designs. However, the construction of parametric interconnect models is often hampered by the rapid increase in computational cost and model complexity. In this paper we present an efficient yet accurate parametric model order reduction algorithm for addressing the variability of IC interconnect performance. The efficiency of the approach lies in a novel combination of low-rank matrix approximation and multi-parameter moment matching. The complexity of the proposed parametric model order reduction is as low as that of a standard Krylov subspace method when applied to a nominal system. Under the projection-based framework, our algorithm also preserves the passivity of the resulting parametric models.

  11. Modelling magnetic anomalies of solid and fractal bodies with defined boundaries using the finite cube elements method

    Science.gov (United States)

    Mostafa, Mostafa E.

    2009-04-01

    The finite cube elements method (FCEM) is a numerical tool designed for modelling gravity anomalies and estimating the structural index (SI) of solid and fractal bodies with defined boundaries, tilted or in normal position and with variable density contrast. In this work, we apply FCEM to modelling magnetic anomalies and estimating the SI of bodies with non-uniform magnetization having variable magnitude and direction. In magnetics as in gravity, FCEM allows us to study the spatial distribution of the SI of the modelled bodies on contour maps and profiles. We believe that this will impact the forward and inverse modelling of potential field data, especially Euler deconvolution. As far as the author knows, this is the first time that gravity and magnetic anomalies, as well as SI, of self-similar fractal bodies such as Menger sponges and Sierpinski triangles are calculated using FCEM. The SI patterns derived from different order sponges and triangles are perfectly overlapped. This is true for bodies having variable property distributions (susceptibility or density contrast) under different field conditions (in the case of magnetics) regardless of their orientation and depth of burial. We therefore propose SI as a new universal fractal-order-invariant measure which can be used in addition to the fractal dimensions for formulating potential field theory of fractal objects.

  12. User-defined Material Model for Thermo-mechanical Progressive Failure Analysis

    Science.gov (United States)

    Knight, Norman F., Jr.

    2008-01-01

    Previously a user-defined material model for orthotropic bimodulus materials was developed for linear and nonlinear stress analysis of composite structures using either shell or solid finite elements within a nonlinear finite element analysis tool. Extensions of this user-defined material model to thermo-mechanical progressive failure analysis are described, and the required input data are documented. The extensions include providing for temperature-dependent material properties, archival of the elastic strains, and a thermal strain calculation for materials exhibiting a stress-free temperature.

  13. Variable Relation Parametric Model on Graphics Modelon for Collaboration Design

    Institute of Scientific and Technical Information of China (English)

    DONG Yu-de; ZHAO Han; LI Yan-feng

    2005-01-01

    A new approach to a variable relation parametric model for collaborative design, based on the graphic modelon, is put forward. The paper gives a parametric description model of the graphic modelon and a relating method for different graphic modelons based on variable constraints. At the same time, with the aim of engineering application in collaborative design, the autonomous constraints within a modelon and the relative constraints between two modelons are given. Finally, with the tool of a variable and relation database, the solving method for variable relating and variable-driven design among different graphic modelons in a part, and a double-acting variable relating parametric method among different parts for collaboration, are given.

  14. A stepwise approach for defining the applicability domain of SAR and QSAR models

    DEFF Research Database (Denmark)

    Dimitrov, Sabcho; Dimitrova, Gergana; Pavlov, Todor;

    2005-01-01

    A stepwise approach for determining the model applicability domain is proposed. Four stages are applied to account for the diversity and complexity of the current SAR/QSAR models, reflecting their mechanistic rationality (including metabolic activation of chemicals) and transparency. General parametric requirements are imposed in the first stage, specifying in the domain only those chemicals that fall in the range of variation of the physicochemical properties of the chemicals in the training set. The second stage defines the structural similarity between chemicals that are correctly predicted by the model. The structural neighborhood of atom-centered fragments is used to determine this similarity. The third stage in defining the domain is based on a mechanistic understanding of the modeled phenomenon. Here, the model domain combines the reliability of specific reactive groups hypothesized to cause...

  15. Can Geostatistical Models Represent Nature's Variability? An Analysis Using Flume Experiments

    Science.gov (United States)

    Scheidt, C.; Fernandes, A. M.; Paola, C.; Caers, J.

    2015-12-01

    The lack of understanding of the Earth's geological and physical processes governing sediment deposition renders subsurface modeling subject to large uncertainty. Geostatistics is often used to model uncertainty because of its capability to stochastically generate spatially varying realizations of the subsurface. These methods can generate a range of realizations of a given pattern - but how representative are these of the full natural variability? And how can we identify the minimum set of images that represent this natural variability? Here we use this minimum set to define the geostatistical prior model: a set of training images that represent the range of patterns generated by autogenic variability in the sedimentary environment under study. The proper definition of the prior model is essential in capturing the variability of the depositional patterns. This work starts with a set of overhead images from an experimental basin that showed ongoing autogenic variability. We use the images to analyze the essential characteristics of this suite of patterns. In particular, our goal is to define a prior model (a minimal set of selected training images) such that geostatistical algorithms, when applied to this set, can reproduce the full measured variability. A necessary prerequisite is to define a measure of variability. In this study, we measure variability using a dissimilarity distance between the images. The distance indicates whether two snapshots contain similar depositional patterns. To reproduce the variability in the images, we apply an MPS algorithm to the set of selected snapshots of the sedimentary basin that serve as training images. The training images are chosen from among the initial set by using the distance measure to ensure that only dissimilar images are chosen. Preliminary investigations show that MPS can reproduce fairly accurately the natural variability of the experimental depositional system. Furthermore, the selected training images provide
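    The training-image selection step can be sketched as greedy farthest-point selection under an image dissimilarity distance; the Hamming distance and random binary snapshots below are stand-ins for the paper's pattern dissimilarity and flume imagery.

```python
import numpy as np

def select_training_images(images, k):
    """Greedy farthest-point selection of k mutually dissimilar snapshots.

    `images` is an (n, H, W) stack of binarized overhead snapshots; the
    dissimilarity here is a plain pixel-wise Hamming distance, a stand-in
    for a distance between depositional patterns.
    """
    n = len(images)
    flat = images.reshape(n, -1)
    d = np.array([[np.mean(a != b) for b in flat] for a in flat])  # pairwise distances
    chosen = [0]                             # start from the first snapshot
    while len(chosen) < k:
        rest = [i for i in range(n) if i not in chosen]
        # Pick the snapshot farthest from everything already chosen
        nxt = max(rest, key=lambda i: min(d[i][j] for j in chosen))
        chosen.append(nxt)
    return chosen

rng = np.random.default_rng(4)
snaps = rng.integers(0, 2, size=(40, 32, 32))   # toy binary "wet/dry" maps
print(select_training_images(snaps, k=5))
```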

  16. Binary outcome variables and logistic regression models

    Institute of Scientific and Technical Information of China (English)

    Xinhua LIU

    2011-01-01

    Biomedical researchers often study binary variables that indicate whether or not a specific event, such as remission of depression symptoms, occurs during the study period. The indicator variable Y takes two values, usually coded as one if the event (remission) is present and zero if the event is not present (non-remission). Let p be the probability that the event occurs (Y = 1); then 1 - p will be the probability that the event does not occur (Y = 0).
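    The standard model for such an indicator variable is logistic regression, which links p to predictors through the log odds: log(p/(1-p)) = b0 + b1*x. A minimal sketch with simulated remission data (all coefficients and sample sizes invented):

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(5)
# Toy data: one predictor (e.g., a baseline severity score) and remission status.
x = rng.normal(size=(300, 1))
p_true = 1 / (1 + np.exp(-(-0.5 + 1.2 * x[:, 0])))   # logit(p) = -0.5 + 1.2*x
y = rng.binomial(1, p_true)                           # Y = 1 if remission occurs

model = LogisticRegression().fit(x, y)
print("intercept, slope:", model.intercept_[0], model.coef_[0, 0])
print("P(Y=1 | x=1):", model.predict_proba([[1.0]])[0, 1])
```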

  17. USING STRUCTURAL EQUATION MODELING TO INVESTIGATE RELATIONSHIPS AMONG ECOLOGICAL VARIABLES

    Science.gov (United States)

    This paper gives an introductory account of Structural Equation Modeling (SEM) and demonstrates its application using a LISREL model on environmental data. Using nine EMAP data variables, we analyzed their correlation matrix with an SEM model. The model characterized

  18. 47 CFR 76.1905 - Petitions to modify encoding rules for new services within defined business models.

    Science.gov (United States)

    2010-10-01

    § 76.1905 Petitions to modify encoding rules for new services within defined business models. (a) The encoding rules for defined business models in § 76.1904 reflect the conventional methods for...

  19. Variability in prostate and seminal vesicle delineations defined on magnetic resonance images, a multi-observer, -center and -sequence study

    DEFF Research Database (Denmark)

    Nyholm, Tufve; Jonsson, Joakim; Söderström, Karin

    2013-01-01

    BACKGROUND: The use of magnetic resonance (MR) imaging as a part of preparation for radiotherapy is increasing. For delineation of the prostate several publications have shown decreased delineation variability using MR compared to computed tomography (CT). The purpose of the present work...

  20. A Data Flow Behavior Constraints Model for Branch Decisionmaking Variables

    Directory of Open Access Journals (Sweden)

    Lu Yan

    2012-06-01

    In order to detect attacks on decision-making variables, this paper presents a data flow behavior constraint model for branch decision-making variables. Our model is expanded from the common control flow model; it emphasizes the analysis and verification of the data flow for decision-making variables, so as to ensure that branch statements execute correctly and to detect attacks on branch decision-making variables easily. The constraints of our model include the collection of variables, the statements that the decision-making variables depend on, and the data flow constraints given by the use-def relations of these variables. Our experimental results indicate that the model is effective in detecting attacks on branch decision-making variables as well as attacks on control data.

  1. Model of managing pension assets of the defined contribution mandatory state pension system

    Directory of Open Access Journals (Sweden)

    Rudensky Roman A.

    2014-01-01

    The introduction of the defined contribution mandatory state pension system in Ukraine actualises the issue of efficient management of its pension assets. The goals of the article are the development of theoretical and methodical provisions and of an economic and mathematical model for managing the pension assets of the defined contribution pension system on the basis of the fuzzy set approach. The necessity of referring to the theory of fuzzy sets when modelling this process is caused by, firstly, the absence of rather complete and consistent a priori information; secondly, the uncertainty of the flows of insurance contributions and pension payments; and thirdly, the significant influence of the unstable market environment upon investment processes. The proposed fuzzy set economic and mathematical model of the task of optimal management of the pension assets of the defined contribution system consists of a target function minimising the investment risk, understood as the risk of insufficiency of net pension assets, and a system of restrictions reflecting the limits on investment activity with pension assets of the defined contribution pension system determined by the current pension legislation.

  2. Optical Network Models and Their Application to Software-Defined Network Management

    Directory of Open Access Journals (Sweden)

    Thomas Szyrkowiec

    2017-01-01

    Software-defined networking is finding its way into optical networks. Here, it promises a simplification and unification of network management for optical networks, allowing automation of operational tasks despite the highly diverse and vendor-specific commercial systems and the complexity and analog nature of optical transmission. Common abstractions and interfaces are a fundamental component of software-defined optical networking. Currently, a number of models for optical networks are available. They all claim to provide open and vendor-agnostic management of optical equipment. In this work, we survey and compare the most important models and propose an intent interface for creating virtual topologies which is integrated into the existing model ecosystem.

  3. Numerical implementation of a state variable model for friction

    Energy Technology Data Exchange (ETDEWEB)

    Korzekwa, D.A. [Los Alamos National Lab., NM (United States); Boyce, D.E. [Cornell Univ., Ithaca, NY (United States)

    1995-03-01

    A general state variable model for friction has been incorporated into a finite element code for viscoplasticity. A contact area evolution model is used in a finite element model of a sheet forming friction test. The results show that a state variable model can be used to capture complex friction behavior in metal forming simulations. It is proposed that simulations can play an important role in the analysis of friction experiments and the development of friction models.
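    The record does not spell out its friction law here, but a representative state variable model is the Dieterich-Ruina rate-and-state formulation, sketched below with aging-law state evolution and a velocity-step test; the law and all parameter values are illustrative assumptions, not necessarily the specific model used in the report.

```python
import numpy as np

# Illustrative Dieterich-Ruina parameters (assumed, not from the report)
mu0, a, b = 0.6, 0.010, 0.015
V0, Dc = 1e-6, 1e-5                  # reference velocity (m/s), critical slip (m)

def friction(V, theta):
    """State variable friction law: mu = mu0 + a*ln(V/V0) + b*ln(V0*theta/Dc)."""
    return mu0 + a * np.log(V / V0) + b * np.log(V0 * theta / Dc)

# Velocity-step test: slide at V1, jump to V2, evolve the state with the aging
# law d(theta)/dt = 1 - V*theta/Dc (explicit Euler time stepping).
V1, V2, dt = 1e-6, 1e-5, 1e-3
theta, V = Dc / V1, V1               # start at steady state for V1
mu_hist = []
for step in range(20000):
    if step == 5000:
        V = V2                       # imposed velocity jump
    theta += dt * (1.0 - V * theta / Dc)
    mu_hist.append(friction(V, theta))

print("steady-state mu at V1:", round(mu_hist[4999], 4))
print("direct-effect peak mu:", round(max(mu_hist[5000:5100]), 4))
print("new steady-state mu at V2:", round(mu_hist[-1], 4))
```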

  4. Stochastic modeling of interannual variation of hydrologic variables

    Science.gov (United States)

    Dralle, David; Karst, Nathaniel; Müller, Marc; Vico, Giulia; Thompson, Sally E.

    2017-07-01

    Quantifying the interannual variability of hydrologic variables (such as annual flow volumes, and solute or sediment loads) is a central challenge in hydrologic modeling. Annual or seasonal hydrologic variables are themselves the integral of instantaneous variations and can be well approximated as an aggregate sum of the daily variable. Process-based, probabilistic techniques are available to describe the stochastic structure of daily flow, yet estimating interannual variations in the corresponding aggregated variable requires consideration of the autocorrelation structure of the flow time series. Here we present a method based on a probabilistic streamflow description to obtain the interannual variability of flow-derived variables. The results provide insight into the mechanistic genesis of interannual variability of hydrologic processes. Such clarification can assist in the characterization of ecosystem risk and uncertainty in water resources management. We demonstrate two applications, one quantifying seasonal flow variability and the other quantifying net suspended sediment export.
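    The core point, that aggregating an autocorrelated daily variable inflates interannual variance relative to an independence assumption, can be verified in a few lines with an AR(1) daily series; the correlation and horizon below are arbitrary illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(6)
rho, n_days, n_years = 0.8, 365, 2000

# AR(1) daily "flow" series, one synthetic year per row, stationary start
q = np.zeros((n_years, n_days))
q[:, 0] = rng.normal(size=n_years) / np.sqrt(1 - rho**2)
for d in range(1, n_days):
    q[:, d] = rho * q[:, d - 1] + rng.normal(size=n_years)

annual = q.sum(axis=1)               # aggregate the daily variable to annual

var_iid = n_days * q.var()           # what independence between days would predict
print("simulated interannual variance:", round(annual.var(), 1))
print("naive iid prediction:", round(var_iid, 1))
print("inflation factor (theory ~ (1+rho)/(1-rho) = 9):",
      round(annual.var() / var_iid, 2))
```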

  5. A NUI Based Multiple Perspective Variability Modelling CASE Tool

    OpenAIRE

    Bashroush, Rabih

    2010-01-01

    With current trends towards moving variability from hardware to software, and given the increasing desire to postpone design decisions as much as is economically feasible, managing variability from requirements elicitation to implementation is becoming a primary business requirement in the product line engineering process. One of the main challenges in variability management is the visualization and management of industry-size variability models. In this demonstrat...

  6. Multi-Wheat-Model Ensemble Responses to Interannual Climate Variability

    Science.gov (United States)

    Ruane, Alex C.; Hudson, Nicholas I.; Asseng, Senthold; Camarrano, Davide; Ewert, Frank; Martre, Pierre; Boote, Kenneth J.; Thorburn, Peter J.; Aggarwal, Pramod K.; Angulo, Carlos

    2016-01-01

    We compare 27 wheat models' yield responses to interannual climate variability, analyzed at locations in Argentina, Australia, India, and The Netherlands as part of the Agricultural Model Intercomparison and Improvement Project (AgMIP) Wheat Pilot. Each model simulated 1981-2010 grain yield, and we evaluate results against the interannual variability of growing season temperature, precipitation, and solar radiation. The amount of information used for calibration has only a minor effect on most models' climate response, and even small multi-model ensembles prove beneficial. Wheat model clusters reveal common characteristics of yield response to climate; however, models rarely share the same cluster at all four sites, indicating substantial independence. Only a weak relationship (R2 ≤ 0.24) was found between the models' sensitivities to interannual temperature variability and their response to long-term warming, suggesting that additional processes differentiate climate change impacts from observed climate variability analogs and motivating continuing analysis and model development efforts.

  7. ABOUT PSYCHOLOGICAL VARIABLES IN APPLICATION SCORING MODELS

    Directory of Open Access Journals (Sweden)

    Pablo Rogers

    2015-01-01

    Full Text Available The purpose of this study is to investigate the contribution of psychological variables and scales suggested by Economic Psychology in predicting individuals' default. Therefore, a sample of 555 individuals completed a self-completion questionnaire, which was composed of psychological variables and scales. By adopting the methodology of logistic regression, the following psychological and behavioral characteristics were found to be associated with the group of individuals in default: a) negative dimensions related to money (suffering, inequality and conflict); b) high scores on the self-efficacy scale, probably indicating a greater degree of optimism and over-confidence; c) buyers classified as compulsive; d) individuals who consider it necessary to give gifts to children and friends on special dates, even though many people consider this a luxury; e) problems of self-control identified by individuals who drink an average of more than four glasses of alcoholic beverage a day.
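    A minimal sketch of the modelling methodology named above, logistic regression of a default indicator on scale scores. The data and feature meanings below are synthetic placeholders, not the study's variables.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Synthetic stand-in: three psychological scale scores predicting default.
rng = np.random.default_rng(1)
n = 555
X = rng.normal(size=(n, 3))   # e.g. money-attitude, self-efficacy, compulsive-buying scores
logit = -1.0 + 0.8 * X[:, 0] + 0.6 * X[:, 1] + 0.5 * X[:, 2]
y = rng.binomial(1, 1 / (1 + np.exp(-logit)))   # 1 = in default

model = LogisticRegression().fit(X, y)
print("coefficients:", model.coef_.round(2), "intercept:", model.intercept_.round(2))
```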

  8. A Sequence of Relaxations Constraining Hidden Variable Models

    CERN Document Server

    Steeg, Greg Ver

    2011-01-01

    Many widely studied graphical models with latent variables lead to nontrivial constraints on the distribution of the observed variables. Inspired by the Bell inequalities in quantum mechanics, we refer to any linear inequality whose violation rules out some latent variable model as a "hidden variable test" for that model. Our main contribution is to introduce a sequence of relaxations which provides progressively tighter hidden variable tests. We demonstrate applicability to mixtures of sequences of i.i.d. variables, Bell inequalities, and homophily models in social networks. For the last, we demonstrate that our method provides a test that is able to rule out latent homophily as the sole explanation for correlations on a real social network that are known to be due to influence.

  9. Usability Evaluation of Variability Modeling by means of Common Variability Language

    Directory of Open Access Journals (Sweden)

    Jorge Echeverria

    2015-12-01

    Full Text Available Common Variability Language (CVL) is a recent proposal for OMG's upcoming Variability Modeling standard. CVL models variability in terms of Model Fragments. Usability is a widely-recognized quality criterion essential to guarantee the successful use of tools that put these ideas in practice. Facing the need to evaluate the usability of CVL modeling tools, this paper presents a Usability Evaluation of CVL applied to a Modeling Tool for firmware code of Induction Hobs. This evaluation addresses the configuration, scoping and visualization facets. The evaluation involved the end users of the tool, who are engineers of our Induction Hob industrial partner. Effectiveness and efficiency results indicate that model configuration in terms of model fragment substitutions is intuitive enough, but both scoping and visualization require improved tool support. Results also enabled us to identify a list of usability problems which may contribute to alleviate scoping and visualization issues in CVL.

  10. Basic relations for the period variation models of variable stars

    OpenAIRE

    Mikulášek, Zdeněk; Gráf, Tomáš; Zejda, Miloslav; Zhu, Liying; Qian, Shen-Bang

    2012-01-01

    Models of period variations are basic tools for period analyses of variable stars. We introduce the phase function and instant period and formulate basic relations and equations among them. Some simple period models are also presented.
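    For orientation, the quantities named in the abstract are commonly defined as follows. This is a sketch of the standard relations; the notation is assumed, not quoted from the paper.

```latex
% Phase function \vartheta(t) counts elapsed cycles since a reference epoch M_0;
% the instantaneous period is the reciprocal of its rate of change. For a
% constant period P the phase function reduces to a straight line in t.
\vartheta(t) = \int_{M_0}^{t} \frac{\mathrm{d}\tau}{P(\tau)}, \qquad
P(t) = \left(\frac{\mathrm{d}\vartheta}{\mathrm{d}t}\right)^{-1}, \qquad
\vartheta(t) = \frac{t - M_0}{P} \;\;\text{for constant } P .
```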

  11. Fixed transaction costs and modelling limited dependent variables

    NARCIS (Netherlands)

    Hempenius, A.L.

    1994-01-01

    As an alternative to the Tobit model for vectors of limited dependent variables, I suggest a model which follows from explicitly using fixed costs (where appropriate) in the utility function of the decision-maker.

  12. Methods for Handling Missing Variables in Risk Prediction Models

    NARCIS (Netherlands)

    Held, Ulrike; Kessels, Alfons; Aymerich, Judith Garcia; Basagana, Xavier; ter Riet, Gerben; Moons, Karel G. M.; Puhan, Milo A.

    2016-01-01

    Prediction models should be externally validated before being used in clinical practice. Many published prediction models have never been validated. Uncollected predictor variables in otherwise suitable validation cohorts are the main factor precluding external validation. We used individual patient...

  13. The Properties of Model Selection when Retaining Theory Variables

    DEFF Research Database (Denmark)

    Hendry, David F.; Johansen, Søren

    Economic theories are often fitted directly to data to avoid possible model selection biases. We show that embedding a theory model that specifies the correct set of m relevant exogenous variables, x_t, within the larger set of m+k candidate variables, (x_t, w_t), then selection over the second...

  14. Coevolution of variability models and related software artifacts

    DEFF Research Database (Denmark)

    Passos, Leonardo; Teixeira, Leopoldo; Dinztner, Nicolas

    2015-01-01

    Attempting to mitigate this overall lack of knowledge and to support tool builders with insights on how variability models coevolve with other artifact types, we study a large and complex real-world variant-rich software system: the Linux kernel. Specifically, we extract variability-coevolution patterns capturing changes in the variability model of the Linux kernel with subsequent changes in Makefiles and C source...

  15. Modeling first impressions from highly variable facial images.

    Science.gov (United States)

    Vernon, Richard J W; Sutherland, Clare A M; Young, Andrew W; Hartley, Tom

    2014-08-12

    First impressions of social traits, such as trustworthiness or dominance, are reliably perceived in faces, and despite their questionable validity they can have considerable real-world consequences. We sought to uncover the information driving such judgments, using an attribute-based approach. Attributes (physical facial features) were objectively measured from feature positions and colors in a database of highly variable "ambient" face photographs, and then used as input for a neural network to model factor dimensions (approachability, youthful-attractiveness, and dominance) thought to underlie social attributions. A linear model based on this approach was able to account for 58% of the variance in raters' impressions of previously unseen faces, and factor-attribute correlations could be used to rank attributes by their importance to each factor. Reversing this process, neural networks were then used to predict facial attributes and corresponding image properties from specific combinations of factor scores. In this way, the factors driving social trait impressions could be visualized as a series of computer-generated cartoon face-like images, depicting how attributes change along each dimension. This study shows that despite enormous variation in ambient images of faces, a substantial proportion of the variance in first impressions can be accounted for through linear changes in objectively defined features.
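    A toy version of the attribute-based linear modelling step reads as follows. The data are synthetic; the original work measured real facial attributes and also used neural networks, which this sketch omits.

```python
import numpy as np
from sklearn.linear_model import LinearRegression

# Linear model mapping objectively measured facial attributes to a
# social-trait factor score. Attribute values here are synthetic.
rng = np.random.default_rng(2)
n_faces, n_attributes = 1000, 65          # e.g. feature positions and colours
X = rng.normal(size=(n_faces, n_attributes))
true_w = rng.normal(size=n_attributes)
approachability = X @ true_w + rng.normal(scale=2.0, size=n_faces)

model = LinearRegression().fit(X, approachability)
r2 = model.score(X, approachability)
# Rank attributes by the magnitude of their coefficients, as in the
# factor-attribute ranking described in the abstract.
ranking = np.argsort(-np.abs(model.coef_))
print(f"R^2 = {r2:.2f}; top attributes: {ranking[:5]}")
```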

  16. End-to-end Information Flow Security Model for Software-Defined Networks

    Directory of Open Access Journals (Sweden)

    D. Ju. Chaly

    2015-01-01

    Full Text Available Software-defined networks (SDN) are a novel paradigm of networking which became an enabler technology for many modern applications such as network virtualization, policy-based access control and many others. Software can provide flexibility and fast-paced innovation in networking; however, it has a complex nature. Hence there is an increasing need for means of assuring its correctness and security. Abstract models for SDN can tackle these challenges. This paper addresses confidentiality and some integrity properties of SDNs. These are critical properties for multi-tenant SDN environments, since the network management software must ensure that no confidential data of one tenant are leaked to other tenants in spite of using the same physical infrastructure. We define a notion of end-to-end security in the context of software-defined networks and propose a semantic model where reasoning is possible about confidentiality, so that we can check that confidential information flows do not interfere with non-confidential ones. We show that the model can be extended in order to reason about networks with secure and insecure links, which can arise, for example, in wireless environments. The article is published in the authors' wording.

  17. Inverse limits and statistical properties for chaotic implicitly defined economic models

    CERN Document Server

    Mihailescu, Eugen

    2011-01-01

    In this paper we study the dynamics and ergodic theory of certain economic models which are implicitly defined. We consider 1-dimensional and 2-dimensional overlapping generations models, a cash-in-advance model, heterogeneous markets and a cobweb model with adaptive adjustment. We consider the inverse limit spaces of certain chaotic invariant fractal sets and their metric, ergodic and stability properties. The inverse limits give the set of intertemporal perfect foresight equilibria for the economic problem considered. First we show that the inverse limits of these models are stable under perturbations. We prove that the inverse limits are expansive and have the specification property. We then employ utility functions on inverse limits in our case. We give two ways to rank such utility functions. First, when perturbing certain dynamical systems, we rank utility functions in terms of their average values with respect to invariant probability measures on inverse limits, especially with respect to measures...

  18. Quantum Discord, CHSH Inequality and Hidden Variables -- Critical reassessment of hidden-variables models

    CERN Document Server

    Fujikawa, Kazuo

    2013-01-01

    Hidden-variables models are critically reassessed. It is first examined if the quantum discord is classically described by the hidden-variable model of Bell in the Hilbert space with $d=2$. The criterion of vanishing quantum discord is related to the notion of reduction and, surprisingly, the hidden-variable model in $d=2$, which has been believed to be consistent so far, is in fact inconsistent and excluded by the analysis of conditional measurement and reduction. The description of the full contents of quantum discord by the deterministic hidden-variables models is not possible. We also re-examine CHSH inequality. It is shown that the well-known prediction of CHSH inequality $|B|\\leq 2$ for the CHSH operator $B$ introduced by Cirel'son is not unique. This non-uniqueness arises from the failure of linearity condition in the non-contextual hidden-variables model in $d=4$ used by Bell and CHSH, in agreement with Gleason's theorem which excludes $d=4$ non-contextual hidden-variables models. If one imposes the l...

  19. A Spline Regression Model for Latent Variables

    Science.gov (United States)

    Harring, Jeffrey R.

    2014-01-01

    Spline (or piecewise) regression models have been used in the past to account for patterns in observed data that exhibit distinct phases. The changepoint or knot marking the shift from one phase to the other, in many applications, is an unknown parameter to be estimated. As an extension of this framework, this research considers modeling the…
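    A common way to estimate the unknown knot in such piecewise models is to profile it over a grid, refitting the linear part at each candidate. Below is a minimal sketch on synthetic data; it is not the article's latent variable estimator, which embeds this idea in a measurement model.

```python
import numpy as np

# Piecewise (spline) regression with one unknown knot, estimated by
# grid search over candidate knot locations. Synthetic data.
rng = np.random.default_rng(7)
x = np.sort(rng.uniform(0, 10, 200))
knot_true = 4.0
y = 1.0 + 0.5 * x + 1.5 * np.maximum(x - knot_true, 0) + rng.normal(0, 0.5, x.size)

def sse_for_knot(k):
    # Design matrix: intercept, linear term, and hinge term for the knot k.
    X = np.column_stack([np.ones_like(x), x, np.maximum(x - k, 0)])
    beta, res, *_ = np.linalg.lstsq(X, y, rcond=None)
    return ((y - X @ beta) ** 2).sum()

grid = np.linspace(1, 9, 161)
best = grid[np.argmin([sse_for_knot(k) for k in grid])]
print(f"estimated knot: {best:.2f} (true {knot_true})")
```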

  20. Modelling crowd-bridge dynamic interaction with a discretely defined crowd

    Science.gov (United States)

    Carroll, S. P.; Owen, J. S.; Hussein, M. F. M.

    2012-05-01

    This paper presents a novel method of modelling crowd-bridge interaction using discrete element theory (DET) to model the pedestrian crowd. DET, also known as agent-based modelling, is commonly used in the simulation of pedestrian movement, particularly in cases where building evacuation is critical or potentially problematic. Pedestrians are modelled as individual elements subject to global behavioural rules. In this paper a discrete element crowd model is coupled with a dynamic bridge model in a time-stepping framework. Feedback takes place between both models at each time-step. An additional pedestrian stimulus is introduced that is a function of bridge lateral dynamic behaviour. The pedestrians' relationship with the vibrating bridge as well as the pedestrians around them is thus simulated. The lateral dynamic behaviour of the bridge is modelled as a damped single degree of freedom (SDoF) oscillator. The excitation and mass enhancement of the dynamic system is determined as the sum of individual pedestrian contributions at each time-step. Previous crowd-structure interaction modelling has utilised a continuous hydrodynamic crowd model. Limitations inherent in this modelling approach are identified and results presented that demonstrate the ability of DET to address these limitations. Simulation results demonstrate the model's ability to consider low density traffic flows and inter-subject variability. The emergence of the crowd's velocity-density relationship is also discussed.
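    The coupling scheme described, discrete pedestrians forcing a damped SDoF bridge mode with feedback at each time-step, can be sketched as below. The numbers are illustrative, and the pedestrian model here is deliberately simpler than a full DET crowd: it captures inter-subject variability in pacing, but omits behavioural rules and adaptation to bridge motion.

```python
import numpy as np

# Damped SDoF lateral bridge mode driven by the summed forces of discrete
# pedestrians; semi-implicit Euler time-stepping. Illustrative values only.
rng = np.random.default_rng(3)
m_b, c_b, k_b = 1.0e5, 5.0e3, 4.0e6      # modal mass (kg), damping, stiffness
n_ped, m_p, f0 = 50, 75.0, 300.0          # pedestrians, mass (kg), force amplitude (N)
freqs = rng.normal(0.9, 0.1, n_ped)       # lateral pacing frequencies (Hz): inter-subject variability
phases = rng.uniform(0, 2 * np.pi, n_ped)

dt, n_steps = 0.005, 20000
x = v = 0.0
for i in range(n_steps):
    t = i * dt
    # Summed pedestrian lateral forces; a fuller DET model would also let
    # each pedestrian's phase adapt to the bridge response x.
    force = np.sum(f0 * np.sin(2 * np.pi * freqs * t + phases))
    m_tot = m_b + n_ped * m_p             # crowd mass enhances the dynamic system
    a = (force - c_b * v - k_b * x) / m_tot
    v += a * dt
    x += v * dt
print(f"lateral displacement at end: {x * 1000:.2f} mm")
```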

  1. Psychosocial and demographic variables associated with consumer intention to purchase sustainably produced foods as defined by the Midwest Food Alliance.

    Science.gov (United States)

    Robinson, Ramona; Smith, Chery

    2002-01-01

    To examine psychosocial and demographic variables associated with consumer intention to purchase sustainably produced foods using an expanded Theory of Planned Behavior. Consumers were approached at the store entrance and asked to complete a self-administered survey. Three metropolitan Minnesota grocery stores. Participants (n = 550) were adults who shopped at the store: the majority were white, female, and highly educated and earned ≥ 50,000 dollars/year. Participation rates averaged 62%. The major domain investigated was consumer support for sustainably produced foods. Demographics, beliefs, attitudes, subjective norm, and self-identity and perceived behavioral control were evaluated as predictors of intention to purchase them. Descriptive statistics, independent t tests, one-way analysis of variance, Pearson product moment correlation coefficients, and stepwise multiple regression analyses (P < …) were used. Consumers were supportive of sustainably produced foods but not highly confident in their ability to purchase them. Independent predictors of intention to purchase them included attitudes, beliefs, perceived behavioral control, subjective norm, past buying behavior, and marital status. Beliefs, attitudes, and confidence level may influence intention to purchase sustainably produced foods. Nutrition educators could increase consumers' awareness of sustainably produced foods by understanding their beliefs, attitudes, and confidence levels.

  2. Variability of Delirium Motor Subtype Scale-Defined Delirium Motor Subtypes in Elderly Adults with Hip Fracture: A Longitudinal Study.

    Science.gov (United States)

    Scholtens, Rikie M; van Munster, Barbara C; Adamis, Dimitrios; de Jonghe, Annemarieke; Meagher, David J; de Rooij, Sophia E J A

    2017-02-01

    To examine changes in motor subtype profile in individuals with delirium. Observational, longitudinal study; substudy of a multicenter, randomized controlled trial. Departments of surgery and orthopedics, Academic Medical Center and Tergooi Hospital, the Netherlands. Elderly adults acutely admitted for hip fracture surgery who developed delirium according to the Diagnostic and Statistical Manual of Mental Disorders, Fourth Edition, for 2 days or longer (n = 76, aged 86.4 ± 6.1, 68.4% female). Delirium Motor Subtype Scale (DMSS), Delirium Rating Scale R98 (DRS-R98), comorbidity, and function. Median delirium duration was 3 days (interquartile range 2.0 days). At first assessment, the hyperactive motor subtype was most common (44.7%), followed by hypoactive motor subtype (28.9%), mixed motor subtype (19.7%), and no motor subtype (6.6%). Participants with no motor subtype had lower DRS-R98 scores than those with the other subtypes (P < …); delirium duration or severity were not associated with change in motor subtype. Motor subtype profile was variable in the majority of participants, although changes that occurred were often related to changes from or to no motor subtype, suggesting evolving or resolving delirium. Changes appeared not to be associated with demographic or clinical characteristics, suggesting that evidence from cross-sectional studies of motor subtypes could be applied to many individuals with delirium. Further longitudinal studies should be performed to clarify the stability of motor subtypes in different clinical populations.

  3. Linear latent variable models: the lava-package

    DEFF Research Database (Denmark)

    Holst, Klaus Kähler; Budtz-Jørgensen, Esben

    2013-01-01

    An R package for specifying and estimating linear latent variable models is presented. The philosophy of the implementation is to separate the model specification from the actual data, which leads to a dynamic and easy way of modeling complex hierarchical structures. Several advanced features are implemented, including robust standard errors for clustered correlated data, multigroup analyses, non-linear parameter constraints, inference with incomplete data, maximum likelihood estimation with censored and binary observations, and instrumental variable estimators. In addition, an extensive simulation interface covering a broad range of non-linear generalized structural equation models is described.

  4. Modeling, estimation and identification of stochastic systems with latent variables

    OpenAIRE

    Bottegal, Giulio

    2013-01-01

    The main topic of this thesis is the analysis of static and dynamic models in which some variables, although directly influencing the behavior of certain observables, are not accessible to measurements. These models find applications in many branches of science and engineering, such as control systems, communications, natural and biological sciences and econometrics. It is well known that models with inaccessible - or latent - variables usually suffer from a lack of uniqueness of representat...

  5. Reduced models of extratropical low-frequency variability

    Science.gov (United States)

    Strounine, Kirill

    Low-frequency variability (LFV) of the atmosphere refers to its behavior on time scales of 10-100 days, longer than the life cycle of a mid-latitude cyclone but shorter than a season. This behavior is still poorly understood and hard to predict. Using various simplified models has been helpful in gaining understanding that might improve prediction. The present study compares and contrasts various mode reduction strategies that help derive such simplified models of LFV systematically. Three major strategies have been applied to reduce a fairly realistic, high-dimensional, quasi-geostrophic, 3-level (QG3) atmospheric model to lower dimensions: (i) a purely empirical, multi-level regression procedure, which specifies the functional form of the reduced model and finds the model coefficients by multiple polynomial regression; (ii) an empirical-dynamical method, which retains only a few components in the projection of the full QG3 model equations onto a specified basis (the so-called bare truncation), and finds the linear deterministic and additive stochastic corrections empirically; and (iii) a dynamics-based technique, employing the stochastic mode reduction strategy of Majda et al. (2001; MTV). Subject to the assumption of significant time-scale separation in the physical system under consideration, MTV derives the form of the reduced model and finds its coefficients with minimal statistical fitting. The empirical-dynamical and dynamical reduced models were further improved by sequential parameter estimation and benchmarked against multi-level regression models; the extended Kalman filter (EKF) was used for the parameter estimation. In constructing the reduced models, the choice of basis functions is also important. We considered as basis functions a set of empirical orthogonal functions (EOFs). These EOFs were computed using (a) an energy norm; and (b) a potential-enstrophy norm. We also devised a method, using singular value decomposition of the full-model...

  6. Defining epidemics in computer simulation models: How do definitions influence conclusions?

    Directory of Open Access Journals (Sweden)

    Carolyn Orbann

    2017-06-01

    Full Text Available Computer models have proven to be useful tools in studying epidemic disease in human populations. Such models are being used by a broader base of researchers, and it has become more important to ensure that descriptions of model construction and data analyses are clear and communicate important features of model structure. Papers describing computer models of infectious disease often lack a clear description of how the data are aggregated and whether or not non-epidemic runs are excluded from analyses. Given that there is no concrete quantitative definition of what constitutes an epidemic within the public health literature, each modeler must decide on a strategy for identifying epidemics during simulation runs. Here, an SEIR model was used to test the effects of how varying the cutoff for considering a run an epidemic changes potential interpretations of simulation outcomes. Varying the cutoff from 0% to 15% of the model population ever infected with the illness generated significant differences in numbers of dead and timing variables. These results are important for those who use models to form public health policy, in which questions of timing or implementation of interventions might be answered using findings from computer simulation models.
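    The experiment the paper describes can be reproduced in miniature: run a stochastic SEIR model repeatedly and vary the cutoff used to call a run an epidemic. The sketch below uses a chain-binomial SEIR with illustrative parameters, not the paper's model.

```python
import numpy as np

# Stochastic (chain-binomial) SEIR model; count how many runs qualify as
# "epidemics" under different cutoffs on the fraction ever infected.
rng = np.random.default_rng(4)
N, beta, sigma, gamma = 1000, 0.4, 1 / 3, 1 / 5   # population, transmission, 1/latency, 1/infectious period

def run_seir():
    s, e, i, r = N - 1, 1, 0, 0
    while e + i > 0:
        p_inf = 1 - np.exp(-beta * i / N)          # daily infection probability
        new_e = rng.binomial(s, p_inf)
        new_i = rng.binomial(e, 1 - np.exp(-sigma))
        new_r = rng.binomial(i, 1 - np.exp(-gamma))
        s, e = s - new_e, e + new_e - new_i
        i, r = i + new_i - new_r, r + new_r
    return r / N                                    # fraction ever infected

attack = np.array([run_seir() for _ in range(500)])
for cutoff in (0.0, 0.05, 0.10, 0.15):
    print(f"cutoff {cutoff:>4.0%}: {np.mean(attack > cutoff):.0%} of runs counted as epidemics")
```

    Because many stochastic runs fizzle out after a handful of cases, the share of runs retained, and hence any statistic averaged over "epidemics", shifts with the cutoff, which is the sensitivity the paper documents.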

  7. REDUCING PROCESS VARIABILITY BY USING DMAIC MODEL: A CASE STUDY IN BANGLADESH

    OpenAIRE

    Ripon Kumar Chakrabortty; Tarun Kumar Biswas; Iraj Ahmed

    2013-01-01

    Nowadays, many leading manufacturing industries have started to practice Six Sigma and Lean manufacturing concepts to boost their productivity as well as the quality of their products. In this paper, the Six Sigma approach has been used to reduce process variability in a food processing industry in Bangladesh. The DMAIC (Define, Measure, Analyze, Improve, and Control) model has been used to implement the Six Sigma philosophy, and the five phases of the model have been structured step by step. Differen...

  8. Effect of Flux Adjustments on Temperature Variability in Climate Models

    Energy Technology Data Exchange (ETDEWEB)

    Duffy, P.; Bell, J.; Covey, C.; Sloan, L.

    1999-12-27

    It has been suggested that "flux adjustments" in climate models suppress simulated temperature variability. If true, this might invalidate the conclusion that at least some of the observed temperature increases since 1860 are anthropogenic, since this conclusion is based in part on estimates of natural temperature variability derived from flux-adjusted models. We assess variability of surface air temperatures in 17 simulations of internal temperature variability submitted to the Coupled Model Intercomparison Project. By comparing variability in flux-adjusted vs. non-flux-adjusted simulations, we find no evidence that flux adjustments suppress temperature variability in climate models; other, largely unknown, factors are much more important in determining simulated temperature variability. Therefore the conclusion that at least some of observed temperature increases are anthropogenic cannot be questioned on the grounds that it is based in part on results of flux-adjusted models. Also, reducing or eliminating flux adjustments would probably do little to improve simulations of temperature variability.

  9. Defining pharmacy and its practice: a conceptual model for an international audience

    Directory of Open Access Journals (Sweden)

    Scahill SL

    2017-05-01

    Full Text Available Background: There is much fragmentation and little consensus in the use of descriptors for the different disciplines that make up the pharmacy sector. Globalization, reprofessionalization and the influx of other disciplines mean there is a requirement for a greater degree of standardization. This has not been well addressed in the pharmacy practice research and education literature. Objectives: To identify and define the various subdisciplines of the pharmacy sector and integrate them into an internationally relevant conceptual model based on narrative synthesis of the literature. Methods: A literature review was undertaken to understand the fragmentation in dialogue surrounding definitions relating to concepts and practices in the context of the pharmacy sector. From a synthesis of this literature, the need for this model was justified. Key assumptions of the model were identified, and an organic process of development took place with the three authors engaging in a process of sense-making to theorize the model. Results: The model is "fit for purpose" across multiple countries and includes two components making up the umbrella term "pharmaceutical practice". The first component is the four conceptual dimensions, which outline the disciplines including social and administrative sciences, community pharmacy, clinical pharmacy and pharmaceutical sciences. The second component of the model describes the "acts of practice": teaching, research and professional advocacy; service and academic enterprise. Conclusions: This model aims to expose issues...

  10. A Polynomial Term Structure Model with Macroeconomic Variables

    Directory of Open Access Journals (Sweden)

    José Valentim Vicente

    2007-06-01

    Full Text Available Recently, a myriad of factor models including macroeconomic variables have been proposed to analyze the yield curve. We present an alternative factor model where term structure movements are captured by Legendre polynomials mimicking the statistical factor movements identified by Litterman and Scheinkman (1991). We estimate the model with Brazilian Foreign Exchange Coupon data, adopting a Kalman filter, under two versions: the first uses only latent factors and the second includes macroeconomic variables. We study its ability to predict out-of-sample term structure movements when compared to a random walk. We also discuss results on the impulse response function of macroeconomic variables.
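    To make the factor structure concrete, the sketch below builds Legendre-polynomial loadings in scaled maturity, whose first three terms mimic level, slope and curvature. The Kalman filter estimation step is omitted, and all maturities and factor values are illustrative.

```python
import numpy as np
from numpy.polynomial import legendre

# Yields decomposed on Legendre polynomials in scaled maturity.
taus = np.linspace(0.25, 5.0, 20)          # maturities in years
x = 2 * taus / taus.max() - 1              # map maturities to [-1, 1]
# Columns: P0 (level), P1 (slope), P2 (curvature) evaluated at each maturity.
L = np.column_stack([legendre.legval(x, np.eye(3)[k]) for k in range(3)])

factors = np.array([0.10, -0.02, 0.01])    # hypothetical latent factor values
yields = L @ factors                        # model-implied yield curve
print(np.round(yields[:5], 4))
```

    In the full model these factors would follow a state-space dynamic and be filtered from observed yields; the loading matrix L is the part the Legendre construction fixes in advance.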

  11. Modelling and forecasting electricity price variability

    Energy Technology Data Exchange (ETDEWEB)

    Haugom, Erik

    2012-07-01

    The liberalization of electricity sectors around the world has induced a need for financial electricity markets. This thesis is mainly focused on calculating, modelling, and predicting volatility for financial electricity prices. The first four essays examine the liberalized Nordic electricity market. The purposes in these papers are to describe some stylized properties of high-frequency financial electricity data and to apply models that can explain and predict variation in volatility. The fifth essay examines how information from high-frequency electricity forward contracts can be used in order to improve electricity spot-price volatility predictions. This essay uses data from the Pennsylvania-New Jersey-Maryland wholesale electricity market in the U.S.A. Essay 1 describes some stylized properties of financial high-frequency electricity prices, their returns and volatilities at the Nordic electricity exchange, Nord Pool. The analyses focus on distribution properties, serial correlation, volatility clustering, the influence of extreme events and seasonality in the various measures. The objective of Essay 2 is to calculate, model, and predict realized volatility of financial electricity prices for quarterly and yearly contracts. The total variation is also separated into continuous and jump variation. Various market measures are also included in the models in order potentially to improve volatility predictions. Essay 3 compares day-ahead predictions of Nord Pool financial electricity price volatility obtained from a GARCH approach with those obtained using standard time-series techniques on realized volatility. The performances of a total of eight models (two representing the GARCH family and six representing standard autoregressive models) are compared and evaluated. Essay 4 examines whether predictions of day-ahead and week-ahead volatility can be improved by additionally including volatility and covariance effects from related financial electricity contracts.
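    The realized-volatility building block used throughout these essays can be sketched as follows, on synthetic intraday returns. The bipower-variation split into continuous and jump components follows the standard construction, not necessarily the thesis' exact estimators.

```python
import numpy as np

# Daily realized variance as the sum of squared intraday returns, with a
# bipower measure to separate continuous and jump variation. Synthetic data.
rng = np.random.default_rng(5)
n_days, n_intraday = 250, 48
r = rng.normal(0, 0.01, size=(n_days, n_intraday))     # intraday returns

rv = np.sum(r**2, axis=1)                               # realized variance
# Bipower variation: mu1^-2 * sum |r_t||r_{t-1}|, with mu1 = sqrt(2/pi).
bpv = (np.pi / 2) * np.sum(np.abs(r[:, 1:]) * np.abs(r[:, :-1]), axis=1)
jump = np.maximum(rv - bpv, 0.0)                        # crude jump component
print(f"mean RV: {rv.mean():.6f}, mean jump share: {(jump / rv).mean():.2%}")
```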

  12. Coevolution of variability models and related software artifacts

    DEFF Research Database (Denmark)

    Passos, Leonardo; Teixeira, Leopoldo; Dinztner, Nicolas;

    2015-01-01

    Variant-rich software systems offer a large degree of customization, allowing users to configure the target system according to their preferences and needs. Facing high degrees of variability, these systems often employ variability models to explicitly capture user-configurable features (e... ...to the evolution of different kinds of software artifacts, it is not surprising that industry reports existing tools and solutions ineffective, as they do not handle the complexity found in practice. Attempting to mitigate this overall lack of knowledge and to support tool builders with insights on how variability models coevolve with other artifact types, we study a large and complex real-world variant-rich software system: the Linux kernel. Specifically, we extract variability-coevolution patterns capturing changes in the variability model of the Linux kernel with subsequent changes in Makefiles and C source...

  13. Modeling Psychological Attributes in Psychology - An Epistemological Discussion: Network Analysis vs. Latent Variables.

    Science.gov (United States)

    Guyon, Hervé; Falissard, Bruno; Kop, Jean-Luc

    2017-01-01

    Network Analysis is considered a new method that challenges Latent Variable models in inferring psychological attributes. With Network Analysis, psychological attributes are derived from a complex system of components without the need to call on any latent variables. But the ontological status of psychological attributes is not adequately defined with Network Analysis, because a psychological attribute is both a complex system and a property emerging from this complex system. The aim of this article is to reappraise the legitimacy of latent variable models by engaging in an ontological and epistemological discussion on psychological attributes. Psychological attributes relate to the mental equilibrium of individuals embedded in their social interactions, as robust attractors within complex dynamic processes with emergent properties, distinct from physical entities located in precise areas of the brain. Latent variables thus possess legitimacy, because the emergent properties can be conceptualized and analyzed on the sole basis of their manifestations, without exploring the upstream complex system. However, in opposition to the usual Latent Variable models, this article is in favor of the integration of a dynamic system of manifestations. Latent Variable models and Network Analysis thus appear as complementary approaches. New approaches combining Latent Network Models and Network Residuals are certainly a promising new way to infer psychological attributes, placing psychological attributes in an inter-subjective dynamic approach. Pragmatism-realism appears as the epistemological framework required if we are to use latent variables as representations of psychological attributes.

  14. Fractional Langevin model of gait variability

    Directory of Open Access Journals (Sweden)

    Latka Miroslaw

    2005-08-01

    Full Text Available The stride interval in healthy human gait fluctuates from step to step in a random manner and scaling of the interstride interval time series motivated previous investigators to conclude that this time series is fractal. Early studies suggested that gait is a monofractal process, but more recent work indicates the time series is weakly multifractal. Herein we present additional evidence for the weakly multifractal nature of gait. We use the stride interval time series obtained from ten healthy adults walking at a normal relaxed pace for approximately fifteen minutes each as our data set. A fractional Langevin equation is constructed to model the underlying motor control system in which the order of the fractional derivative is itself a stochastic quantity. Using this model we find the fractal dimension for each of the ten data sets to be in agreement with earlier analyses. However, with the present model we are able to draw additional conclusions regarding the nature of the control system guiding walking. The analysis presented herein suggests that the observed scaling in interstride interval data may not be due to long-term memory alone, but may, in fact, be due partly to the statistics.

  15. Proneurogenic Ligands Defined by Modeling Developing Cortex Growth Factor Communication Networks.

    Science.gov (United States)

    Yuzwa, Scott A; Yang, Guang; Borrett, Michael J; Clarke, Geoff; Cancino, Gonzalo I; Zahr, Siraj K; Zandstra, Peter W; Kaplan, David R; Miller, Freda D

    2016-09-01

    The neural stem cell decision to self-renew or differentiate is tightly regulated by its microenvironment. Here, we have asked about this microenvironment, focusing on growth factors in the embryonic cortex at a time when it is largely comprised of neural precursor cells (NPCs) and newborn neurons. We show that cortical NPCs secrete factors that promote their maintenance, while cortical neurons secrete factors that promote differentiation. To define factors important for these activities, we used transcriptome profiling to identify ligands produced by NPCs and neurons, cell-surface mass spectrometry to identify receptors on these cells, and computational modeling to integrate these data. The resultant model predicts a complex growth factor environment with multiple autocrine and paracrine interactions. We tested this communication model, focusing on neurogenesis, and identified IFNγ, Neurturin (Nrtn), and glial-derived neurotrophic factor (GDNF) as ligands with unexpected roles in promoting neurogenic differentiation of NPCs in vivo.

  16. EFFICIENT ESTIMATION OF FUNCTIONAL-COEFFICIENT REGRESSION MODELS WITH DIFFERENT SMOOTHING VARIABLES

    Institute of Scientific and Technical Information of China (English)

    Zhang Riquan; Li Guoying

    2008-01-01

    In this article, a procedure is defined for estimating the coefficient functions in functional-coefficient regression models with different smoothing variables in different coefficient functions. In the first step, initial estimates of the coefficient functions are obtained by the local linear technique and the averaging method. In the second step, based on the initial estimates, efficient estimates of the coefficient functions are proposed by a one-step back-fitting procedure. The efficient estimators share the same asymptotic normality as the local linear estimators for functional-coefficient models with a single smoothing variable in different functions. Two simulated examples show that the procedure is effective.

  17. Modeling and design of energy efficient variable stiffness actuators

    NARCIS (Netherlands)

    Visser, L.C.; Carloni, Raffaella; Ünal, Ramazan; Stramigioli, Stefano

    In this paper, we provide a port-based mathematical framework for analyzing and modeling variable stiffness actuators. The framework provides important insights into the energy requirements and, therefore, it is an important tool for the design of energy efficient variable stiffness actuators. Based...

  18. A model for variability design rationale in SPL

    NARCIS (Netherlands)

    Galvao, I.; van den Broek, P.M.; Aksit, Mehmet

    2010-01-01

    The management of variability in software product lines goes beyond the definition of variations, traceability and configurations. It involves a lot of assumptions about the variability and related models, which are made by the stakeholders all over the product line but almost never handled explicitly...

  19. Variable Selection in the Partially Linear Errors-in-Variables Models for Longitudinal Data

    Institute of Scientific and Technical Information of China (English)

    Yi-ping YANG; Liu-gen XUE; Wei-hu CHENG

    2012-01-01

    This paper proposes a new approach for variable selection in partially linear errors-in-variables (EV) models for longitudinal data by penalizing appropriate estimating functions. We apply the SCAD penalty to simultaneously select significant variables and estimate unknown parameters. The rate of convergence and the asymptotic normality of the resulting estimators are established. Furthermore, with proper choice of regularization parameters, we show that the proposed estimators perform as well as the oracle procedure. A new algorithm is proposed for solving the penalized estimating equation. The asymptotic results are augmented by a simulation study.

  20. Indeterminate values of target variable in development of credit scoring models

    Directory of Open Access Journals (Sweden)

    Martin Řezáč

    2013-01-01

    Full Text Available In the beginning of every modelling procedure, the first question to ask is what we are trying to predict by the model. In credit scoring the most frequent case is modelling the probability of default; however, other situations, such as fraud, revolving of the credit or success of collections, could be predicted as well. Nevertheless, the first step is always to define the target variable. The target variable is generally an 'output' of the model. It contains the information in the available data that we want to predict in future data. In credit scoring it is commonly called the good/bad definition. In this paper we study the effect of using indeterminate values of the target variable in the development of credit scoring models. We explain the basic principles of logistic regression modelling and selection of the target variable. Next, the focus is given to the introduction of some of the widely used statistics for model assessment. The main part of the paper is devoted to the development and assessment of 27 credit scoring models on real credit data, which are built up and assessed according to various definitions of the target variable. We show that there is a valid reason for some target definitions to include the indeterminate value in the modelling process, as it provided us with convincing results.
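    A minimal sketch of a good/bad/indeterminate target definition of the kind discussed; the days-past-due thresholds are illustrative, not the paper's.

```python
import pandas as pd

# Label accounts by worst delinquency (days past due, DPD), leaving a middle
# band indeterminate, then choose how to treat that band before model fitting.
df = pd.DataFrame({"max_dpd": [0, 15, 45, 95, 120, 5, 70]})

def target(dpd, bad_from=90, good_upto=30):
    if dpd >= bad_from:
        return "bad"
    if dpd <= good_upto:
        return "good"
    return "indeterminate"

df["target"] = df["max_dpd"].apply(target)
# Variant 1: exclude indeterminates from training.
train_excl = df[df["target"] != "indeterminate"]
# Variant 2: include them, e.g. folded into the good class (one of several
# definitions a study like this would compare).
df["target_incl"] = df["target"].replace({"indeterminate": "good"})
print(df)
```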

  1. Defining environmental flows requirements at regional scale by using meso-scale habitat models and catchments classification

    Science.gov (United States)

    Vezza, Paolo; Comoglio, Claudio; Rosso, Maurizio

    2010-05-01

    The alterations of the natural flow regime and in-stream channel modification due to abstraction from watercourses act on biota through a hydraulic template, which is mediated by channel morphology. Modeling channel hydro-morphology is needed in order to evaluate how much habitat is available for selected fauna under specific environmental conditions, and consequently to assist decision makers in planning options for regulated river management. Meso-scale habitat modeling methods (e.g., MesoHABSIM) offer advantages over traditional physical habitat evaluation, involving a larger range of habitat variables, allowing longer lengths of surveyed rivers and enabling understanding of fish behavior at a larger spatial scale. In this study we defined a bottom-up method for ecological discharge evaluation at regional scale, focusing on catchments smaller than 50 km2, most of them located within mountainous areas of the Apennines and Alps mountain ranges in Piedmont (NW Italy). Within the regional study domain we identified 30 representative catchments not affected by water abstractions in order to build up the habitat-flow relationship, to be used as reference when evaluating regulated watercourses or new projects. For each stream we chose a representative reach and obtained fish data by sampling every single functional habitat (i.e. meso-habitat) within the site, keeping each area separated by using nets. The target species were brown trout (Salmo trutta), marble trout (Salmo trutta marmoratus), bullhead (Cottus gobio), chub (Leuciscus cephalus), barbel (Barbus barbus), vairone (Leuciscus souffia) and other rheophilic Cyprinids. The fish habitat suitability criteria were obtained from the observation of habitat use by a selected organism, described with a multivariate relationship between habitat characteristics and fish presence. Habitat type, mean slope, cover, biotic choriotop and substrate, stream depth and velocity, water pH, temperature and percentage of dissolved...

  2. Acts of Writing: A Compilation of Six Models that Define the Processes of Writing

    Directory of Open Access Journals (Sweden)

    Laurie A. Sharp

    2016-08-01

    Full Text Available Writing is a developmental and flexible process. Using a prescribed process for acts of writing during instruction does not take into account individual differences of writers and generates writing instruction that is narrow, rigid, and inflexible. Preservice teachers receive limited training with theory and pedagogy for writing, which potentially leads to poor pedagogical practices with writing instruction among practicing teachers. The purpose of this article was to provide teacher educators, preservice teachers and practicing teachers of writing with a knowledge base of historical research and models that define and describe processes involved during the acts of writing.

  3. Modeling Candle Flame Behavior In Variable Gravity

    Science.gov (United States)

    Alsairafi, A.; Tien, J. S.; Lee, S. T.; Dietrich, D. L.; Ross, H. D.

    2003-01-01

    The burning of a candle, as a typical non-propagating diffusion flame, has been used by a number of researchers to study the effects of electric fields on flames, spontaneous flame oscillation and flickering phenomena, and flame extinction. In normal gravity, the heat released from combustion creates buoyant convection that draws oxygen into the flame. The strength of the buoyant flow depends on the gravitational level, and it is expected that the flame shape, size and candle burning rate will vary with gravity. Experimentally, there exist studies of candle burning in enhanced gravity (i.e. higher than normal earth gravity, g(sub e)) and in microgravity in drop towers and space-based facilities. There are, however, no reported experimental data on candle burning in partial gravity (g < g(sub e)). In a previous model of the candle flame, buoyant forces were neglected. The treatment of the momentum equation was simplified using a potential flow approximation. Although the predicted flame characteristics agreed well with the experimental results, the model cannot be extended to cases with buoyant flows. In addition, because of the use of potential flow, the no-slip boundary condition is not satisfied on the wick surface, so there is some uncertainty in the accuracy of the predicted flow field. In the present modeling effort, the full Navier-Stokes momentum equations with the body force term are included. This enables us to study the effect of gravity on candle flames (with zero gravity as the limiting case). In addition, we consider radiation effects in more detail by solving the radiation transfer equation. In the previous study, flame radiation was treated as a simple loss term in the energy equation. Emphasis of the present model is on the gas-phase processes. Therefore, the detailed heat and mass transfer phenomena inside the porous wick are not treated. Instead, it is assumed that a thin layer of liquid fuel coats the entire wick surface during the burning process. This is the limiting case that the mass...

  4. Multi-wheat-model ensemble responses to interannual climatic variability

    DEFF Research Database (Denmark)

    Ruane, A C; Hudson, N I; Asseng, S

    2016-01-01

    ... evaluate results against the interannual variability of growing season temperature, precipitation, and solar radiation. The amount of information used for calibration has only a minor effect on most models' climate response, and even small multi-model ensembles prove beneficial. Wheat model clusters reveal common characteristics of yield response to climate; however, models rarely share the same cluster at all four sites, indicating substantial independence. Only a weak relationship (R2 ≤ 0.24) was found between the models' sensitivities to interannual temperature variability and their response to long-term warming...

  5. A new approach to model the variability of karstic recharge

    Directory of Open Access Journals (Sweden)

    A. Hartmann

    2012-02-01

    Full Text Available In karst systems, near-surface dissolution of carbonate rock results in a high spatial and temporal variability of groundwater recharge. Adequately representing the dominating recharge processes in hydrological models is still a challenge, especially in data-scarce regions. In this study, we developed a recharge model that is based on a perceptual model of the epikarst. It represents epikarst heterogeneity as a set of system property distributions to produce not only a single recharge time series, but a variety of time series representing the spatial recharge variability. We tested the new model with a unique set of spatially distributed flow and tracer observations in a karstic cave at Mt. Carmel, Israel. We transformed the spatial variability into statistical variables and applied an iterative calibration strategy in which more and more data were added to the calibration. Thereby, we could show that the model is only able to produce realistic results when information about the spatial variability of the observations is included in the model calibration. We could also show that tracer information improves the model performance if data about the variability are not included.

  6. Geometrically nonlinear creeping mathematic models of shells with variable thickness

    Directory of Open Access Journals (Sweden)

    V.M. Zhgoutov

    2012-08-01

    Full Text Available Calculations of strength, stability and vibration of shell structures play an important role in the design of modern devices, machines and structures. However, the behavior of thin-walled structures of variable thickness that undergo geometric nonlinearity, lateral shifts, viscoelasticity (creep) of the material, profile variability and thermal deformation is not studied enough. In this paper, mathematical deformation models of variable thickness shells (smoothly varying and ribbed shells), experiencing either mechanical load or a permanent temperature field and taking into account geometrical nonlinearity, creep and transverse shear, were developed. The refined geometrical relations for geometrically nonlinear and stability problems are given.

  7. Boolean Variables in Economic Models Solved by Linear Programming

    Directory of Open Access Journals (Sweden)

    Lixandroiu D.

    2014-12-01

    Full Text Available The article analyses the use of logical variables in economic models solved by linear programming. Focus is given to the presentation of the way logical constraints are obtained and of the definition rules based on predicate logic. Emphasis is also put on the possibility of using logical variables to construct a linear objective function on intervals. Such functions are encountered when costs or unit receipts differ on disjoint intervals of production volumes achieved or sold. Other uses of Boolean variables are connected to constraint systems with conditions and the case of a variable which takes values from a finite set of integers.
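    The interval-dependent objective mentioned above is usually linearized with a Boolean variable; the following is a generic sketch of the construction, with coefficients and bounds chosen for illustration rather than taken from the article.

```latex
% Unit cost c_1 on [0, a] and c_2 on (a, b]; the Boolean y opens the second
% interval only after the first is exhausted.
\begin{aligned}
\min\;& c_1 x_1 + c_2 x_2 \\
\text{s.t.}\;& x = x_1 + x_2, \\
& a\,y \le x_1 \le a, \\
& 0 \le x_2 \le (b-a)\,y, \\
& y \in \{0,1\}.
\end{aligned}
```

    With y = 0 the second segment is closed; y = 1 forces the first segment to be filled (x_1 = a) before the second opens, so each unit of production is costed at the rate of its own interval.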

  8. Estimation in the polynomial errors-in-variables model

    Institute of Scientific and Technical Information of China (English)

    2002-01-01

    Estimators are presented for the coefficients of the polynomial errors-in-variables (EV) model when replicated observations are taken at some experimental points. These estimators are shown to be strongly consistent under mild conditions.

  9. Bayesian Network Models for Local Dependence among Observable Outcome Variables

    Science.gov (United States)

    Almond, Russell G.; Mulder, Joris; Hemat, Lisa A.; Yan, Duanli

    2009-01-01

    Bayesian network models offer a large degree of flexibility for modeling dependence among observables (item outcome variables) from the same task, which may be dependent. This article explores four design patterns for modeling locally dependent observations: (a) no context--ignores dependence among observables; (b) compensatory context--introduces…

  10. The relationship between cub and loglinear models with latent variables

    NARCIS (Netherlands)

    Oberski, D. L.; Vermunt, J. K.

    2015-01-01

    The "combination of uniform and shifted binomial"(cub) model is a distribution for ordinal variables that has received considerable recent attention and specialized development. This article notes that the cub model is a special case of the well-known loglinear latent class model, an observation tha

  11. Multi-wheat-model ensemble responses to interannual climate variability

    NARCIS (Netherlands)

    Ruane, Alex C.; Hudson, Nicholas I.; Asseng, Senthold; Camarrano, Davide; Ewert, Frank; Martre, Pierre; Boote, Kenneth J.; Thorburn, Peter J.; Aggarwal, Pramod K.; Angulo, Carlos; Basso, Bruno; Bertuzzi, Patrick; Biernath, Christian; Brisson, Nadine; Challinor, Andrew J.; Doltra, Jordi; Gayler, Sebastian; Goldberg, Richard; Grant, Robert F.; Heng, Lee; Hooker, Josh; Hunt, Leslie A.; Ingwersen, Joachim; Izaurralde, Roberto C.; Kersebaum, Kurt Christian; Kumar, Soora Naresh; Müller, Christoph; Nendel, Claas; O'Leary, Garry; Olesen, Jørgen E.; Osborne, Tom M.; Palosuo, Taru; Priesack, Eckart; Ripoche, Dominique; Rötter, Reimund P.; Semenov, Mikhail A.; Shcherbak, Iurii; Steduto, Pasquale; Stöckle, Claudio O.; Stratonovitch, Pierre; Streck, Thilo; Supit, Iwan; Tao, Fulu; Travasso, Maria; Waha, Katharina; Wallach, Daniel; White, Jeffrey W.; Wolf, Joost

    2016-01-01

    We compare 27 wheat models' yield responses to interannual climate variability, analyzed at locations in Argentina, Australia, India, and The Netherlands as part of the Agricultural Model Intercomparison and Improvement Project (AgMIP) Wheat Pilot. Each model simulated 1981-2010 grain yield, and we evaluate results against the interannual variability of growing season temperature, precipitation, and solar radiation...

  12. Wetting properties of model interphases coated with defined organic functional groups

    Science.gov (United States)

    Woche, Susanne K.; Goebel, Marc-O.; Guggenberger, Georg; Tunega, Daniel; Bachmann, Joerg

    2013-04-01

    Surface properties of soil particles are of particular interest regarding transport of water and sorption of solutes, especially hazardous xenobiotic species. Wetting properties (e.g. determined by the contact angle, CA), governed by the functional groups exposed, are crucial for understanding sorption processes in water repellent soils as well as for the geometry of water films sustaining microbial processes on the pore scale. Natural soil particle surfaces are characterized by a wide variety of mineralogical and chemical compounds, whose composition is almost impossible to identify in full. Hence, in order to get a better understanding of surface properties, an option is the use of defined model surfaces, where the created surface should be comparable to natural soil interphases. We exposed smooth glass surfaces to different silane compounds, resulting in a coating covalently bound to the surface and exhibiting defined organic functional groups towards the pore space. The wetting properties as evaluated by CA and the surface free energy (SFE), calculated according to the Acid-Base Theory, were found to be a function of the specific functional group. Specifically, the treated surfaces showed a large variation of CA and SFE as a function of chain length and polarity of the organic functional group. The study of wetting properties was accompanied by XPS analysis for selective detection of chemical compounds of the interphase. As the reaction mechanism of the coating process is known, the resulting interphase structure can be modeled based on energetic considerations. A next step is to use the same coatings for defined modification of the pore surfaces of porous media to study transport and sorption processes in complex three-phase systems.

  13. A Non-Gaussian Spatial Generalized Linear Latent Variable Model

    KAUST Repository

    Irincheeva, Irina

    2012-08-03

    We consider a spatial generalized linear latent variable model with and without normality distributional assumption on the latent variables. When the latent variables are assumed to be multivariate normal, we apply a Laplace approximation. To relax the assumption of marginal normality in favor of a mixture of normals, we construct a multivariate density with Gaussian spatial dependence and given multivariate margins. We use the pairwise likelihood to estimate the corresponding spatial generalized linear latent variable model. The properties of the resulting estimators are explored by simulations. In the analysis of an air pollution data set the proposed methodology uncovers weather conditions to be a more important source of variability than air pollution in explaining all the causes of non-accidental mortality excluding accidents. © 2012 International Biometric Society.

  14. Total Variability Modeling using Source-specific Priors

    DEFF Research Database (Denmark)

    Shepstone, Sven Ewan; Lee, Kong Aik; Li, Haizhou

    2016-01-01

    In total variability modeling, variable length speech utterances are mapped to fixed low-dimensional i-vectors. Central to computing the total variability matrix and i-vector extraction is the computation of the posterior distribution for a latent variable conditioned on an observed feature sequence of an utterance. In both cases the prior for the latent variable is assumed to be non-informative, since for homogeneous datasets there is no gain in generality in using an informative prior. This work shows, in the heterogeneous case, that using informative priors for computing the posterior can lead to favorable results. We focus on modeling the priors using a minimum divergence criterion or factor analysis techniques. Tests on the NIST 2008 and 2010 Speaker Recognition Evaluation (SRE) dataset show that our proposed method beats four baselines: For i-vector extraction using an already...

  15. A variable-order fractal derivative model for anomalous diffusion

    Directory of Open Access Journals (Sweden)

    Liu Xiaoting

    2017-01-01

    Full Text Available This paper develops a variable-order fractal derivative model for anomalous diffusion. Previous investigations have indicated that the medium structure, fractal dimension or porosity may change with time or space during solute transport processes, resulting in time- or space-dependent anomalous diffusion phenomena. Hence, this study introduces a variable-order fractal derivative diffusion model, in which the order of the fractal derivative depends on the temporal moment or spatial position, to characterize the above-mentioned anomalous diffusion (or transport) processes. Compared with other models, the main advantages of the new model in description and physical explanation are explored by numerical simulation. Further discussions of the differences in computational efficiency, diffusion behavior and heavy-tail phenomena between the new model and the variable-order fractional derivative model are also offered.
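    For orientation, a variable-order fractal derivative diffusion model of the kind described can be written as follows. This is a sketch using the standard (Chen-type) fractal derivative definition; the notation is assumed, not quoted from the paper.

```latex
% Fractal time derivative of order alpha, and the corresponding diffusion
% model in which the order alpha(t) varies with the temporal moment:
\frac{\partial u}{\partial t^{\alpha}}
  = \lim_{t_1 \to t} \frac{u(t_1) - u(t)}{t_1^{\alpha} - t^{\alpha}},
\qquad
\frac{\partial u(x,t)}{\partial t^{\alpha(t)}}
  = D \, \frac{\partial^2 u(x,t)}{\partial x^2},
\qquad 0 < \alpha(t) \le 1 .
```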

  16. Linear latent variable models: the lava-package

    DEFF Research Database (Denmark)

    Holst, Klaus Kähler; Budtz-Jørgensen, Esben

    2013-01-01

    An R package for specifying and estimating linear latent variable models is presented. The philosophy of the implementation is to separate the model specification from the actual data, which leads to a dynamic and easy way of modeling complex hierarchical structures. Several advanced features are...... interface covering a broad range of non-linear generalized structural equation models is described. The model and software are demonstrated in data of measurements of the serotonin transporter in the human brain....

  17. Instrumental Variable Bayesian Model Averaging via Conditional Bayes Factors

    OpenAIRE

    Karl, Anna; Lenkoski, Alex

    2012-01-01

    We develop a method to perform model averaging in two-stage linear regression systems subject to endogeneity. Our method extends an existing Gibbs sampler for instrumental variables to incorporate a component of model uncertainty. Direct evaluation of model probabilities is intractable in this setting. We show that by nesting model moves inside the Gibbs sampler, model comparison can be performed via conditional Bayes factors, leading to straightforward calculations. This new Gibbs sampler is...

  18. Modeling and Simulation for a Variable Sprayer-Rate System

    Science.gov (United States)

    Shi, Yan; Liang, Anbo; Yuan, Haibo; Zhang, Chunmei; Li, Junlong

    Variable spraying technology is an important topic and development direction in current plant protection machinery: by adapting to the characteristics of the spraying targets and the travel speed of the machinery, it can effectively save pesticide and lighten the burden on the ecological environment in agriculture. This paper establishes a mathematical model and transfer function for a variable spraying system based on the designed hardware of the variable spraying machine, and uses a PID control algorithm for simulation in MATLAB. The simulation results show that the model can conveniently control the spray output and achieves satisfactory control.
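
    The closed loop described can be sketched in a few lines of Python; the first-order plant and the gains below are illustrative stand-ins for the paper's MATLAB model of the spray-rate dynamics:

        # PID control of a first-order plant G(s) = K/(tau*s + 1),
        # a stand-in for the spray-rate delivery dynamics.
        K, tau, dt = 2.0, 0.8, 0.01
        Kp, Ki, Kd = 1.2, 2.5, 0.02
        setpoint = 1.0                        # desired spray rate (normalized)
        y, integral, prev_err = 0.0, 0.0, 0.0
        for step in range(int(5.0 / dt)):     # simulate 5 seconds
            err = setpoint - y
            integral += err * dt
            deriv = (err - prev_err) / dt
            u = Kp * err + Ki * integral + Kd * deriv   # control signal
            y += dt * (K * u - y) / tau                 # Euler step of the plant
            prev_err = err
        print(f"final output {y:.3f} (target {setpoint})")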

  19. Modeling the variability of firing rate of retinal ganglion cells.

    Science.gov (United States)

    Levine, M W

    1992-12-01

    Impulse trains simulating the maintained discharges of retinal ganglion cells were generated by digital realizations of the integrate-and-fire model. If the mean rate were set by a "bias" level added to "noise," the variability of firing would be related to the mean firing rate as an inverse square root law; the maintained discharges of retinal ganglion cells deviate systematically from such a relationship. A more realistic relationship can be obtained if the integrate-and-fire mechanism is "leaky"; with this refinement, the integrate-and-fire model captures the essential features of the data. However, the model shows that the distribution of intervals is insensitive to that of the underlying variability. The leakage time constant, threshold, and distribution of the noise are confounded, rendering the model unspecifiable. Another aspect of variability is presented by the variance of responses to repeated discrete stimuli. The variance of response rate increases with the mean response amplitude; the nature of that relationship depends on the duration of the periods in which the response is sampled. These results have defied explanation. But if it is assumed that variability depends on mean rate in the way observed for maintained discharges, the variability of responses to abrupt changes in lighting can be predicted from the observed mean responses. The parameters that provide the best fits for the variability of responses also provide a reasonable fit to the variability of maintained discharges.
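
    The leaky integrate-and-fire mechanism at the core of the model is compact enough to sketch directly (parameters are illustrative; the point is the qualitative relation between mean rate and interval variability):

        import numpy as np

        rng = np.random.default_rng(0)
        dt, tau, thresh = 0.1, 20.0, 1.0        # step (ms), leak constant, threshold

        def isi_stats(bias, noise_sd, n_steps=500_000):
            """Simulate a leaky integrate-and-fire unit driven by bias + noise;
            return mean firing rate (Hz) and coefficient of variation of ISIs."""
            v, last_spike, isis = 0.0, 0.0, []
            for i in range(n_steps):
                v += dt * (bias - v / tau) + noise_sd * np.sqrt(dt) * rng.standard_normal()
                if v >= thresh:                  # fire and reset
                    t = i * dt
                    isis.append(t - last_spike)
                    last_spike, v = t, 0.0
            isis = np.asarray(isis)
            return 1000.0 / isis.mean(), isis.std() / isis.mean()

        for bias in (0.05, 0.08, 0.12):          # near-threshold to suprathreshold drive
            rate, cv = isi_stats(bias, noise_sd=0.15)
            print(f"bias={bias:.2f}: rate={rate:6.1f} Hz, CV={cv:.2f}")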

  20. Using Enthalpy as a Prognostic Variable in Atmospheric Modelling with Variable Composition

    Science.gov (United States)

    2016-04-14

    ...tories, and the equation of state p = ∑_i p_i = ∑_i ρ_i R_i T = ρRT (4). Here R_i = k_B/m_i are individual gas constants for each species and k_B is the... relation between the mass, pressure, and temperature fields via the equation of state (4). The use of virtual temperature in Equation (11) implies that... internal energy equation as a convenient prognostic thermodynamic variable for atmospheric modelling with variable composition, including models of

  1. Defining Building Information Modeling implementation activities based on capability maturity evaluation: a theoretical model

    Directory of Open Access Journals (Sweden)

    Romain Morlhon

    2015-01-01

    Full Text Available Building Information Modeling (BIM has become a widely accepted tool to overcome the many hurdles that currently face the Architecture, Engineering and Construction industries. However, implementing such a system is always complex and the recent introduction of BIM does not allow organizations to build their experience on acknowledged standards and procedures. Moreover, data on implementation projects is still disseminated and fragmentary. The objective of this study is to develop an assistance model for BIM implementation. Solutions that are proposed will help develop BIM that is better integrated and better used, and take into account the different maturity levels of each organization. Indeed, based on Critical Success Factors, concrete activities that help in implementation are identified and can be undertaken according to the previous maturity evaluation of an organization. The result of this research consists of a structured model linking maturity, success factors and actions, which operates on the following principle: once an organization has assessed its BIM maturity, it can identify various weaknesses and find relevant answers in the success factors and the associated actions.

  2. Disentangling Pleiotropy along the Genome using Sparse Latent Variable Models

    DEFF Research Database (Denmark)

    Janss, Luc

    Bayesian models are described that use latent variables to model covariances. These models are flexible, scale up linearly in the number of traits, and allow separating covariance structures in different components at the trait level and at the genomic level. Multi-trait versions of BayesA (MT-BA) and the Bayesian LASSO (MT-BL) are described that model heterogeneous variance and covariance over the genome, and a model that directly models multiple genomic breeding values (MT-MG), representing different genomic covariance structures. The models are demonstrated on a mouse data set to model the genomic...

  3. Modelling of variability of the chemically peculiar star phi Draconis

    CERN Document Server

    Prvák, Milan; Krtička, Jiří; Mikulášek, Zdeněk; Lüftinger, T

    2015-01-01

    Context: The presence of heavier chemical elements in stellar atmospheres influences the spectral energy distribution (SED) of stars. An uneven surface distribution of these elements, together with flux redistribution and stellar rotation, is commonly believed to be the primary cause of the variability of chemically peculiar (CP) stars. Aims: We aim to model the photometric variability of the CP star PHI Dra based on the assumption of an inhomogeneous surface distribution of heavier elements and compare it to the observed variability of the star. We also intend to identify the processes that contribute most significantly to its photometric variability. Methods: We use a grid of TLUSTY model atmospheres and the SYNSPEC code to model the radiative flux emerging from the individual surface elements of PHI Dra with different chemical compositions. We integrate the emerging flux over the visible surface of the star at different phases throughout the entire rotational period to synthesise theoretical light curves of...

  4. Model and Variable Selection Procedures for Semiparametric Time Series Regression

    Directory of Open Access Journals (Sweden)

    Risa Kato

    2009-01-01

    Full Text Available Semiparametric regression models are very useful for time series analysis. They facilitate the detection of features resulting from external interventions. The complexity of semiparametric models poses new challenges for issues of nonparametric and parametric inference and model selection that frequently arise in time series data analysis. In this paper, we propose penalized least squares estimators which can simultaneously select significant variables and estimate unknown parameters. An innovative class of variable selection procedures is proposed to select significant variables and basis functions in a semiparametric model. The asymptotic normality of the resulting estimators is established. Information criteria for model selection are also proposed. We illustrate the effectiveness of the proposed procedures with numerical simulations.
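
    The flavor of simultaneous selection and estimation can be illustrated with an L1-penalized least squares fit; sklearn's Lasso is used here as a stand-in for the paper's specific penalty and basis-function machinery:

        import numpy as np
        from sklearn.linear_model import Lasso

        rng = np.random.default_rng(1)
        n, p = 200, 10
        X = rng.standard_normal((n, p))
        beta = np.zeros(p)
        beta[:3] = [1.5, -2.0, 1.0]             # only three relevant variables
        y = X @ beta + 0.5 * rng.standard_normal(n)

        fit = Lasso(alpha=0.1).fit(X, y)        # penalty shrinks noise variables to 0
        print("selected variables:", np.flatnonzero(fit.coef_))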

  5. Quantitative Analysis of the Security of Software-Defined Network Controller Using Threat/Effort Model

    Directory of Open Access Journals (Sweden)

    Zehui Wu

    2017-01-01

    Full Text Available The SDN-based controller, which is responsible for the configuration and management of the network, is the core of Software-Defined Networks. Current methods, which focus on the security mechanism, use qualitative analysis to estimate the security of controllers, frequently leading to inaccurate results. In this paper, we employ a quantitative approach to overcome this shortcoming. Based on an analysis of the controller threat model, we give formal models of the APIs, the protocol interfaces, and the data items of the controller, and further provide our Threat/Effort quantitative calculation model. With the help of the Threat/Effort model, we are able to compare not only the security of different versions of the same controller but also different kinds of controllers, providing a basis for controller selection and secure development. We evaluated our approach on four widely used SDN-based controllers: POX, OpenDaylight, Floodlight, and Ryu. The tests, whose outcomes are similar to those of traditional qualitative analysis, demonstrate that our approach yields specific security values for different controllers and produces more accurate results.

  6. Financial applications of a Tabu search variable selection model

    Directory of Open Access Journals (Sweden)

    Zvi Drezner

    2001-01-01

    Full Text Available We illustrate how a comparatively new technique, a Tabu search variable selection model [Drezner, Marcoulides and Salhi (1999)], can be applied efficiently within finance when the researcher must select a subset of variables from among the whole set of explanatory variables under consideration. Several types of problems in finance, including corporate and personal bankruptcy prediction, mortgage and credit scoring, and the selection of variables for the Arbitrage Pricing Model, require the researcher to select a subset of variables from a larger set. In order to demonstrate the usefulness of the Tabu search variable selection model, we: (1) illustrate its efficiency in comparison to the main alternative search procedures, such as stepwise regression and the Maximum R2 procedure, and (2) show how a version of the Tabu search procedure may be implemented when attempting to predict corporate bankruptcy. We accomplish (2) by indicating that a Tabu search procedure increases the predictability of corporate bankruptcy by up to 10 percentage points in comparison to Altman's (1968) Z-Score model.
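
    A bare-bones tabu search for regression subset selection, to fix ideas (the criterion and neighborhood moves are simplified relative to Drezner et al.'s procedure, and the sketch assumes more candidate variables than the tabu tenure):

        import numpy as np

        def aic(X, y, subset):
            """AIC-style score of an OLS fit on the chosen columns."""
            n = len(y)
            if subset:
                Xs = X[:, sorted(subset)]
                coef, *_ = np.linalg.lstsq(Xs, y, rcond=None)
                resid = y - Xs @ coef
            else:
                resid = y - y.mean()
            return n * np.log(resid @ resid / n) + 2 * len(subset)

        def tabu_select(X, y, iters=100, tenure=5, seed=0):
            rng = np.random.default_rng(seed)
            p = X.shape[1]
            current = set(rng.choice(p, size=p // 2, replace=False))
            best, best_val = set(current), aic(X, y, current)
            tabu = {}                            # variable -> iteration it stays tabu until
            for it in range(iters):
                # neighborhood: add or drop a single non-tabu variable
                moves = [(aic(X, y, current ^ {j}), j) for j in range(p)
                         if tabu.get(j, -1) < it]
                val, j = min(moves)
                current ^= {j}
                tabu[j] = it + tenure            # forbid reversing the move for a while
                if val < best_val:
                    best, best_val = set(current), val
            return sorted(best)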

  7. Quantitative genetics model as the unifying model for defining genomic relationship and inbreeding coefficient.

    Science.gov (United States)

    Wang, Chunkao; Da, Yang

    2014-01-01

    The traditional quantitative genetics model was used as the unifying approach to derive six existing and new definitions of genomic additive and dominance relationships. The theoretical differences of these definitions were in the assumptions of equal SNP effects (equivalent to across-SNP standardization), equal SNP variances (equivalent to within-SNP standardization), and expected or sample SNP additive and dominance variances. The six definitions of genomic additive and dominance relationships on average were consistent with the pedigree relationships, but had individual genomic specificity and large variations not observed from pedigree relationships. These large variations may allow finding least related genomes even within the same family for minimizing genomic relatedness among breeding individuals. The six definitions of genomic relationships generally had similar numerical results in genomic best linear unbiased predictions of additive effects (GBLUP) and similar genomic REML (GREML) estimates of additive heritability. Predicted SNP dominance effects and GREML estimates of dominance heritability were similar within definitions assuming equal SNP effects or within definitions assuming equal SNP variance, but had differences between these two groups of definitions. We proposed a new measure of genomic inbreeding coefficient based on parental genomic co-ancestry coefficient and genomic additive correlation as a genomic approach for predicting offspring inbreeding level. This genomic inbreeding coefficient had the highest correlation with pedigree inbreeding coefficient among the four methods evaluated for calculating genomic inbreeding coefficient in a Holstein sample and a swine sample.
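
    One widely used construction behind such definitions (a VanRaden-type matrix with across-SNP standardization, shown as a sketch rather than the paper's exact formulation):

        import numpy as np

        def genomic_relationship(M):
            """Genomic additive relationship matrix from genotypes.

            M: (individuals x SNPs) matrix coded 0/1/2 copies of an allele.
            """
            p = M.mean(axis=0) / 2.0              # observed allele frequencies
            Z = M - 2.0 * p                       # center each SNP
            return (Z @ Z.T) / (2.0 * np.sum(p * (1.0 - p)))

        M = np.array([[0, 1, 2, 1],
                      [1, 1, 0, 2],
                      [2, 0, 1, 1]], dtype=float)
        G = genomic_relationship(M)
        print(np.round(G, 3))   # diagonal elements ~ 1 + genomic inbreeding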

  8. Discriminative variable subsets in Bayesian classification with mixture models, with application in flow cytometry studies.

    Science.gov (United States)

    Lin, Lin; Chan, Cliburn; West, Mike

    2016-01-01

    We discuss the evaluation of subsets of variables for the discriminative evidence they provide in multivariate mixture modeling for classification. The novel development of Bayesian classification analysis presented is partly motivated by problems of design and selection of variables in biomolecular studies, particularly involving widely used assays of large-scale single-cell data generated using flow cytometry technology. For such studies and for mixture modeling generally, we define discriminative analysis that overlays fitted mixture models using a natural measure of concordance between mixture component densities, and define an effective and computationally feasible method for assessing and prioritizing subsets of variables according to their roles in discrimination of one or more mixture components. We relate the new discriminative information measures to Bayesian classification probabilities and error rates, and exemplify their use in Bayesian analysis of Dirichlet process mixture models fitted via Markov chain Monte Carlo methods as well as using a novel Bayesian expectation-maximization algorithm. We present a series of theoretical and simulated data examples to fix concepts and exhibit the utility of the approach, and compare with prior approaches. We demonstrate application in the context of automatic classification and discriminative variable selection in high-throughput systems biology using large flow cytometry datasets.

  9. Variability in a Community-Structured SIS Epidemiological Model.

    Science.gov (United States)

    Hiebeler, David E; Rier, Rachel M; Audibert, Josh; LeClair, Phillip J; Webber, Anna

    2015-04-01

    We study an SIS epidemiological model of a population partitioned into groups referred to as communities, households, or patches. The system is studied using stochastic spatial simulations, as well as a system of ordinary differential equations describing moments of the distribution of infectious individuals. The ODE model explicitly includes the population size, as well as the variability in infection levels among communities and the variability among stochastic realizations of the process. Results are compared with an earlier moment-based model which assumed infinite population size and no variance among realizations of the process. We find that although the amount of localized (as opposed to global) contact in the model has little effect on the equilibrium infection level, it does affect both the timing and magnitude of both types of variability in infection level.
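
    A discrete-time stochastic sketch of such a community-structured SIS process (parameters and the local/global mixing rule are illustrative assumptions):

        import numpy as np

        rng = np.random.default_rng(2)
        C, N = 50, 20                       # communities, individuals per community
        beta, gamma, phi = 0.12, 0.05, 0.8  # infection, recovery, local-contact fraction
        I = np.zeros(C, dtype=int)
        I[0] = 5                            # seed infection in one community

        for t in range(2000):
            global_prev = I.sum() / (C * N)
            local_prev = I / N
            # per-susceptible infection probability mixes local and global contact
            p_inf = 1.0 - np.exp(-beta * (phi * local_prev + (1 - phi) * global_prev))
            I += rng.binomial(N - I, p_inf) - rng.binomial(I, 1.0 - np.exp(-gamma))

        print(f"mean prevalence {I.mean() / N:.3f}, "
              f"among-community variance {np.var(I / N):.4f}")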

  10. A 3-mode, Variable Velocity Jet Model for HH 34

    Science.gov (United States)

    Raga, A.; Noriega-Crespo, A.

    1998-01-01

    Variable ejection velocity jet models can qualitatively explain the appearance of successive working surfaces in Herbig-Haro (HH) jets. This paper presents an attempt to explore which features of the HH 34 jet can indeed be reproduced by such a model.

  11. Manifest Variable Granger Causality Models for Developmental Research: A Taxonomy

    Science.gov (United States)

    von Eye, Alexander; Wiedermann, Wolfgang

    2015-01-01

    Granger models are popular when it comes to testing hypotheses that relate series of measures causally to each other. In this article, we propose a taxonomy of Granger causality models. The taxonomy results from crossing the four variables Order of Lag, Type of (Contemporaneous) Effect, Direction of Effect, and Segment of Dependent Series…
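
    In practice, a manifest-variable Granger test reduces to comparing lagged regressions; a quick sketch using statsmodels (shown as an illustrative tool, with the taxonomy's Order of Lag mapping onto the maxlag argument):

        import numpy as np
        from statsmodels.tsa.stattools import grangercausalitytests

        rng = np.random.default_rng(3)
        n = 300
        x2 = rng.standard_normal(n)
        x1 = np.zeros(n)
        for t in range(1, n):                # x1 depends on lagged x2 by construction
            x1[t] = 0.5 * x1[t - 1] + 0.8 * x2[t - 1] + 0.3 * rng.standard_normal()

        # tests whether the second column Granger-causes the first
        results = grangercausalitytests(np.column_stack([x1, x2]), maxlag=2)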

  12. An Alternative Approach for Nonlinear Latent Variable Models

    Science.gov (United States)

    Mooijaart, Ab; Bentler, Peter M.

    2010-01-01

    In the last decades there has been an increasing interest in nonlinear latent variable models. Since the seminal paper of Kenny and Judd, several methods have been proposed for dealing with these kinds of models. This article introduces an alternative approach. The methodology involves fitting some third-order moments in addition to the means and…

  13. Modeling, analysis and control of a variable geometry actuator

    NARCIS (Netherlands)

    Evers, W.J.; Knaap, A. van der; Besselink, I.J.M.; Nijmeijer, H.

    2008-01-01

    A new design of variable geometry force actuator is presented in this paper. Based upon this design, a model is derived which is used for steady-state analysis, as well as controller design in the presence of friction. The controlled actuator model is finally used to evaluate the power consumption u

  14. A hydrological modeling framework for defining achievable performance standards for pesticides.

    Science.gov (United States)

    Rousseau, Alain N; Lafrance, Pierre; Lavigne, Martin-Pierre; Savary, Stéphane; Konan, Brou; Quilbé, Renaud; Jiapizian, Paul; Amrani, Mohamed

    2012-01-01

    This paper proposes a hydrological modeling framework to define achievable performance standards (APSs) for pesticides that could be attained after implementation of recommended management actions, agricultural practices, and available technologies (i.e., beneficial management practices [BMPs]). An integrated hydrological modeling system, Gestion Intégrée des Bassins versants à l'aide d'un Système Informatisé, was used to quantify APSs for six Canadian watersheds for eight pesticides: atrazine, carbofuran, dicamba, glyphosate, MCPB, MCPA, metolachlor, and 2,4-D. Outputs from simulation runs to predict pesticide concentration under current conditions and in response to implementation of two types of beneficial management practices (reduced pesticide application rate and 1- to 10-m-wide edge-of-field and/or riparian buffer strips, implemented singly or in combination) showed that APS values for scenarios with BMPs were less than those for current conditions. Moreover, APS values at the outlet of watersheds were usually less than ecological thresholds of good condition, when available. Upstream river reaches were at greater risk of having concentrations above a given ecological threshold because of limited stream flows and overland loads of pesticides. Our integrated approach of "hydrological modeling-APS estimation-ecotoxicological significance" provides the most effective interpretation possible, for management and education purposes, of the potential biological impact of predicted pesticide concentrations in rivers.

  15. Design optimality for models defined by a system of ordinary differential equations.

    Science.gov (United States)

    Rodríguez-Díaz, Juan M; Sánchez-León, Guillermo

    2014-09-01

    Many scientific processes, especially in pharmacokinetics (PK) and pharmacodynamics (PD) studies, are defined by a system of ordinary differential equations (ODE). If there are unknown parameters that need to be estimated, the optimal experimental design approach offers quality estimators for the different objectives of the practitioners. When computing optimal designs the standard procedure uses the linearization of the analytical expression of the ODE solution, which is not feasible when this analytical form does not exist. In this work some methods to solve this problem are described and discussed. Optimal designs for two well-known example models, Iodine and Michaelis-Menten, have been computed using the proposed methods. A thorough study has been done for a specific two-parameter PK model, the biokinetic model of ciprofloxacin and ofloxacin, computing the best designs for different optimality criteria and numbers of points. The designs have been compared according to their efficiency, and the goodness of the designs for the estimation of each parameter has been checked. Although the objectives of the paper are focused on the optimal design field, the methodology can be used as well for a sensitivity analysis of ordinary differential equation systems.
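
    For a model with an analytic solution, the design computation reduces to maximizing the Fisher information; a sketch for the Michaelis-Menten response v(S) = Vmax*S/(Km + S) with illustrative nominal parameters:

        import numpy as np
        from itertools import combinations

        Vmax, Km = 1.0, 2.0                      # nominal parameter values

        def jac_row(S):
            """Sensitivities of v(S) with respect to (Vmax, Km)."""
            return np.array([S / (Km + S), -Vmax * S / (Km + S) ** 2])

        grid = np.linspace(0.1, 10.0, 60)        # candidate concentrations

        def d_crit(points):                      # D-optimality: det of information
            return np.linalg.det(sum(np.outer(jac_row(s), jac_row(s)) for s in points))

        best = max(combinations(grid, 2), key=d_crit)
        print("approx. D-optimal two-point design:", np.round(best, 2))

    For ODE-defined models without analytic solutions (the paper's focus), jac_row would instead come from numerically solving the sensitivity equations alongside the ODE.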

  16. Modelling avalanche danger and understanding snow depth variability

    OpenAIRE

    2010-01-01

    This thesis addresses the causes of avalanche danger at a regional scale. Modelled snow stratigraphy variables were linked to [1] forecasted avalanche danger and [2] observed snowpack stability. Spatial variability of snowpack parameters in a region is an additional important factor that influences the avalanche danger. Snow depth and its change during individual snow fall periods are snowpack parameters which can be measured at a high spatial resolution. Hence, the spatial distribution of sn...

  17. Defining new criteria for selection of cell-based intestinal models using publicly available databases

    Directory of Open Access Journals (Sweden)

    Christensen Jon

    2012-06-01

    Full Text Available Abstract Background The criteria for choosing relevant cell lines among a vast panel of available intestinal-derived lines exhibiting a wide range of functional properties are still ill-defined. The objective of this study was, therefore, to establish objective criteria for choosing relevant cell lines to assess their appropriateness as tumor models as well as for drug absorption studies. Results We made use of publicly available expression signatures and cell-based functional assays to delineate differences between various intestinal colon carcinoma cell lines and normal intestinal epithelium. We have compared a panel of intestinal cell lines with patient-derived normal and tumor epithelium and classified them according to traits relating to oncogenic pathway activity, epithelial-mesenchymal transition (EMT) and stemness, migratory properties, proliferative activity, transporter expression profiles and chemosensitivity. For example, SW480 represents an EMT-high, migratory phenotype and scored highest in terms of signatures associated with worse overall survival and higher risk of recurrence based on patient-derived databases. On the other hand, differentiated HT29 and T84 cells showed gene expression patterns closest to tumor bulk derived cells. Regarding drug absorption, we confirmed that differentiated Caco-2 cells are the model of choice for active uptake studies in the small intestine. Regarding chemosensitivity we were unable to confirm a recently proposed association of chemo-resistance with EMT traits. However, a novel signature was identified through mining of NCI60 GI50 values that allowed us to rank the panel of intestinal cell lines according to their drug responsiveness to commonly used chemotherapeutics. Conclusions This study presents a straightforward strategy to exploit publicly available gene expression data to guide the choice of cell-based models. While this approach does not overcome the major limitations of such models

  18. Correlations of control variables for horizontal background error covariance modeling on cubed-sphere grid

    Science.gov (United States)

    Kwun, Jihye; Song, Hyo-Jong; Park, Jong-Im

    2013-04-01

    The background error covariance matrix is very important for a variational data assimilation system, determining how information from observed variables is spread to unobserved variables and spatial points. The full representation of the matrix is impossible because of its huge size, so the matrix is constructed implicitly by means of a variable transformation. It is assumed that the forecast errors in the chosen control variables are statistically independent. We used the cubed-sphere geometry based on the spectral element method, which is better suited for parallel application. In cubed-sphere grids, the grid points are located at Gauss-Legendre-Lobatto points on each local element of the 6 faces of the sphere. Two stages of transformation were used in this study. The first is the variable transformation from model variables to a set of control variables whose errors are assumed to be uncorrelated, which was developed on the cubed sphere using a Galerkin method. Winds are decomposed into a rotational part and a divergent part by introducing the stream function and velocity potential as control variables. The dynamical constraint for balance between mass and wind was imposed by applying a linear balance operator. The second is a spectral transformation that removes the remaining spatial correlation. The bases for the spectral transform were generated for the cubed-sphere grid. Six-hour difference fields from shallow water equation (SWE) model runs initialized by the variational data assimilation system were used to obtain forecast error statistics. In the horizontal background error covariance modeling, a regression analysis of the control variables was performed to define the unbalanced variables as the difference between the full and the correlated part. Regression coefficients were used to remove the remaining correlations between variables.

  19. Internal variability of a 3-D ocean model

    Directory of Open Access Journals (Sweden)

    Bjarne Büchmann

    2016-11-01

    Full Text Available The Defence Centre for Operational Oceanography runs operational forecasts for the Danish waters. The core setup is a 60-layer baroclinic circulation model based on the General Estuarine Transport Model code. At intervals, the model setup is tuned to improve ‘model skill’ and overall performance. It has been an area of concern that the uncertainty inherent in the stochastic/chaotic nature of the model is unknown. Thus, it is difficult to state with certainty that a particular setup is improved, even if the computed model skill increases. This issue also extends to cases where the model is tuned during an iterative process, in which model results are fed back to improve model parameters, such as bathymetry. An ensemble of identical model setups with slightly perturbed initial conditions is examined. It is found that the initial perturbation causes the models to deviate from each other exponentially fast, causing differences of several PSU and several kelvin within a few days of simulation. The ensemble is run for a full year, and the long-term variability of salinity and temperature is found for different regions within the modelled area. Further, the developing time scale is estimated for each region, and great regional differences are found in both variability and time scale. It is observed that periods with very high ensemble variability are typically short-term and spatially limited events. A particular event is examined in detail to shed light on how the ensemble ‘behaves’ in periods with large internal model variability. It is found that the ensemble does not seem to follow any particular stochastic distribution: both the ensemble variability (standard deviation or range), as well as the ensemble distribution within that range, seem to vary with time and place. Further, it is observed that a large spatial variability due to mesoscale features does not necessarily correlate with large ensemble variability. These findings bear

  20. Analysis models for variables associated with breastfeeding duration.

    Science.gov (United States)

    dos S Neto, Edson Theodoro; Zandonade, Eliana; Emmerich, Adauto Oliveira

    2013-09-01

    OBJECTIVE To analyze the factors associated with breastfeeding duration using two statistical models. METHODS A population-based cohort study was conducted with 86 mothers and newborns from two areas primarily covered by the National Health System, with high rates of infant mortality, in Vitória, Espírito Santo, Brazil. Over 30 months, 67 (78%) children and mothers were visited seven times at home by trained interviewers, who filled out survey forms. Data on feeding and sucking habits and on socioeconomic and maternal characteristics were collected. Variables were analyzed with Cox regression models, considering the duration of breastfeeding as the dependent variable, and with logistic regression (the dependent variable was the presence of a breastfeeding child at different postnatal ages). RESULTS In the logistic regression model, pacifier sucking (adjusted Odds Ratio: 3.4; 95%CI 1.2-9.55) and bottle feeding (adjusted Odds Ratio: 4.4; 95%CI 1.6-12.1) increased the chance of weaning a child before one year of age. Variables associated with breastfeeding duration in the Cox regression model were: pacifier sucking (adjusted Hazard Ratio 2.0; 95%CI 1.2-3.3) and bottle feeding (adjusted Hazard Ratio 2.0; 95%CI 1.2-3.5). However, protective factors (maternal age and family income) differed between the two models. CONCLUSIONS Risk and protective factors associated with the cessation of breastfeeding may be analyzed by different statistical regression models. Cox regression models are adequate for analyzing such factors in longitudinal studies.
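
    The Cox route can be reproduced on synthetic data; a hedged sketch assuming the lifelines package (column names and values are invented stand-ins for the study's variables):

        import pandas as pd
        from lifelines import CoxPHFitter

        # synthetic cohort: months of breastfeeding, weaning indicator, habits
        df = pd.DataFrame({
            "duration": [3, 8, 12, 24, 6, 18, 30, 9, 15, 27],
            "weaned":   [1, 1,  1,  0, 1,  1,  0, 1,  1,  0],
            "pacifier": [1, 1,  0,  0, 1,  0,  1, 1,  1,  0],
            "bottle":   [1, 0,  1,  0, 1,  0,  0, 1,  0,  1],
        })

        cph = CoxPHFitter()
        cph.fit(df, duration_col="duration", event_col="weaned")
        cph.print_summary()   # hazard ratios for pacifier and bottle use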

  1. Cosmological Models with Variable Deceleration Parameter in Lyra's Manifold

    CERN Document Server

    Pradhan, A; Singh, C B

    2006-01-01

    FRW models of the universe have been studied in the cosmological theory based on Lyra's manifold. A new class of exact solutions has been obtained by considering a time-dependent displacement field for a variable deceleration parameter, from which three models of the universe are derived: (i) exponential, (ii) polynomial, and (iii) sinusoidal form, respectively. The behaviour of these models of the universe is also discussed. Finally, some possibilities for further problems and their investigation are pointed out.

  2. Defining the true sensitivity of culture for the diagnosis of melioidosis using Bayesian latent class models.

    Directory of Open Access Journals (Sweden)

    Direk Limmathurotsakul

    Full Text Available BACKGROUND: Culture remains the diagnostic gold standard for many bacterial infections, and the method against which other tests are often evaluated. Specificity of culture is 100% if the pathogenic organism is not found in healthy subjects, but the sensitivity of culture is more difficult to determine and may be low. Here, we apply Bayesian latent class models (LCMs) to data from patients with a single Gram-negative bacterial infection and define the true sensitivity of culture together with the impact of misclassification by culture on the reported accuracy of alternative diagnostic tests. METHODS/PRINCIPAL FINDINGS: Data from published studies describing the application of five diagnostic tests (culture and four serological tests) to a patient cohort with suspected melioidosis were re-analysed using several Bayesian LCMs. Sensitivities, specificities, and positive and negative predictive values (PPVs and NPVs) were calculated. Of 320 patients with suspected melioidosis, 119 (37%) had culture-confirmed melioidosis. Using the final model (Bayesian LCM with conditional dependence between serological tests), the sensitivity of culture was estimated to be 60.2%. Prediction accuracy of the final model was assessed using a classification tool to grade patients according to the likelihood of melioidosis, which indicated that an estimated disease prevalence of 61.6% was credible. Estimates of sensitivities, specificities, PPVs and NPVs of four serological tests were significantly different from previously published values in which culture was used as the gold standard. CONCLUSIONS/SIGNIFICANCE: Culture has low sensitivity and low NPV for the diagnosis of melioidosis and is an imperfect gold standard against which to evaluate alternative tests. Models should be used to support the evaluation of diagnostic tests with an imperfect gold standard. It is likely that the poor sensitivity/specificity of culture is not specific for melioidosis, but rather a generic
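
    The core idea, that test accuracies and prevalence are estimable jointly without a gold standard, can be sketched with a small EM fit of a two-class latent class model (maximum likelihood with conditional independence, rather than the authors' Bayesian estimation):

        import numpy as np

        def lcm_em(tests, n_iter=500):
            """EM for a 2-class latent class model with conditionally
            independent binary tests; returns (prevalence, sensitivities,
            specificities), estimated without any gold standard."""
            n, k = tests.shape
            prev, se, sp = 0.5, np.full(k, 0.7), np.full(k, 0.7)
            for _ in range(n_iter):
                # E-step: posterior probability that each subject is diseased
                l1 = prev * np.prod(se ** tests * (1 - se) ** (1 - tests), axis=1)
                l0 = (1 - prev) * np.prod((1 - sp) ** tests * sp ** (1 - tests), axis=1)
                w = l1 / (l1 + l0)
                # M-step: update prevalence and per-test accuracies
                prev = w.mean()
                se = (w[:, None] * tests).sum(0) / w.sum()
                sp = ((1 - w)[:, None] * (1 - tests)).sum(0) / (1 - w).sum()
            return prev, se, sp

        rng = np.random.default_rng(4)
        disease = rng.random(500) < 0.4                    # latent true status
        true_se = np.array([0.60, 0.90, 0.85])             # test 0 mimics culture
        true_sp = np.array([1.00, 0.80, 0.90])
        tests = np.where(disease[:, None],
                         rng.random((500, 3)) < true_se,
                         rng.random((500, 3)) >= true_sp).astype(int)
        print(lcm_em(tests))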

  3. Automatic localization of IASLC-defined mediastinal lymph node stations on CT images using fuzzy models

    Science.gov (United States)

    Matsumoto, Monica M. S.; Beig, Niha G.; Udupa, Jayaram K.; Archer, Steven; Torigian, Drew A.

    2014-03-01

    Lung cancer is associated with the highest cancer mortality rates among men and women in the United States. The accurate and precise identification of the lymph node stations on computed tomography (CT) images is important for staging disease and potentially for prognosticating outcome in patients with lung cancer, as well as for pretreatment planning and response assessment purposes. To facilitate a standard means of referring to lymph nodes, the International Association for the Study of Lung Cancer (IASLC) has recently proposed a definition of the different lymph node stations and zones in the thorax. However, nodal station identification is typically performed manually by visual assessment in clinical radiology. This approach leaves room for error due to the subjective and potentially ambiguous nature of visual interpretation, and is labor intensive. We present a method of automatically recognizing the mediastinal IASLC-defined lymph node stations by modifying a hierarchical fuzzy modeling approach previously developed for body-wide automatic anatomy recognition (AAR) in medical imagery. Our AAR-lymph node (AAR-LN) system follows the AAR methodology and consists of two steps. In the first step, the various lymph node stations are manually delineated on a set of CT images following the IASLC definitions. These delineations are then used to build a fuzzy hierarchical model of the nodal stations which are considered as 3D objects. In the second step, the stations are automatically located on any given CT image of the thorax by using the hierarchical fuzzy model and object recognition algorithms. Based on 23 data sets used for model building, 22 independent data sets for testing, and 10 lymph node stations, a mean localization accuracy of within 1-6 voxels has been achieved by the AAR-LN system.

  4. Understanding and forecasting polar stratospheric variability with statistical models

    Directory of Open Access Journals (Sweden)

    C. Blume

    2012-02-01

    Full Text Available The variability of the north-polar stratospheric vortex is a prominent aspect of the middle atmosphere. This work investigates a wide class of statistical models with respect to their ability to model geopotential and temperature anomalies, representing variability in the polar stratosphere. Four partly nonstationary, nonlinear models are assessed: linear discriminant analysis (LDA); a cluster method based on finite elements (FEM-VARX); a neural network, namely a multi-layer perceptron (MLP); and support vector regression (SVR). These methods model time series by incorporating all significant external factors simultaneously, including ENSO, QBO, the solar cycle, volcanoes, etc., to then quantify their statistical importance. We show that variability in reanalysis data from 1980 to 2005 is successfully modeled. FEM-VARX and MLP even satisfactorily forecast the period from 2005 to 2011. However, internal variability remains that cannot be statistically forecasted, such as the unexpected major warming in January 2009. Finally, the statistical model with the best generalization performance is used to predict a vortex breakdown in late January, early February 2012.

  5. De-risking pharmaceutical tablet manufacture through process understanding, latent variable modeling, and optimization technologies.

    Science.gov (United States)

    Muteki, Koji; Swaminathan, Vidya; Sekulic, Sonja S; Reid, George L

    2011-12-01

    In pharmaceutical tablet manufacturing processes, a major source of disturbance affecting drug product quality is the (lot-to-lot) variability of the incoming raw materials. A novel modeling and process optimization strategy that compensates for raw material variability is presented. The approach involves building partial least squares models that combine raw material attributes and tablet process parameters and relate these to final tablet attributes. The resulting models are used in an optimization framework to then find optimal process parameters which can satisfy all the desired requirements for the final tablet attributes, subject to the incoming raw material lots. In order to de-risk the potential (lot-to-lot) variability of raw materials on the drug product quality, the effect of raw material lot variability on the final tablet attributes was investigated using a raw material database containing a large number of lots. In this way, the raw material variability, optimal process parameter space and tablet attributes are correlated with each other and offer the opportunity of simulating a variety of changes in silico without actually performing experiments. The connectivity obtained between the three sources of variability (materials, parameters, attributes) can be considered a design space consistent with Quality by Design principles, which is defined by the ICH-Q8 guidance (USDA 2006). The effectiveness of the methodologies is illustrated through a common industrial tablet manufacturing case study.
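
    A minimal sketch of the modeling step with sklearn's PLS regression (the blocks and dimensions are invented stand-ins for the paper's material attributes, process parameters, and tablet attributes):

        import numpy as np
        from sklearn.cross_decomposition import PLSRegression

        rng = np.random.default_rng(5)
        n = 80
        raw = rng.standard_normal((n, 4))    # raw material attributes per lot
        proc = rng.standard_normal((n, 3))   # tablet process parameters
        X = np.hstack([raw, proc])
        # tablet attributes depend on both blocks plus noise
        Y = np.column_stack([X @ rng.normal(size=7) + 0.3 * rng.standard_normal(n),
                             X @ rng.normal(size=7) + 0.3 * rng.standard_normal(n)])

        pls = PLSRegression(n_components=3).fit(X, Y)
        print("R^2 on training data:", round(pls.score(X, Y), 3))
        # In the optimization step, process parameters are searched so that
        # pls.predict(...) meets tablet specifications for a given new lot.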

  6. Sleeping Beauty mutagenesis in a mouse medulloblastoma model defines networks that discriminate between human molecular subgroups

    Science.gov (United States)

    Genovesi, Laura A.; Ng, Ching Ging; Davis, Melissa J.; Remke, Marc; Taylor, Michael D.; Adams, David J.; Rust, Alistair G.; Ward, Jerrold M.; Ban, Kenneth H.; Jenkins, Nancy A.; Copeland, Neal G.; Wainwright, Brandon J.

    2013-01-01

    The Sleeping Beauty (SB) transposon mutagenesis screen is a powerful tool to facilitate the discovery of cancer genes that drive tumorigenesis in mouse models. In this study, we sought to identify genes that functionally cooperate with sonic hedgehog signaling to initiate medulloblastoma (MB), a tumor of the cerebellum. By combining SB mutagenesis with Patched1 heterozygous mice (Ptch1lacZ/+), we observed an increased frequency of MB and decreased tumor-free survival compared with Ptch1lacZ/+ controls. From an analysis of 85 tumors, we identified 77 common insertion sites that map to 56 genes potentially driving increased tumorigenesis. The common insertion site genes identified in the mutagenesis screen were mapped to human orthologs, which were used to select probes and corresponding expression data from an independent set of previously described human MB samples, and surprisingly were capable of accurately clustering known molecular subgroups of MB, thereby defining common regulatory networks underlying all forms of MB irrespective of subgroup. We performed a network analysis to discover the likely mechanisms of action of subnetworks and used an in vivo model to confirm a role for a highly ranked candidate gene, Nfia, in promoting MB formation. Our analysis implicates candidate cancer genes in the deregulation of apoptosis and translational elongation, and reveals a strong signature of transcriptional regulation that will have broad impact on expression programs in MB. These networks provide functional insights into the complex biology of human MB and identify potential avenues for intervention common to all clinical subgroups. PMID:24167280

  7. A study to define and verify a model of interactive-constructive elementary school science teaching

    Science.gov (United States)

    Henriques, Laura

    This study took place within a four year systemic reform effort collaboratively undertaken by the Science Education Center at the University of Iowa and a local school district. Key features of the inservice project included the use of children's literature as a springboard into inquiry based science investigations, activities to increase parents' involvement in children's science learning and extensive inservice opportunities for elementary teachers to increase content knowledge and content-pedagogical knowledge. The overarching goal of this elementary science teacher enhancement project was to move teachers towards an interactive-constructivist model of teaching and learning. This study had three components. The first was the definition of the prototype teacher indicated by the project's goals and supported by science education research. The second involved the generation of a model to show relationships between teacher-generated products, demographics and their subsequent teaching behaviors. The third involved the verification of the hypothesized model using data collected on 15 original participants. Demographic information, survey responses, interview and written responses to scenarios were among the data collected as source variables. These were scored using a rubric designed to measure constructivist practices in science teaching. Videotapes of science teaching and revised science curricula were collected as downstream variables and scored using an the ESTEEM observational rubric and a rubric developed for the project. Results indicate that newer teachers were more likely to implement features of the project. Those teachers who were philosophically aligned with project goals before project involvement were also more likely to implement features of the project. Other associations between reported beliefs, planning and classroom implementations were not confirmed by these data. Data show that teachers reported higher levels of implementation than their

  8. Building prognostic models for breast cancer patients using clinical variables and hundreds of gene expression signatures

    Directory of Open Access Journals (Sweden)

    Liu Yufeng

    2011-01-01

    Full Text Available Abstract Background Multiple breast cancer gene expression profiles have been developed that appear to provide similar abilities to predict outcome and may outperform clinical-pathologic criteria; however, the extent to which seemingly disparate profiles provide additive prognostic information is not known, nor do we know whether prognostic profiles perform equally across clinically defined breast cancer subtypes. We evaluated whether combining the prognostic powers of standard breast cancer clinical variables with a large set of gene expression signatures could improve on our ability to predict patient outcomes. Methods Using clinical-pathological variables and a collection of 323 gene expression "modules", including 115 previously published signatures, we built multivariate Cox proportional hazards models using a dataset of 550 node-negative systemically untreated breast cancer patients. Models predictive of pathological complete response (pCR) to neoadjuvant chemotherapy were also built using this approach. Results We identified statistically significant prognostic models for relapse-free survival (RFS) at 7 years for the entire population, and for the subgroups of patients with ER-positive or Luminal tumors. Furthermore, we found that combined models that included both clinical and genomic parameters improved prognostication compared with models with either clinical or genomic variables alone. Finally, we were able to build statistically significant combined models for pathological complete response (pCR) predictions for the entire population. Conclusions Integration of gene expression signatures and clinical-pathological factors is an improved method over either variable type alone. Highly prognostic models could be created when using all patients, and for the subset of patients with lymph node-negative and ER-positive breast cancers. Other variables beyond gene expression and clinical-pathological variables, like gene mutation status or DNA

  9. Estimation of the Heteroskedastic Canonical Contagion Model with Instrumental Variables

    Science.gov (United States)

    2016-01-01

    Knowledge of contagion among economies is a relevant issue in economics. The canonical model of contagion is an alternative in this case. Given the existence of endogenous variables in the model, instrumental variables can be used to decrease the bias of the OLS estimator. In the presence of heteroskedastic disturbances this paper proposes the use of conditional volatilities as instruments. Simulation is used to show that the homoscedastic and heteroskedastic estimators which use them as instruments have small bias. These estimators are preferable in comparison with the OLS estimator and their asymptotic distribution can be used to construct confidence intervals. PMID:28030628
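
    The just-identified two-stage least squares estimator the paper builds on can be written in a few lines; the data-generating process below is an invented illustration of the endogeneity problem:

        import numpy as np

        rng = np.random.default_rng(6)
        n = 1000
        z = rng.standard_normal(n)                         # instrument
        u = rng.standard_normal(n)                         # unobserved confounder
        x = 0.8 * z + u + 0.3 * rng.standard_normal(n)     # endogenous regressor
        y = 1.5 * x + u + 0.3 * rng.standard_normal(n)     # true beta = 1.5

        Z = np.column_stack([np.ones(n), z])
        X = np.column_stack([np.ones(n), x])
        beta_ols = np.linalg.lstsq(X, y, rcond=None)[0]
        beta_iv = np.linalg.solve(Z.T @ X, Z.T @ y)        # (Z'X)^{-1} Z'y
        print(f"OLS (biased): {beta_ols[1]:.3f}   IV: {beta_iv[1]:.3f}")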

  10. The Properties of Model Selection when Retaining Theory Variables

    DEFF Research Database (Denmark)

    Hendry, David F.; Johansen, Søren

    Economic theories are often fitted directly to data to avoid possible model selection biases. We show that embedding a theory model that specifies the correct set of m relevant exogenous variables, x{t}, within the larger set of m+k candidate variables, (x{t},w{t}), then selection over the second...... set by their statistical significance can be undertaken without affecting the estimator distribution of the theory parameters. This strategy returns the theory-parameter estimates when the theory is correct, yet protects against the theory being under-specified because some w{t} are relevant....

  11. A Variable Precision Covering-Based Rough Set Model Based on Functions

    Directory of Open Access Journals (Sweden)

    Yanqing Zhu

    2014-01-01

    Full Text Available Classical rough set theory is a technique of granular computing for handling the uncertainty, vagueness, and granularity in information systems. Covering-based rough sets are proposed to generalize this theory for dealing with covering data. By introducing a concept of misclassification rate functions, an extended variable precision covering-based rough set model is proposed in this paper. In addition, we define the f-lower and f-upper approximations in terms of neighborhoods in the extended model and study their properties. Particularly, two coverings with the same reductions are proved to generate the same f-lower and f-upper approximations. Finally, we discuss the relationships between the new model and some other variable precision rough set models.
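
    The neighborhood-based approximations can be made concrete in a few lines; the threshold rule below is a simplified stand-in for the paper's misclassification rate functions:

        from functools import reduce

        U = set(range(8))                                             # universe
        cover = [{0, 1, 2}, {2, 3, 4}, {4, 5}, {5, 6, 7}, {1, 3, 7}]  # a covering of U
        X = {0, 1, 2, 3}                                              # target set

        def neighborhood(x):
            """Intersection of all covering blocks containing x."""
            return reduce(set.intersection, (K for K in cover if x in K))

        def lower_upper(X, beta=0.25):
            """x enters the lower approximation when the misclassification
            rate of its neighborhood w.r.t. X does not exceed beta."""
            lower = {x for x in U
                     if len(neighborhood(x) - X) / len(neighborhood(x)) <= beta}
            upper = {x for x in U
                     if len(neighborhood(x) & X) / len(neighborhood(x)) > beta}
            return lower, upper

        print(lower_upper(X))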

  12. A defining aspect of human resilience in the workplace: a structural modeling approach.

    Science.gov (United States)

    Everly, George S; Davy, Jeanettte A; Smith, Kenneth J; Lating, Jeffrey M; Nucifora, Frederick C

    2011-06-01

    It has been estimated that up to 90% of the US population is exposed to at least 1 traumatic event during their lifetime. Although there is growing evidence that most people are resilient, meaning that they have the ability to adapt to or rebound from adversity, between 5% and 10% of individuals exposed to traumatic events meet criteria for posttraumatic stress disorder. Therefore, identifying the elements of resilience could lead to interventions or training programs designed to enhance resilience. In this article, we test the hypothesis that the effects of stressor conditions on outcomes such as job-related variables may be mediated through the cognitive and affective registrations of those events, conceptualized as subjective stress arousal. The subjects were 491 individuals employed in public accounting, who were sampled from a mailing list provided by the American Institute of Certified Public Accountants. The stressors used in this study were role ambiguity, role conflict, and role overload and the outcome measures were performance, turnover intentions, job satisfaction, and burnout. Stress arousal was measured using a previously developed stress arousal scale. We conducted a series of 2 EQS structural modeling analyses to assess the impact of stress arousal. The first model examined only the direct effects from the role stressors to the outcome constructs. The second model inserted stress arousal as a mediator in the relations between the role stressors and the outcomes. The results of our investigation supported the notion that subjective stress arousal provides greater explanatory clarity by mediating the effects of stressors upon job-related outcome. Including stress arousal in the model provided a much more comprehensive understanding of the relation between stressor and outcomes, and the contribution of role ambiguity and role conflict were better explained. By understanding these relations, anticipatory guidance and crisis intervention programs can be

  13. Modeling pyramidal sensors in ray-tracing software by a suitable user-defined surface

    Science.gov (United States)

    Antichi, Jacopo; Munari, Matteo; Magrin, Demetrio; Riccardi, Armando

    2016-04-01

    Following the unprecedented results in terms of performance delivered by the first-light adaptive optics system at the Large Binocular Telescope, there has been widespread and increasing interest in the pyramid wavefront sensor (PWFS), which is the key component, together with the adaptive secondary mirror, of the adaptive optics (AO) module. Currently, there is no straightforward way to model a PWFS in standard sequential ray-tracing software. Common modeling strategies tend to be user-specific and, in general, are unsatisfactory for general applications. To address this problem, we have developed an approach to PWFS modeling based on a user-defined surface (UDS), whose properties reside in a specific code written in the C language, for the ray-tracing software ZEMAX™. With our approach, the pyramid optical component is implemented as a standard surface in ZEMAX™, exploiting its dynamic link library (DLL) conversion, thereby greatly simplifying ray tracing and analysis. We have utilized the pyramid UDS DLL surface, referred to as PAM2R ("pyramidal acronyms may be too risky"), in order to design the current PWFS-based AO system for the Giant Magellan Telescope, evaluating tolerances, with particular attention to the angular sensitivities, by means of sequential ray-tracing tools only, thus verifying PAM2R's reliability and robustness. This work indicates that PAM2R makes the design of a PWFS as simple as that of other standard optical components. This is particularly suitable with the advent of the extremely large telescopes era, for which complexity is definitely one of the main challenges.

  14. Mathematical modeling of variables involved in dissolution testing.

    Science.gov (United States)

    Gao, Zongming

    2011-11-01

    Dissolution testing is an important technique used for development and quality control of solid oral dosage forms of pharmaceutical products. However, the variability associated with this technique, especially with USP apparatuses 1 and 2, is a concern for both the US Food and Drug Administration and pharmaceutical companies. Dissolution testing involves a number of variables, which can be divided into four main categories: (1) analyst, (2) dissolution apparatus, (3) testing environment, and (4) sample. Both linear and nonlinear models have been used to study dissolution profiles, and various mathematical functions have been used to model the observed data. In this study, several variables, including dissolved gases in the dissolution medium, off-center placement of the test tablet, environmental vibration, and various agitation speeds, were modeled. Mathematical models including Higuchi, Korsmeyer-Peppas, Weibull, and the Noyes-Whitney equation were employed to study the dissolution profile of 10 mg prednisone tablets (NCDA #2) using the USP paddle method. The results showed that the nonlinear models (Korsmeyer-Peppas and Weibull) accurately described the entire dissolution profile. The results also showed that dissolution variables affected dissolution rate constants differently, depending on whether the tablets disintegrated or dissolved.
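
    Fitting one of the named profiles is a one-call exercise; a sketch with synthetic dissolution data and the Weibull form (values are invented, not the prednisone results):

        import numpy as np
        from scipy.optimize import curve_fit

        def weibull(t, fmax, tau, beta):
            """Cumulative % dissolved at time t (Weibull dissolution profile)."""
            return fmax * (1.0 - np.exp(-(t / tau) ** beta))

        t = np.array([5, 10, 15, 20, 30, 45, 60], dtype=float)          # minutes
        released = np.array([18, 39, 55, 67, 82, 93, 97], dtype=float)  # % dissolved

        params, _ = curve_fit(weibull, t, released, p0=(100.0, 15.0, 1.0))
        print("fmax=%.1f%%  tau=%.1f min  beta=%.2f" % tuple(params))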

  15. Rose bush leaf and internode expansion dynamics: analysis and development of a model capturing interplant variability

    Directory of Open Access Journals (Sweden)

    Sabine eDemotes-Mainard

    2013-10-01

    Full Text Available Bush rose architecture, among other factors such as plant health, determines plant visual quality. The commercial product is the individual plant, and interplant variability may be high within a crop. Thus, both mean plant architecture and interplant variability should be studied. Expansion is an important feature of architecture, but it has been little studied at the level of individual organs in bush roses. We investigated the expansion kinetics of primary shoot organs, to develop a model reproducing the organ expansion of real crops from non-destructive input variables. We took into account interplant variability in expansion kinetics and the model's ability to simulate this variability. Changes in leaflet and internode dimensions over thermal time were recorded for primary shoot expansion, on 83 plants from three crops grown in different climatic conditions and densities. An empirical model was developed, to reproduce organ expansion kinetics for individual plants of a real crop of bush rose primary shoots. Leaflet or internode length was simulated as a logistic function of thermal time. The model was evaluated by cross-validation. We found that differences in leaflet or internode expansion kinetics between phytomer positions and between plants at a given phytomer position were due mostly to large differences in time of organ expansion and expansion rate, rather than differences in expansion duration. Thus, in the model, the parameters linked to expansion duration were predicted by values common to all plants, whereas variability in final size and organ expansion time was captured by input data. The model accurately simulated leaflet and internode expansion for individual plants (RMSEP = 7.3% and 10.2% of final length, respectively). Thus, this study defines the measurements required to simulate expansion and provides the first model simulating organ expansion in rose bush to capture interplant variability.

  16. Modeling variability and trends in pesticide concentrations in streams

    Science.gov (United States)

    Vecchia, A.V.; Martin, J.D.; Gilliom, R.J.

    2008-01-01

    A parametric regression model was developed for assessing the variability and long-term trends in pesticide concentrations in streams. The dependent variable is the logarithm of pesticide concentration and the explanatory variables are a seasonal wave, which represents the seasonal variability of concentration in response to seasonal application rates; a streamflow anomaly, which is the deviation of concurrent daily streamflow from average conditions for the previous 30 days; and a trend, which represents long-term (inter-annual) changes in concentration. Application of the model to selected herbicides and insecticides in four diverse streams indicated the model is robust with respect to pesticide type, stream location, and the degree of censoring (proportion of nondetections). An automatic model fitting and selection procedure for the seasonal wave and trend components was found to perform well for the datasets analyzed. Artificial censoring scenarios were used in a Monte Carlo simulation analysis to show that the fitted trends were unbiased and the approximate p-values were accurate for as few as 10 uncensored concentrations during a three-year period, assuming a sampling frequency of 15 samples per year. Trend estimates for the full model were compared with a model without the streamflow anomaly and a model in which the seasonality was modeled using standard trigonometric functions, rather than seasonal application rates. Exclusion of the streamflow anomaly resulted in substantial increases in the mean-squared error and decreases in power for detecting trends. Incorrectly modeling the seasonal structure of the concentration data resulted in substantial estimation bias and moderate increases in mean-squared error and decreases in power. ?? 2008 American Water Resources Association.
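
    The regression structure described can be emulated on synthetic data; note that a trigonometric seasonal wave is used below purely as a simplification of the paper's application-rate-based wave:

        import numpy as np

        rng = np.random.default_rng(7)
        days = np.arange(3 * 365)
        flow = np.convolve(rng.standard_normal(days.size), np.ones(10) / 10, "same")
        anomaly = flow - np.convolve(flow, np.ones(30) / 30, "same")  # vs 30-day mean

        # synthetic log-concentration: seasonality + flow anomaly + downward trend
        logc = (1.2 * np.sin(2 * np.pi * days / 365) + 0.8 * anomaly
                - 0.0004 * days + 0.3 * rng.standard_normal(days.size))

        # design matrix: harmonic seasonality, anomaly, linear trend, intercept
        Xd = np.column_stack([np.sin(2 * np.pi * days / 365),
                              np.cos(2 * np.pi * days / 365),
                              anomaly, days, np.ones(days.size)])
        coef, *_ = np.linalg.lstsq(Xd, logc, rcond=None)
        print("estimated trend (log-units per day):", round(coef[3], 6))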

  17. Modeling heart rate variability including the effect of sleep stages

    Science.gov (United States)

    Soliński, Mateusz; Gierałtowski, Jan; Żebrowski, Jan

    2016-02-01

    We propose a model for the heart rate variability (HRV) of a healthy individual during sleep, with the assumption that heart rate variability is predominantly a random process. Autonomic nervous system activity has different properties during different sleep stages, and this affects many physiological systems including the cardiovascular system. Different properties of HRV can be observed during each particular sleep stage. We believe that taking the sleep architecture into account is crucial for modeling human nighttime HRV. The stochastic model of HRV introduced by Kantelhardt et al. was used as the starting point. We studied the statistical properties of sleep in healthy adults, analyzing 30 polysomnographic recordings, which provided realistic information about sleep architecture. Next, we generated synthetic hypnograms and included them in the modeling of nighttime RR interval series. The results of standard HRV linear analysis and of nonlinear analysis (Shannon entropy, Poincaré plots, and multiscale multifractal analysis) show that—in comparison with real data—the HRV signals obtained from our model have very similar properties, in particular including the multifractal characteristics at different time scales. The model described in this paper is discussed in the context of normal sleep. However, its construction is such that it should also allow heart rate variability in sleep disorders to be modeled. This possibility is briefly discussed.
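
    A minimal sketch of the two ingredients named above, a synthetic hypnogram generated by a Markov chain over sleep stages plus stage-dependent RR-interval statistics (all transition probabilities and stage parameters below are illustrative placeholders, not the paper's fitted values; the actual model adds correlated, multifractal beat-to-beat noise):

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical stage-dependent RR statistics (mean, sd in seconds)
stages = {"wake": (0.80, 0.05), "light": (1.00, 0.04),
          "deep": (1.05, 0.02), "rem": (0.90, 0.06)}
order = list(stages)
# Toy transition matrix over 30-s epochs (rows sum to 1); the paper
# estimates transition statistics from 30 polysomnographic recordings
P = np.array([[0.90, 0.08, 0.01, 0.01],
              [0.02, 0.90, 0.05, 0.03],
              [0.01, 0.07, 0.90, 0.02],
              [0.03, 0.07, 0.02, 0.88]])

state, rr = 1, []
for _ in range(960):                        # ~8 h of 30-s epochs
    mu, sd = stages[order[state]]
    rr.extend(rng.normal(mu, sd, int(30 / mu)))  # beats in this epoch
    state = rng.choice(4, p=P[state])
print(f"{len(rr)} RR intervals, mean {np.mean(rr):.3f} s")
```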

  18. Defining the range of pathogens susceptible to Ifitm3 restriction using a knockout mouse model.

    Directory of Open Access Journals (Sweden)

    Aaron R Everitt

    Full Text Available The interferon-inducible transmembrane (IFITM) family of proteins has been shown to restrict a broad range of viruses in vitro and in vivo by halting progress through the late endosomal pathway. Further, single nucleotide polymorphisms (SNPs) in its sequence have been linked with risk of developing severe influenza virus infections in humans. The number of viruses restricted by this host protein has continued to grow since it was first demonstrated to play an antiviral role; all of them enter cells via the endosomal pathway. We therefore sought to test the limits of antimicrobial restriction by Ifitm3 using a knockout mouse model. We showed that Ifitm3 does not affect the restriction or pathogenesis of bacterial (Salmonella typhimurium, Citrobacter rodentium, Mycobacterium tuberculosis) or protozoan (Plasmodium berghei) pathogens, despite in vitro evidence. However, Ifitm3 is capable of restricting respiratory syncytial virus (RSV) in vivo, either by directly restricting RSV cell infection or by exerting a previously uncharacterised function controlling disease pathogenesis. This represents the first demonstration of a virus that enters directly through the plasma membrane, without the need for the endosomal pathway, being restricted by the IFITM family, thereby further defining the role of these antiviral proteins.

  19. Defining Human Tyrosine Kinase Phosphorylation Networks Using Yeast as an In Vivo Model Substrate.

    Science.gov (United States)

    Corwin, Thomas; Woodsmith, Jonathan; Apelt, Federico; Fontaine, Jean-Fred; Meierhofer, David; Helmuth, Johannes; Grossmann, Arndt; Andrade-Navarro, Miguel A; Ballif, Bryan A; Stelzl, Ulrich

    2017-08-23

    Systematic assessment of tyrosine kinase-substrate relationships is fundamental to a better understanding of cellular signaling and its profound alterations in human diseases such as cancer. In human cells, such assessments are confounded by complex signaling networks, feedback loops, conditional activity, and intra-kinase redundancy. Here we address this challenge by exploiting the yeast proteome as an in vivo model substrate. We individually expressed 16 human non-receptor tyrosine kinases (NRTKs) in Saccharomyces cerevisiae and identified 3,279 kinase-substrate relationships involving 1,351 yeast phosphotyrosine (pY) sites. Based on the yeast data without prior information, we generated a set of linear kinase motifs and assigned ∼1,300 known human pY sites to specific NRTKs. Furthermore, experimentally defined pY sites for each individual kinase were shown to cluster within the yeast interactome network irrespective of linear motif information. We therefore applied a network inference approach to predict kinase-substrate relationships for more than 3,500 human proteins, providing a resource to advance our understanding of kinase biology. Copyright © 2017 Elsevier Inc. All rights reserved.

  20. Compound nucleus formation probability PCN defined within the dynamical cluster-decay model

    Science.gov (United States)

    Chopra, Sahila; Kaur, Arshdeep; Gupta, Raj K.

    2015-01-01

    Within the dynamical cluster-decay model (DCM), the compound nucleus fusion/formation probability PCN is defined for the first time, and its variation with CN excitation energy E* and fissility parameter χ is studied. In the DCM, the (total) fusion cross section σfusion is the sum of the compound nucleus (CN) and noncompound nucleus (nCN) decay processes, each calculated as a dynamical fragmentation process. The CN cross section σCN is constituted of the evaporation residues (ER) and fusion-fission (ff), including the intermediate mass fragments (IMFs), each calculated for all contributing decay fragments (A1, A2) in terms of their formation and barrier penetration probabilities P0 and P. The nCN cross section σnCN is determined as the quasi-fission (qf) process, where P0=1 and P is calculated for the entrance channel nuclei. The calculations are presented for six different target-projectile combinations of CN mass A~100 to superheavy, at various center-of-mass energies, with the effects of deformations and orientations of nuclei included. An interesting result is that PCN=1 for complete fusion, but PCN <1 or ≪1 due to the nCN contribution, depending strongly on both E* and χ.
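
    As a compact summary of the quantities above (reading PCN as the CN share of the total fusion cross section is our interpretation of the definition, consistent with PCN = 1 for complete fusion and PCN < 1 with an nCN contribution):

```latex
\sigma_\mathrm{fusion} = \sigma_\mathrm{CN} + \sigma_\mathrm{nCN},
\qquad
\sigma_\mathrm{CN} = \sigma_\mathrm{ER} + \sigma_\mathrm{ff},
\qquad
P_\mathrm{CN} = \frac{\sigma_\mathrm{CN}}{\sigma_\mathrm{CN} + \sigma_\mathrm{nCN}}
```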

  2. Robust Structural Equation Modeling with Missing Data and Auxiliary Variables

    Science.gov (United States)

    Yuan, Ke-Hai; Zhang, Zhiyong

    2012-01-01

    The paper develops a two-stage robust procedure for structural equation modeling (SEM) and an R package "rsem" to facilitate the use of the procedure by applied researchers. In the first stage, M-estimates of the saturated mean vector and covariance matrix of all variables are obtained. Those corresponding to the substantive variables…

  3. Environmental Concern and Sociodemographic Variables: A Study of Statistical Models

    Science.gov (United States)

    Xiao, Chenyang; McCright, Aaron M.

    2007-01-01

    Studies of the social bases of environmental concern over the past 30 years have produced somewhat inconsistent results regarding the effects of sociodemographic variables, such as gender, income, and place of residence. The authors argue that model specification errors resulting from violation of two statistical assumptions (interval-level…

  4. Multiple Imputation of Predictor Variables Using Generalized Additive Models

    NARCIS (Netherlands)

    de Jong, Roel; van Buuren, Stef; Spiess, Martin

    2016-01-01

    The sensitivity of multiple imputation methods to deviations from their distributional assumptions is investigated using simulations, where the parameters of scientific interest are the coefficients of a linear regression model, and values in predictor variables are missing at random. The performanc

  5. Modeling quasi-static magnetohydrodynamic turbulence with variable energy flux

    CERN Document Server

    Verma, Mahendra K

    2014-01-01

    In quasi-static MHD, experiments and numerical simulations reveal that the energy spectrum is steeper than Kolmogorov's $k^{-5/3}$ spectrum. To explain this observation, we construct turbulence models based on variable energy flux, which is caused by the Joule dissipation. In the first model, which is applicable to small interaction parameters, the energy spectrum is a power law, but with a spectral exponent steeper than -5/3. In the other limit of large interaction parameters, the second model predicts an exponential energy spectrum and flux. The model predictions are in good agreement with the numerical results.

  6. Five-Dimensional Cosmological Model with Variable G and Λ

    Institute of Scientific and Technical Information of China (English)

    H. Baysal; İ. Yilmaz

    2007-01-01

    Einstein's field equations with G and Λ both varying with time are considered in the presence of a perfect fluid for a five-dimensional cosmological model, in a way which conserves the energy momentum tensor of the matter content. Several sets of explicit solutions in the five-dimensional Kaluza-Klein type cosmological models with variable G and Λ are obtained. The diminishment of the extra dimension with the evolution of the universe for the five-dimensional model is exhibited. The physical properties of the models are examined.

  7. Hidden variable models for quantum mechanics can have local parts

    CERN Document Server

    Larsson, Jan-Ake

    2009-01-01

    We present an explicit nonlocal nonsignaling model which has a nontrivial local part and is compatible with quantum mechanics. This model constitutes a counterexample to Colbeck and Renner's statement [Phys. Rev. Lett. 101, 050403 (2008)] that "any hidden variable model can only be compatible with quantum mechanics if its local part is trivial". Furthermore, we examine Colbeck and Renner's definition of "local part" and find that, in the case of models reproducing the quantum predictions for the singlet state, it is a restriction equivalent to the conjunction of nonsignaling and trivial local part.

  8. Stability Analysis of a Variable Meme Transmission Model

    OpenAIRE

    Reem Al-Amoudi; Salma Al-Tuwairqi; Sarah Al-Sheikh

    2014-01-01

    Meme propagation is a common form of social interaction. Understanding the dynamics of meme transmission enables one to find the conditions that lead to the persistence or disappearance of memes. In this paper we qualitatively analyze a mathematical model of variable meme transmission. Two equilibrium points of the model are examined: the meme-free equilibrium and the meme-existence equilibrium. The reproduction number R₀ for the generation of new memes is found. Local and global stability of the equilibrium ...

  9. Variable bit rate video traffic modeling by multiplicative multifractal model

    Institute of Scientific and Technical Information of China (English)

    Huang Xiaodong; Zhou Yuanhua; Zhang Rongfu

    2006-01-01

    A multiplicative multifractal process can model video traffic well. The multiplier distributions in the multiplicative multifractal model for video traffic are investigated, and it is found that the Gaussian distribution is not suitable for describing the multipliers on small time scales. A new statistical distribution, the symmetric Pareto distribution, is introduced and applied instead of the Gaussian for the multipliers on those scales. Based on that, the algorithm is updated so that the symmetric Pareto distribution and the Gaussian distribution are used to model video traffic on different time scales. The simulation results demonstrate that the updated algorithm models video traffic more accurately.
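
    A minimal sketch of the underlying construction, a conservative multiplicative cascade with Gaussian multipliers on coarse scales and symmetric-Pareto multipliers on the finest scales (the scale cutoff and the symmetric-Pareto parameterization here are illustrative assumptions, not the paper's):

```python
import numpy as np

rng = np.random.default_rng(2)

def sym_pareto(n, b=2.5, scale=0.08):
    """Symmetric-Pareto deviations: Pareto-tailed magnitude with a
    random sign (exact parameterization assumed for illustration)."""
    mag = scale * (rng.pareto(b, n) + 1.0)
    return mag * rng.choice([-1.0, 1.0], n)

def cascade(levels, small_scale_levels=3):
    """Conservative cascade: each parent splits its mass into m, 1-m."""
    traffic = np.ones(1)
    for lev in range(levels):
        traffic = np.repeat(traffic, 2)
        n = traffic.size
        if lev >= levels - small_scale_levels:   # finest scales
            m = 0.5 + sym_pareto(n // 2)
        else:                                    # coarse scales
            m = 0.5 + 0.1 * rng.standard_normal(n // 2)
        m = np.clip(m, 0.01, 0.99)
        w = np.empty(n)
        w[0::2], w[1::2] = m, 1.0 - m
        traffic *= w
    return traffic

frames = cascade(12)          # 4096 synthetic per-frame traffic weights
print(frames.sum(), frames.max())   # total mass is conserved (= 1.0)
```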

  10. A metric for attributing variability in modelled streamflows

    Science.gov (United States)

    Shoaib, Syed Abu; Marshall, Lucy; Sharma, Ashish

    2016-10-01

    Significant gaps in our present understanding of hydrological systems lead to enhanced uncertainty in key modelling decisions. This study proposes a method, namely "Quantile Flow Deviation (QFD)", for the attribution of forecast variability to different sources across different streamflow regimes. By using a quantile-based metric, we can assess the change in uncertainty across individual percentiles, thereby allowing uncertainty to be expressed as a function of magnitude and time. As a result, one can address selective sources of uncertainty depending on whether low or high flows (say) are of interest. By way of a case study, we demonstrate the usefulness of the approach for estimating the relative importance of model parameter identification, objective functions and model structures as sources of streamflow forecast uncertainty. We use FUSE (Framework for Understanding Structural Errors) to implement our methods, allowing selection of multiple different model structures. A cross-catchment comparison is done for two different catchments: Leaf River in Mississippi, USA and Bass River in Victoria, Australia. Two different approaches to parameter estimation, one based on GLUE and the other on optimization, are presented to demonstrate the statistic. The results presented in this study suggest that the determination of the model structure with the design catchment should be given priority, but that objective function selection with parameter identifiability can lead to significant variability in results. By examining the QFD across multiple flow quantiles, the ability of certain models and optimization routines to constrain variability for different flow conditions is demonstrated.
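
    The abstract does not print the QFD formula, so the sketch below is one plausible quantile-wise formalization for building intuition only: compute each flow quantile for every ensemble member, then measure the spread of that quantile across members (the paper's statistic may differ):

```python
import numpy as np

def quantile_flow_deviation(ensemble, quantiles=np.linspace(0.05, 0.95, 19)):
    """QFD-style metric (our plausible reading, not the paper's exact
    definition): per-quantile spread across an ensemble of simulated
    streamflow series.

    ensemble : array (n_members, n_timesteps) of simulated flows
    """
    q = np.quantile(ensemble, quantiles, axis=1)   # (n_q, n_members)
    return quantiles, q.std(axis=1)

rng = np.random.default_rng(3)
# Synthetic ensemble: 20 members differing in bias and noise
sims = np.exp(rng.normal(0, 1, (20, 1000)) + rng.normal(0, 0.3, (20, 1)))
qs, qfd = quantile_flow_deviation(sims)
print(np.round(qfd[[0, 9, 18]], 3))   # deviation at low/median/high flows
```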

  11. Sparse modeling of spatial environmental variables associated with asthma.

    Science.gov (United States)

    Chang, Timothy S; Gangnon, Ronald E; David Page, C; Buckingham, William R; Tandias, Aman; Cowan, Kelly J; Tomasallo, Carrie D; Arndt, Brian G; Hanrahan, Lawrence P; Guilbert, Theresa W

    2015-02-01

    Geographically distributed environmental factors influence the burden of diseases such as asthma. Our objective was to identify sparse environmental variables associated with asthma diagnosis, gathered from a large electronic health record (EHR) dataset, while controlling for spatial variation. An EHR dataset from the University of Wisconsin's Family Medicine, Internal Medicine and Pediatrics Departments was obtained for 199,220 patients aged 5-50 years over a three-year period. Each patient's home address was geocoded to one of 3456 geographic census block groups. Over one thousand block group variables were obtained from a commercial database. We developed a Sparse Spatial Environmental Analysis (SASEA). Using this method, the environmental variables were first dimensionally reduced with sparse principal component analysis. Logistic thin plate regression spline modeling was then used to identify block group variables associated with asthma from the sparse principal components. The addresses of patients in the EHR dataset were distributed throughout the majority of Wisconsin's geography. Logistic thin plate regression spline modeling captured the spatial variation of asthma. Four sparse principal components identified via model selection consisted of food-at-home, dog ownership, household size, and disposable income variables. In rural areas, dog ownership and renter-occupied housing units from significant sparse principal components were associated with asthma. Our main contribution is the incorporation of sparsity in spatial modeling. SASEA sequentially added sparse principal components to logistic thin plate regression spline modeling. This method allowed association of geographically distributed environmental factors with asthma using EHR and environmental datasets. SASEA can be applied to other diseases with environmental risk factors.
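
    A compressed sketch of the two SASEA stages on synthetic data (array sizes shrunk from the real 3,456 block groups and 1,000+ variables; the paper's spatial thin plate regression spline term is omitted here, leaving plain logistic regression on the sparse components):

```python
import numpy as np
from sklearn.decomposition import SparsePCA
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(4)
X_env = rng.normal(size=(500, 60))        # block groups x env. variables
asthma = rng.integers(0, 2, 500)          # synthetic diagnosis indicator

# Stage 1: dimension reduction with sparse principal components
spca = SparsePCA(n_components=4, alpha=1.0, random_state=0)
Z = spca.fit_transform(X_env)

# Stage 2: regression of diagnosis on the components (spatial smooth omitted)
model = LogisticRegression().fit(Z, asthma)
nonzero = (spca.components_ != 0).sum(axis=1)
print("nonzero loadings per component:", nonzero)
```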

  12. Alternative cokriging model for variable-fidelity surrogate modeling

    DEFF Research Database (Denmark)

    Han, Zhong Hua; Zimmermann, Ralf; Goertz, Stefan

    2012-01-01

    to construct global approximation models of the aerodynamic coefficients as well as the drag polar of an RAE 2822 airfoil. The kriging and cokriging models for the moment coefficient show that the poor space-filling properties of the quasi Monte Carlo sampling of the RANS simulations leaves a noticeable gap...

  13. REDUCING PROCESS VARIABILITY BY USING DMAIC MODEL: A CASE STUDY IN BANGLADESH

    Directory of Open Access Journals (Sweden)

    Ripon Kumar Chakrabortty

    2013-03-01

    Full Text Available Nowadays, many leading manufacturing industries practice Six Sigma and lean manufacturing concepts to boost their productivity as well as the quality of their products. In this paper, the Six Sigma approach has been used to reduce the process variability of a food processing industry in Bangladesh. The DMAIC (Define, Measure, Analyze, Improve, and Control) model has been used to implement the Six Sigma philosophy, and its five phases have been structured step by step. Different tools from Total Quality Management, Statistical Quality Control and lean manufacturing, such as quality function deployment, the P control chart, the fishbone diagram, the Analytic Hierarchy Process and Pareto analysis, have been used in different phases of the DMAIC model. Process variability has been reduced by identifying and addressing the root causes of defects. The ultimate goal of this study is to make the process lean and increase the sigma level.

  14. Spreaders and sponges define metastasis in lung cancer: a Markov chain Monte Carlo mathematical model.

    Science.gov (United States)

    Newton, Paul K; Mason, Jeremy; Bethel, Kelly; Bazhenova, Lyudmila; Nieva, Jorge; Norton, Larry; Kuhn, Peter

    2013-05-01

    The classic view of metastatic cancer progression is that it is a unidirectional process initiated at the primary tumor site, progressing to variably distant metastatic sites in a fairly predictable, although not perfectly understood, fashion. A Markov chain Monte Carlo mathematical approach can determine a pathway diagram that classifies metastatic tumors as "spreaders" or "sponges" and orders the timescales of progression from site to site. In light of recent experimental evidence highlighting the potential significance of self-seeding of primary tumors, we use a Markov chain Monte Carlo (MCMC) approach, based on large autopsy data sets, to quantify the stochastic, systemic, and often multidirectional aspects of cancer progression. We quantify three types of multidirectional mechanisms of progression: (i) self-seeding of the primary tumor, (ii) reseeding of the primary tumor from a metastatic site (primary reseeding), and (iii) reseeding of metastatic tumors (metastasis reseeding). The model shows that the combined characteristics of the primary and the first metastatic site to which it spreads largely determine the future pathways and timescales of systemic disease.
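
    A toy rendering of the Markov-chain framing (the transition matrix and the spreader/sponge criterion below are a simplified reading for illustration, not the paper's autopsy-fitted values):

```python
import numpy as np

sites = ["lung(primary)", "lymph", "liver", "bone"]
# Row-stochastic transition matrix; diagonal entries play the role of
# self-seeding, off-diagonal entries of spread and reseeding
P = np.array([[0.50, 0.25, 0.15, 0.10],
              [0.20, 0.40, 0.25, 0.15],
              [0.05, 0.15, 0.70, 0.10],
              [0.05, 0.10, 0.10, 0.75]])

# One simple spreader/sponge criterion (our reading of the idea): a site
# exporting more probability mass than it absorbs acts as a "spreader"
outflow = P.sum(axis=1) - np.diag(P)      # mass sent to other sites
inflow = P.sum(axis=0) - np.diag(P)       # mass received from other sites
for s, o, i in zip(sites, outflow, inflow):
    label = "spreader" if o > i else "sponge"
    print(f"{s:14s} {label} (out {o:.2f}, in {i:.2f})")
```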

  15. Evaluating two model reduction approaches for large scale hedonic models sensitive to omitted variables and multicollinearity

    DEFF Research Database (Denmark)

    Panduro, Toke Emil; Thorsen, Bo Jellesmark

    2014-01-01

    Hedonic models in environmental valuation studies have grown in terms of number of transactions and number of explanatory variables. We focus on the practical challenge of model reduction, when aiming for reliable parsimonious models, sensitive to omitted variable bias and multicollinearity. We...

  16. Analysis models for variables associated with breastfeeding duration

    Directory of Open Access Journals (Sweden)

    Edson Theodoro dos S. Neto

    2013-09-01

    Full Text Available OBJECTIVE To analyze the factors associated with breastfeeding duration using two statistical models. METHODS A population-based cohort study was conducted with 86 mothers and newborns from two areas primarily covered by the National Health System, with high rates of infant mortality, in Vitória, Espírito Santo, Brazil. Over 30 months, 67 (78%) children and mothers were visited seven times at home by trained interviewers, who filled out survey forms. Data on feeding and sucking habits and on socioeconomic and maternal characteristics were collected. Variables were analyzed by Cox regression models, considering duration of breastfeeding as the dependent variable, and by logistic regression (the dependent variable was the presence of a breastfeeding child at different post-natal ages). RESULTS In the logistic regression model, pacifier sucking (adjusted odds ratio: 3.4; 95%CI 1.2-9.55) and bottle feeding (adjusted odds ratio: 4.4; 95%CI 1.6-12.1) increased the chance of weaning a child before one year of age. Variables associated with breastfeeding duration in the Cox regression model were pacifier sucking (adjusted hazard ratio 2.0; 95%CI 1.2-3.3) and bottle feeding (adjusted hazard ratio 2.0; 95%CI 1.2-3.5). However, protective factors (maternal age and family income) differed between the two models. CONCLUSIONS Risk and protective factors associated with cessation of breastfeeding may be analyzed by different models of statistical regression. Cox regression models are adequate to analyze such factors in longitudinal studies.

  17. Quantifying inter- and intra-population niche variability using hierarchical bayesian stable isotope mixing models.

    Science.gov (United States)

    Semmens, Brice X; Ward, Eric J; Moore, Jonathan W; Darimont, Chris T

    2009-07-09

    Variability in resource use defines the width of a trophic niche occupied by a population. Intra-population variability in resource use may occur across hierarchical levels of population structure from individuals to subpopulations. Understanding how levels of population organization contribute to population niche width is critical to ecology and evolution. Here we describe a hierarchical stable isotope mixing model that can simultaneously estimate both the prey composition of a consumer diet and the diet variability among individuals and across levels of population organization. By explicitly estimating variance components for multiple scales, the model can deconstruct the niche width of a consumer population into relevant levels of population structure. We apply this new approach to stable isotope data from a population of gray wolves from coastal British Columbia, and show support for extensive intra-population niche variability among individuals, social groups, and geographically isolated subpopulations. The analytic method we describe improves mixing models by accounting for diet variability, and improves isotope niche width analysis by quantitatively assessing the contribution of levels of organization to the niche width of a population.
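
    A skeleton of such a hierarchical mixing model in symbols (notation and distributional choices assumed for illustration; the paper fits a richer model to gray wolf isotope data):

```latex
% Consumer i in group g mixes K prey sources with proportions p; letting
% the proportions vary across individuals, groups, and subpopulations is
% what allows niche width to be partitioned across levels of organization.
X_{ig} = \sum_{k=1}^{K} p_{igk}\,\mu_k + \varepsilon_{ig},
\qquad \varepsilon_{ig} \sim \mathcal{N}(0,\sigma^2),
\qquad p_{ig\cdot} \sim \mathrm{Dirichlet}(\boldsymbol{\alpha}_{g\cdot})
```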

  18. Solutions of two-factor models with variable interest rates

    Science.gov (United States)

    Li, Jinglu; Clemons, C. B.; Young, G. W.; Zhu, J.

    2008-12-01

    The focus of this work is on numerical solutions to two-factor option pricing partial differential equations with variable interest rates. Two interest rate models, the Vasicek model and the Cox-Ingersoll-Ross model (CIR), are considered. Emphasis is placed on the definition and implementation of boundary conditions for different portfolio models, and on appropriate truncation of the computational domain. An exact solution to the Vasicek model and an exact solution for the price of bonds convertible to stock at expiration under a stochastic interest rate are derived. The exact solutions are used to evaluate the accuracy of the numerical simulation schemes. For the numerical simulations the pricing solution is analyzed as the market completeness decreases from the ideal complete level to one with higher volatility of the interest rate and a slower mean-reverting environment. Simulations indicate that the CIR model yields more reasonable results than the Vasicek model in a less complete market.
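
    The two short-rate models named above, sketched with a plain Euler-Maruyama discretization (parameter values illustrative; the paper solves the pricing PDEs rather than simulating paths):

```python
import numpy as np

def simulate_short_rate(model, r0=0.03, kappa=1.0, theta=0.05,
                        sigma=0.1, T=1.0, n=1000, seed=5):
    """Euler-Maruyama path for the two short-rate models in the abstract:
    Vasicek: dr = kappa*(theta - r)dt + sigma dW
    CIR:     dr = kappa*(theta - r)dt + sigma*sqrt(r) dW
    """
    rng = np.random.default_rng(seed)
    dt = T / n
    r = np.empty(n + 1)
    r[0] = r0
    for i in range(n):
        dW = rng.normal(0.0, np.sqrt(dt))
        diff = sigma * (np.sqrt(max(r[i], 0.0)) if model == "CIR" else 1.0)
        r[i + 1] = r[i] + kappa * (theta - r[i]) * dt + diff * dW
    return r

print(simulate_short_rate("Vasicek")[-1], simulate_short_rate("CIR")[-1])
```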

  19. Two-step variable selection in quantile regression models

    Directory of Open Access Journals (Sweden)

    FAN Yali

    2015-06-01

    Full Text Available We propose a two-step variable selection procedure for high-dimensional quantile regressions, in which the dimension of the covariates, pn, is much larger than the sample size n. In the first step, we apply an l1 penalty, and we demonstrate that the first-step penalized estimator with the LASSO penalty can reduce the model from ultra-high dimensional to one whose size has the same order as that of the true model, and that the selected model can cover the true model. The second step excludes the remaining irrelevant covariates by applying the adaptive LASSO penalty to the reduced model obtained from the first step. Under some regularity conditions, we show that our procedure enjoys model selection consistency. We conduct a simulation study and a real data analysis to evaluate the finite sample performance of the proposed approach.
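
    A rough sketch of the two steps using scikit-learn's L1-penalized quantile regression, with the adaptive LASSO implemented by the usual coefficient-rescaling trick (penalty levels are illustrative; the paper's estimator and tuning differ):

```python
import numpy as np
from sklearn.linear_model import QuantileRegressor

rng = np.random.default_rng(6)
n, p = 200, 500                            # p >> n
X = rng.normal(size=(n, p))
y = 2 * X[:, 0] - 1.5 * X[:, 3] + rng.normal(0, 1, n)

# Step 1: LASSO-penalized median regression screens the covariates
step1 = QuantileRegressor(quantile=0.5, alpha=0.05).fit(X, y)
keep = np.flatnonzero(np.abs(step1.coef_) > 1e-8)

# Step 2: adaptive LASSO on the reduced model, via rescaling each kept
# column by |beta_1| so small first-step coefficients are penalized more
w = np.abs(step1.coef_[keep])
step2 = QuantileRegressor(quantile=0.5, alpha=0.05).fit(X[:, keep] * w, y)
final = keep[np.abs(step2.coef_) > 1e-8]
print("selected covariates:", final)
```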

  20. ASYMPTOTICS OF MEAN TRANSFORMATION ESTIMATORS WITH ERRORS IN VARIABLES MODEL

    Institute of Scientific and Technical Information of China (English)

    CUI Hengjian

    2005-01-01

    This paper addresses the estimation, and its asymptotics, of the mean transformation θ = E[h(X)] of a random variable X based on n i.i.d. observations from the errors-in-variables model Y = X + v, where v is a measurement error with a known distribution and h(.) is a known smooth function. The asymptotics of the deconvolution kernel estimator are given for ordinary smooth error distributions, and those of the expectation extrapolation estimator for normal error distributions. Under some mild regularity conditions, consistency and asymptotic normality are obtained for both types of estimators. Simulations show they have good performance.

  1. Multiple Discrete Endogenous Variables in Weakly-Separable Triangular Models

    Directory of Open Access Journals (Sweden)

    Sung Jae Jun

    2016-02-01

    Full Text Available We consider a model in which an outcome depends on two discrete treatment variables, where one treatment is given before the other. We formulate a three-equation triangular system with weak separability conditions. Without assuming assignment is random, we establish the identification of an average structural function using two-step matching. We also consider decomposing the effect of the first treatment into direct and indirect effects, which are shown to be identified by the proposed methodology. We allow for both of the treatment variables to be non-binary and do not appeal to an identification-at-infinity argument.

  2. A Review of Variable Slicing in Fused Deposition Modeling

    Science.gov (United States)

    Nadiyapara, Hitesh Hirjibhai; Pande, Sarang

    2016-06-01

    The paper presents a literature survey of fused deposition of plastic wires, especially slicing and deposition by extrusion of thermoplastic wires. Various researchers working on the computation of deposition paths have used algorithms for variable slicing. In this study, a flowchart is also proposed for the slicing and deposition process. An algorithm developed by a previous researcher is implemented on the fused deposition modeling machine. To demonstrate the capabilities of the machine, a case study has been taken; it uses manipulated G-code fed to the machine. Two slicing strategies, namely uniform slicing and variable slicing, have been evaluated. In uniform slicing, the slice thickness used for deposition varies from 0.1 to 0.4 mm. In variable slicing, the thickness varies from 0.1 mm in the polar region to 0.4 mm in the equatorial region. The time and number of slices required to deposit a hemisphere of 20 mm diameter have been compared between the two strategies.
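
    The polar-to-equatorial thickness range quoted above falls out of the standard cusp-height criterion for adaptive slicing; the sketch below assumes that criterion for a hemisphere (the surveyed algorithms may use other rules):

```python
import numpy as np

def hemisphere_slices(R=10.0, cusp=0.1, t_min=0.1, t_max=0.4):
    """Variable slicing of a hemisphere (radius R mm, flat side down)
    using the cusp-height criterion t = c / cos(phi), where phi is the
    angle between the surface normal and the build direction; for a
    hemisphere at height z, cos(phi) = z/R. Clipping to the printable
    range gives thin slices near the pole and thick ones near the
    equator (cf. the 0.1-0.4 mm range in the abstract)."""
    slices, z = [], 0.0
    while z < R:
        cos_phi = max(z / R, 1e-9)
        t = float(np.clip(cusp / cos_phi, t_min, t_max))
        slices.append(t)
        z += t
    return slices

t = hemisphere_slices()
print(len(t), "slices;", round(min(t), 2), "to", round(max(t), 2), "mm")
```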

  3. THREE-DIMENSIONAL VARIABLES ALLOCATION IN MESOSCALE MODELS

    Institute of Scientific and Technical Information of China (English)

    刘宇迪; 陆汉城

    2004-01-01

    Forecasts and simulations vary with the allocation of three-dimensional variables in mesoscale models, yet no attempt has been made to determine which three-dimensional variable distribution optimizes the simulation. On the basis of linear nonhydrostatic anelastic equations, this paper compares, mainly graphically, the computational dispersion against analytical solutions for four kinds of three-dimensional meshes commonly found in mesoscale models, in terms of frequency and of horizontal and vertical group velocities. The results indicate that the 3-D mesh C/CP has the best computational dispersion, followed by Z/LZ and Z/LY, with C/L performing worst. The C/CP mesh is therefore the most desirable allocation in the design of nonhydrostatic baroclinic models. The mesh has, however, larger errors for shorter horizontal wavelengths; for the simulation of smaller horizontal scales, the horizontal grid intervals have to be shortened to reduce the errors. Additionally, in view of the dominant use of the C/CP mesh in finite-difference models, it should be used in conjunction with the Z/LZ or Z/LY mesh if variables are allocated in spectral models.

  4. Nonlinear Dynamical Modeling and Forecast of ENSO Variability

    Science.gov (United States)

    Feigin, Alexander; Mukhin, Dmitry; Gavrilov, Andrey; Seleznev, Aleksey; Loskutov, Evgeny

    2017-04-01

    A new methodology for empirical modeling and forecast of nonlinear dynamical system variability [1] is applied to the study of the ENSO climate system. The methodology is based on two approaches: (i) nonlinear decomposition of data [2], which provides a low-dimensional embedding for further modeling, and (ii) construction of an empirical model in the form of a low-dimensional random dynamical ("stochastic") system [3]. Three monthly data sets are used for ENSO modeling and forecast: global sea surface temperature anomalies, troposphere zonal wind speed, and thermocline depth; all data sets are limited to 30°S-30°N and have a horizontal resolution of 10°×10°. We compare the results of optimal data decomposition as well as the prognostic skill of the constructed models for different combinations of the involved data sets. We also present a comparative analysis of ENSO index forecasts produced by our models and by the IRI/CPC ENSO Predictions Plume. [1] A. Gavrilov, D. Mukhin, E. Loskutov, A. Feigin, 2016: Construction of Optimally Reduced Empirical Model by Spatially Distributed Climate Data. 2016 AGU Fall Meeting, Abstract NG31A-1824. [2] D. Mukhin, A. Gavrilov, E. Loskutov, A. Feigin, J. Kurths, 2015: Principal nonlinear dynamical modes of climate variability, Scientific Reports, rep. 5, 15510; doi: 10.1038/srep15510. [3] Ya. Molkov, D. Mukhin, E. Loskutov, A. Feigin, 2012: Random dynamical models from time series. Phys. Rev. E, Vol. 85, n.3.

  5. Animal models of physiologic markers of male reproduction: genetically defined infertile mice

    Energy Technology Data Exchange (ETDEWEB)

    Chubb, C.

    1987-10-01

    The present report focuses on novel animal models of male infertility: genetically defined mice bearing single-gene mutations that induce infertility. The primary goal of the investigations was to identify the reproductive defects in these mutant mice. The phenotypic effects of the gene mutations were deciphered by comparing the mutant mice to their normal siblings. Initially, testicular steroidogenesis and spermatogenesis were investigated. The physiologic markers for testicular steroidogenesis were steroid secretion by testes perifused in vitro, seminal vesicle weight, and Leydig cell histology. Spermatogenesis was evaluated by the enumeration of homogenization-resistant sperm/spermatids in testes and by morphometric analyses of germ cells in the seminiferous epithelium. If testicular function appeared normal, the authors investigated the sexual behavior of the mice. The parameters of male sexual behavior that were quantified included mount latency, mount frequency, intromission latency, thrusts per intromission, ejaculation latency, and ejaculation duration. Females of pairs breeding under normal circumstances were monitored for the presence of vaginal plugs and pregnancies. The patency of the ejaculatory process was determined by quantifying sperm in the female reproductive tract after sexual behavior tests. Sperm function was studied by quantitatively determining sperm motility during videomicroscopic observation. Also, the ability of epididymal sperm to function within the uterine environment was analyzed by determining sperm capacity to initiate pregnancy after artificial insemination. Together, the experimental results permitted the grouping of the gene mutations into three general categories. The authors propose that the same biological markers used in the reported studies can be implemented in assessing the impact that environmental toxins may have on male reproduction.

  6. Decadal Variability of Clouds and Comparison with Climate Model Simulations

    Science.gov (United States)

    Su, H.; Shen, T. J.; Jiang, J. H.; Yung, Y. L.

    2014-12-01

    An apparent climate regime shift occurred around 1998/1999, when the steady increase of global-mean surface temperature appeared to enter a hiatus. Coherent decadal variations are found in atmospheric circulation and hydrological cycles. Using 30 years of cloud observations from the International Satellite Cloud Climatology Project, we examine the decadal variability of clouds and the associated cloud radiative effects on surface warming. Empirical Orthogonal Function (EOF) analysis is performed. After removing the seasonal cycle and the ENSO signal from the 30-year data, we find that the leading EOF modes clearly represent a decadal variability in cloud fraction, well correlated with the indices of the Pacific Decadal Oscillation (PDO) and the Atlantic Multidecadal Oscillation (AMO). The cloud radiative effects associated with decadal variations of clouds suggest a positive cloud feedback, which would reinforce the global warming hiatus through a net cloud cooling after 1998/1999. Climate model simulations driven by observed sea surface temperature are compared with the satellite-observed cloud decadal variability.
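
    The core EOF computation reduces to an SVD of the deseasonalized anomaly matrix; a minimal sketch on synthetic data (array sizes and the random field are placeholders for the satellite cloud-fraction record):

```python
import numpy as np

rng = np.random.default_rng(7)
field = rng.normal(size=(360, 500))        # 30 yr of months x grid cells
anom = field - field.mean(axis=0)          # remove the time mean

# EOFs are the right singular vectors; PCs are the scaled left ones
U, s, Vt = np.linalg.svd(anom, full_matrices=False)
pcs = U * s                                # principal component series
eofs = Vt                                  # spatial patterns
var_frac = s**2 / (s**2).sum()
print("variance explained by first 3 EOFs:", var_frac[:3].round(3))
# In the study, the leading PCs are then correlated with PDO/AMO indices.
```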

  7. Variable Star Signature Classification using Slotted Symbolic Markov Modeling

    CERN Document Server

    Johnston, Kyle B

    2016-01-01

    With the advent of digital astronomy, new benefits and new challenges have been presented to the modern-day astronomer. No longer can the astronomer rely on manual processing; instead, the profession as a whole has begun to adopt more advanced computational means. This paper focuses on the construction and application of a novel time-domain signature extraction methodology and the development of a supporting supervised pattern classification algorithm for the identification of variable stars. A methodology for the reduction of stellar variable observations (time-domain data) into a novel feature-space representation is introduced. The methodology presented will be referred to as Slotted Symbolic Markov Modeling (SSMM) and has a number of advantages which will be demonstrated to be beneficial, specifically for the supervised classification of stellar variables. It will be shown that the methodology outperformed a baseline standard methodology on a standardized set of stellar light curve data. The performance on ...

  8. Environmental versus demographic variability in stochastic predator-prey models

    Science.gov (United States)

    Dobramysl, U.; Täuber, U. C.

    2013-10-01

    In contrast to the neutral population cycles of the deterministic mean-field Lotka-Volterra rate equations, including spatial structure and stochastic noise in models for predator-prey interactions yields complex spatio-temporal structures associated with long-lived erratic population oscillations. Environmental variability in the form of quenched spatial randomness in the predation rates results in more localized activity patches. Our previous study showed that population fluctuations in rare favorable regions in turn cause a remarkable increase in the asymptotic densities of both predators and prey. Very intriguing features are found when variable interaction rates are affixed to individual particles rather than lattice sites. Stochastic dynamics with demographic variability in conjunction with inheritable predation efficiencies generate non-trivial time evolution for the predation rate distributions, yet with overall essentially neutral optimization.

  9. A new approach for modelling variability in residential construction projects

    Directory of Open Access Journals (Sweden)

    Mehrdad Arashpour

    2013-06-01

    Full Text Available The construction industry is plagued by long cycle times caused by variability in the supply chain. Variations or undesirable situations are the result of factors such as non-standard practices, work site accidents, inclement weather conditions and faults in design. This paper uses a new approach for modelling variability in construction by linking relative variability indicators to processes. Mass homebuilding sector was chosen as the scope of the analysis because data is readily available. Numerous simulation experiments were designed by varying size of capacity buffers in front of trade contractors, availability of trade contractors, and level of variability in homebuilding processes. The measurements were shown to lead to an accurate determination of relationships between these factors and production parameters. The variability indicator was found to dramatically affect the tangible performance measures such as home completion rates. This study provides for future analysis of the production homebuilding sector, which may lead to improvements in performance and a faster product delivery to homebuyers. 

  11. Modeling temporal and spatial variability of crop yield

    Science.gov (United States)

    Bonetti, S.; Manoli, G.; Scudiero, E.; Morari, F.; Putti, M.; Teatini, P.

    2014-12-01

    In a world of increasing food insecurity, the development of modeling tools capable of supporting on-farm decision-making processes is highly needed to formulate sustainable irrigation practices that preserve water resources while maintaining adequate crop yield. The design of these practices starts from accurate modeling of the soil-plant-atmosphere interaction. We present an innovative 3D soil-plant model that couples 3D hydrological soil dynamics with a mechanistic description of plant transpiration and photosynthesis, including a crop growth module. Because of its intrinsically three-dimensional nature, the model is able to capture spatial and temporal patterns of crop yield over large scales and under various climate and environmental factors. The model is applied to a 25 ha corn field in the Venice coastland, Italy, that was continuously monitored from 2010 to 2012 in terms of both hydrological dynamics and yield mapping. The model results satisfactorily reproduce the large variability observed in maize yield (from 2 to 15 ton/ha). This variability is shown to be connected to the spatial heterogeneities of the farmland, which is characterized by several sandy paleo-channels crossing organic-rich silty soils. Salt contamination of soils and groundwater in a large portion of the area strongly affects the crop yield, especially outside the paleo-channels, where measured salt concentrations are lower than in the surroundings. The developed model includes a simplified description of the effects of salt concentration in soil water on transpiration. The results seem to capture accurately the effects of salt concentration and the variability of the climatic conditions that occurred during the three years of measurements. This innovative modeling framework paves the way to future large-scale simulations of farmland dynamics.

  12. Latent variables and structural equation models for longitudinal relationships: an illustration in nutritional epidemiology

    Directory of Open Access Journals (Sweden)

    Basdevant Arnaud

    2010-04-01

    Full Text Available Abstract Background The use of structural equation modeling and latent variables remains uncommon in epidemiology despite its potential usefulness. The latter was illustrated by studying cross-sectional and longitudinal relationships between eating behavior and adiposity, using four different indicators of fat mass. Methods Using data from a longitudinal community-based study, we fitted structural equation models including two latent variables (baseline adiposity and adiposity change after 2 years of follow-up, respectively), each defined by the four following anthropometric measurements (respectively by their changes): body mass index, waist circumference, skinfold thickness and percent body fat. Latent adiposity variables were hypothesized to depend on a cognitive restraint score, calculated from answers to an eating-behavior questionnaire (TFEQ-18), either cross-sectionally or longitudinally. Results We found that high baseline adiposity was associated with a 2-year increase in the cognitive restraint score, and no convincing relationship between baseline cognitive restraint and 2-year adiposity change could be established. Conclusions The latent variable modeling approach enabled the presentation of synthetic results rather than separate regression models, and a detailed analysis of the causal effects of interest. In the general population, restrained eating appears to be an adaptive response of subjects prone to gaining weight, more than a risk factor for fat-mass increase.

  13. An inequality for correlations in unidimensional monotone latent variable models for binary variables.

    Science.gov (United States)

    Ellis, Jules L

    2014-04-01

    It is shown that a unidimensional monotone latent variable model for binary items implies a restriction on the relative sizes of item correlations: The negative logarithm of the correlations satisfies the triangle inequality. This inequality is not implied by the condition that the correlations are nonnegative, the criterion that coefficient H exceeds 0.30, or manifest monotonicity. The inequality implies both a lower bound and an upper bound for each correlation between two items, based on the correlations of those two items with every possible third item. It is discussed how this can be used in Mokken's (A theory and procedure of scale-analysis, Mouton, The Hague, 1971) scale analysis.
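
    In symbols (our notation; correlations are assumed strictly positive so the logarithms are defined):

```latex
% d_{ij} := -\log \rho_{ij} obeys the triangle inequality, which is
% equivalent to two-sided bounds on each correlation via any third item k.
d_{ij} \le d_{ik} + d_{kj}
\;\Longleftrightarrow\;
\rho_{ij} \ge \rho_{ik}\,\rho_{kj},
\qquad
d_{ik} \le d_{ij} + d_{jk}
\;\Longleftrightarrow\;
\rho_{ij} \le \frac{\rho_{ik}}{\rho_{jk}}
```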

  14. Intraseasonal Variability in an Aquaplanet General Circulation Model

    Directory of Open Access Journals (Sweden)

    Adam H Sobel

    2010-04-01

    Full Text Available An aquaplanet atmospheric general circulation model simulation with a robust intraseasonal oscillation is analyzed. The SST boundary condition resembles the observed December-April average with continents omitted, although with the meridional SST gradient reduced to one-quarter of that observed poleward of 10° latitude. Slow, regular eastward propagation at 5 m s⁻¹ in winds and precipitation with amplitude greater than that in the observed MJO is clearly identified in unfiltered fields. Local precipitation rate is a strongly non-linear and increasing function of column precipitable water, as in observations. The model intraseasonal oscillation resembles a moisture mode that is destabilized by wind-evaporation feedback, and that propagates eastward through advection of anomalous humidity by the sum of perturbation winds and mean westerly flow. A series of sensitivity experiments are conducted to test hypothesized mechanisms. A mechanism denial experiment in which intraseasonal latent heat flux variability is removed largely eliminates intraseasonal wind and precipitation variability. Reducing the lower-troposphere westerly flow in the warm pool by reducing the zonal SST gradient slows eastward propagation, supporting the importance of horizontal advection by the low-level wind to eastward propagation. A zonally symmetric SST basic state produces weak and unrealistic intraseasonal variability between 30 and 90 day timescales, indicating the importance of mean low-level westerly winds and hence a realistic phase relationship between precipitation and surface flux anomalies for producing realistic tropical intraseasonal variability.

  15. Classification criteria of syndromes by latent variable models

    DEFF Research Database (Denmark)

    Petersen, Janne

    2010-01-01

    The thesis has two parts: a clinical part, studying the dimensions of the human immunodeficiency virus associated lipodystrophy syndrome (HALS) by latent class models, and a more statistical part, investigating how to predict scores of latent variables so that these can be used in subsequent regression analyses. Part 1: HALS comprises different phenotypic changes of peripheral lipoatrophy and central lipohypertrophy. There are several different definitions of HALS and no consensus on the number of phenotypes. Many of the definitions consist of counting fulfilled criteria on markers and do not include patients' characteristics. These methods may erroneously reduce multiplicity either by combining markers of different phenotypes or by mixing HALS with other processes such as aging. Latent class models identify homogeneous groups of patients based on sets of variables, for example symptoms. As no gold...

  16. Explicit estimating equations for semiparametric generalized linear latent variable models

    KAUST Repository

    Ma, Yanyuan

    2010-07-05

    We study generalized linear latent variable models without requiring a distributional assumption of the latent variables. Using a geometric approach, we derive consistent semiparametric estimators. We demonstrate that these models have a property which is similar to that of a sufficient complete statistic, which enables us to simplify the estimating procedure and explicitly to formulate the semiparametric estimating equations. We further show that the explicit estimators have the usual root n consistency and asymptotic normality. We explain the computational implementation of our method and illustrate the numerical performance of the estimators in finite sample situations via extensive simulation studies. The advantage of our estimators over the existing likelihood approach is also shown via numerical comparison. We employ the method to analyse a real data example from economics. © 2010 Royal Statistical Society.

  17. Predictive modeling and reducing cyclic variability in autoignition engines

    Energy Technology Data Exchange (ETDEWEB)

    Hellstrom, Erik; Stefanopoulou, Anna; Jiang, Li; Larimore, Jacob

    2016-08-30

    Methods and systems are provided for controlling a vehicle engine to reduce cycle-to-cycle combustion variation. A predictive model is applied to predict cycle-to-cycle combustion behavior of an engine based on observed engine performance variables. Conditions are identified, based on the predicted cycle-to-cycle combustion behavior, that indicate high cycle-to-cycle combustion variation. Corrective measures are then applied to prevent the predicted high cycle-to-cycle combustion variation.

  18. Random spatial processes and geostatistical models for soil variables

    Science.gov (United States)

    Lark, R. M.

    2009-04-01

    Geostatistical models of soil variation have been used to considerable effect to facilitate efficient and powerful prediction of soil properties at unsampled sites or over partially sampled regions. Geostatistical models can also be used to investigate the scaling behaviour of soil process models, to design sampling strategies and to account for spatial dependence in the random effects of linear mixed models for spatial variables. However, most geostatistical models (variograms) are selected for reasons of mathematical convenience (in particular, to ensure positive definiteness of the corresponding variables). They assume some underlying spatial mathematical operator which may give a good description of observed variation of the soil, but which may not relate in any clear way to the processes that we know give rise to that observed variation in the real world. In this paper I shall argue that soil scientists should pay closer attention to the underlying operators in geostatistical models, with a view to identifying, wherever possible, operators that reflect our knowledge of processes in the soil. I shall illustrate how this can be done in the case of two problems. The first exemplar problem is the definition of operators to statistically represent processes in which the soil landscape is divided into discrete domains. This may occur at disparate scales, from the landscape (outcrops, catchments, fields with different landuse) to the soil core (aggregates, rhizospheres). The operators that underlie standard geostatistical models of soil variation typically describe continuous variation, and so do not offer any way to incorporate information on processes which occur in discrete domains. I shall present the Poisson Voronoi Tessellation as an alternative spatial operator, examine its corresponding variogram, and apply these to some real data. The second exemplar problem arises from different operators that are equifinal with respect to the variograms of the

  19. Computational Fluid Dynamics Modeling of a Supersonic Nozzle and Integration into a Variable Cycle Engine Model

    Science.gov (United States)

    Connolly, Joseph W.; Friedlander, David; Kopasakis, George

    2015-01-01

    This paper covers the development of an integrated nonlinear dynamic simulation for a variable cycle turbofan engine and nozzle that can be integrated with an overall vehicle Aero-Propulso-Servo-Elastic (APSE) model. A previously developed variable cycle turbofan engine model is used for this study and is enhanced here to include variable guide vanes, allowing for operation across the supersonic flight regime. The primary focus of this study is to improve the fidelity of the model's thrust response by replacing the simple choked-flow-equation convergent-divergent nozzle model with a quasi-1D model based on the MacCormack method. The dynamic response of the nozzle model using the MacCormack method is verified by comparing it against a model of the nozzle using the conservation element/solution element method. A methodology is also presented for the integration of the MacCormack nozzle model with the variable cycle engine.
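
    To illustrate the MacCormack predictor-corrector structure without reproducing the full quasi-1D nozzle equations, here is the scheme applied to the inviscid Burgers equation (our toy stand-in, not the paper's model):

```python
import numpy as np

# MacCormack scheme on u_t + (u^2/2)_x = 0 with periodic boundaries.
nx = 200
dx = 1.0 / nx
dt = 0.002                                  # CFL ~ 0.6 for |u| <= 1.5
x = np.linspace(0.0, 1.0 - dx, nx)
u = 1.0 + 0.5 * np.sin(2 * np.pi * x)       # smooth initial condition

def flux(u):
    return 0.5 * u ** 2

for _ in range(100):
    # predictor: forward difference of the flux
    u_pred = u - dt / dx * (flux(np.roll(u, -1)) - flux(u))
    # corrector: backward difference using the predicted values
    u = 0.5 * (u + u_pred
               - dt / dx * (flux(u_pred) - flux(np.roll(u_pred, 1))))

print("mass (conserved by the scheme):", (u * dx).sum())
```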

  20. Attributing Sources of Variability in Regional Climate Model Experiments

    Science.gov (United States)

    Kaufman, C. G.; Sain, S. R.

    2008-12-01

    Variability in regional climate model (RCM) projections may be due to a number of factors, including the choice of RCM itself, the boundary conditions provided by a driving general circulation model (GCM), and the choice of emission scenario. We describe a new statistical methodology, Gaussian Process ANOVA, which allows us to decompose these sources of variability while also taking account of correlations in the output across space. Our hierarchical Bayesian framework easily allows joint inference about high probability envelopes for the functions, as well as decompositions of total variance that vary over the domain of the functions. These may be used to create maps illustrating the magnitude of each source of variability across the domain of the regional model. We use this method to analyze temperature and precipitation data from the Prudence Project, an RCM intercomparison project in which RCMs were crossed with GCM forcings and scenarios in a designed experiment. This work was funded by the North American Regional Climate Change Assessment Program (NARCCAP).

  1. Modeling KIC10684673 and KIC12216817 as Single Pulsating Variables

    CERN Document Server

    Turner, Garrison

    2016-01-01

    The raw light curves of both KIC 10684673 and KIC 12216817 show variability. Both are listed in the Kepler Eclipsing Binary Catalog (hereafter KEBC); however, both are flagged as uncertain in nature. In the present study we show that their light curves can be modeled by considering each target as a single, multi-modal delta Scuti pulsator. While this does not exclude the possibility that these are eclipsing systems, we argue that, as spectroscopy of the systems is still lacking, the delta Scuti model is the simpler explanation and therefore more probable.

  2. Viscous Dark Energy Models with Variable G and Λ

    Institute of Scientific and Technical Information of China (English)

    Arbab I. Arbab

    2008-01-01

    We consider a cosmological model with bulk viscosity η and variable cosmological constant Λ ∝ ρ^{-α} (α = const) and gravitational constant G. The model exhibits many interesting cosmological features. Inflation proceeds due to the presence of bulk viscosity and dark energy, without requiring the equation of state p = -ρ. During the inflationary era the energy density ρ does not remain constant, as in the de Sitter case. Moreover, the cosmological and gravitational constants increase exponentially with time, whereas the energy density and viscosity decrease exponentially with time. The rate of mass creation during inflation is found to be very large, suggesting that all matter in the universe is created during inflation.

  3. Comparison of multiaxial fatigue damage models under variable amplitude loading

    Energy Technology Data Exchange (ETDEWEB)

    Chen, Hong; Shang, De Guang; Tian, Yu Jie [Beijing Univ. of Technology, Beijing (China); Liu, Jian Zhong [Beijing Institute of Aeronautical Materials, Beijing (China)

    2012-11-15

    Based on the cycle counting method of Wang and Brown and on Miner's linear damage accumulation rule, four multiaxial fatigue damage models without any weight factors, proposed by Pan et al., Varvani-Farahani, Shang and Wang, and Shang et al., are used to compute fatigue damage. The procedure is evaluated using low cycle fatigue experimental data for 7050-T7451 aluminum alloy and En15R steel under variable amplitude tension/torsion loading. The results reveal that the procedure is convenient for engineering design and application, and that the four multiaxial fatigue damage models provide good life estimates.
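
    The accumulation rule shared by all four models is Miner's linear sum, in its standard form:

```latex
% n_i cycles applied at a load level whose fatigue life is N_i each
% contribute n_i / N_i to the damage; failure is predicted at D = 1.
D = \sum_i \frac{n_i}{N_i}, \qquad \text{failure when } D \ge 1
```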

  4. Modeling Surgery: A New Way Toward Understanding Earth Climate Variability

    Institute of Scientific and Technical Information of China (English)

    WU Lixin; LIU Zhengyu; Robert Gallimore; Michael Notaro; Robert Jacob

    2005-01-01

    A new modeling concept, referred to as Modeling Surgery, has recently been developed at the University of Wisconsin-Madison. It is specifically designed to diagnose coupled feedbacks between different climate components, as well as climatic teleconnections within a specific component, through systematically modifying the coupling configurations and teleconnective pathways. It thus provides a powerful means for identifying the causes and mechanisms of low-frequency variability in the Earth's climate system. In this paper, we will give a short review of our recent progress in this new area.

  5. A model for Faraday pilot-waves over variable topography

    Science.gov (United States)

    Faria, Luiz

    2016-11-01

    In 2005 Yves Couder and co-workers discovered that droplets walking on a vibrating bath possess certain features previously thought to be exclusive to quantum systems. These millimetric droplets synchronize with their Faraday wavefield, creating a macroscopic pilot-wave system. In this talk we exploit the fact that the waves generated are nearly monochromatic and propose a hydrodynamic model capable of capturing the interaction between bouncing drops and a variable topography. We show that our model is able to reproduce some important experiments involving the drop-topography interaction, such as non-specular reflection and single-slit diffraction.

  6. A model for Faraday pilot waves over variable topography

    Science.gov (United States)

    Faria, Luiz M.

    2017-01-01

    Couder and Fort discovered that droplets walking on a vibrating bath possess certain features previously thought to be exclusive to quantum systems. These millimetric droplets synchronize with their Faraday wavefield, creating a macroscopic pilot-wave system. In this paper we exploit the fact that the waves generated are nearly monochromatic and propose a hydrodynamic model capable of quantitatively capturing the interaction between bouncing drops and a variable topography. We show that our reduced model is able to reproduce some important experiments involving the drop-topography interaction, such as non-specular reflection and single-slit diffraction.

  7. Sensitivity Analysis of the ALMANAC Model's Input Variables

    Institute of Scientific and Technical Information of China (English)

    XIE Yun; James R.Kiniry; Jimmy R.Williams; CHEN You-min; LIN Er-da

    2002-01-01

    Crop models often require extensive input data sets to realistically simulate crop growth. Development of such input data sets can be difficult for some model users. The objective of this study was to evaluate the importance of variables in input data sets for crop modeling. Based on published hybrid performance trials in eight Texas counties, we developed standard data sets of 10-year simulations of maize and sorghum for these eight counties with the ALMANAC (Agricultural Land Management Alternatives with Numerical Assessment Criteria) model. The simulation results were close to the measured county yields, with a relative error of only 2.6% for maize and -0.6% for sorghum. We then analyzed the sensitivity of grain yield to solar radiation, rainfall, soil depth, soil plant available water, and runoff curve number, comparing simulated yields to those obtained with the original, standard data sets. Changes in runoff curve number had the greatest impact on simulated maize and sorghum yields for all the counties. The next most critical input was rainfall, and then solar radiation, for both maize and sorghum, especially under the dryland condition. For irrigated sorghum, solar radiation was the second most critical input instead of rainfall. The degree of sensitivity of yield to all variables was larger for maize than for sorghum, except for solar radiation. Many models use a USDA curve number approach to represent soil water redistribution, so it will be important to have accurate curve numbers, rainfall, and soil depth to realistically simulate yields.
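
    The one-at-a-time perturbation logic behind such a sensitivity analysis can be sketched as follows; `simulate_yield` is a hypothetical toy surrogate standing in for an ALMANAC run, and all baseline input values are illustrative.

    ```python
    def simulate_yield(radiation, rainfall, soil_depth, paw, curve_number):
        # toy surrogate, not ALMANAC: runoff rises with curve number, and
        # yield responds smoothly to radiation and plant-available water
        runoff = max(0.0, 0.02 * curve_number - 1.0)
        eff_rain = rainfall * (1.0 - runoff)
        stored = paw * soil_depth
        water = eff_rain * stored / (eff_rain + stored)  # smooth co-limitation
        return 0.05 * radiation + 0.04 * water

    base = dict(radiation=18.0, rainfall=600.0, soil_depth=1.5,
                paw=120.0, curve_number=78.0)
    y0 = simulate_yield(**base)

    for name in base:                                    # one-at-a-time +10%
        bumped = dict(base, **{name: base[name] * 1.10})
        dy = simulate_yield(**bumped) - y0
        print(f"{name:12s} +10% -> yield change {dy:+.3f}")
    ```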

  8. Challenge and Urgency in Defining Doctoral Education in Marriage and Family Therapy: Valuing Complementary Models

    Science.gov (United States)

    Wampler, Karen S.

    2010-01-01

    In this overview, I comment on the strong theme of the need to define and improve the quality of doctoral education in marriage and family therapy that pervades the three essays. Deficits in research training are the central concern, although the essayists take different perspectives on the nature of the research training needed. The different…

  9. Disambiguating Seesaw Models using Invariant Mass Variables at Hadron Colliders

    CERN Document Server

    Dev, P S Bhupal; Mohapatra, Rabindra N

    2015-01-01

    We propose ways to distinguish between different mechanisms behind the collider signals of TeV-scale seesaw models for neutrino masses using kinematic endpoints of invariant mass variables. We particularly focus on two classes of such models widely discussed in literature: (i) Standard Model extended by the addition of singlet neutrinos and (ii) Left-Right Symmetric Models. Relevant scenarios involving the same "smoking-gun" collider signature of dilepton plus dijet with no missing transverse energy differ from one another by their event topology, resulting in distinctive relationships among the kinematic endpoints to be used for discerning them at hadron colliders. These kinematic endpoints are readily translated to the mass parameters of the on-shell particles through simple analytic expressions which can be used for measuring the masses of the new particles. A Monte Carlo simulation with detector effects is conducted to test the viability of the proposed strategy in a realistic environment. Finally, we dis...

  10. Modelling of W UMa-type variable stars

    Directory of Open Access Journals (Sweden)

    P. L. Skelton

    2010-01-01

    W Ursae Majoris (W UMa)-type variable stars are over-contact eclipsing binary stars. Understanding how these systems form and evolve requires observations spanning many years, followed by detailed models of as many of them as possible. The All Sky Automated Survey (ASAS) has an extensive database of these stars. Using the ASAS V-band photometric data, models of W UMa-type stars are being created to determine the parameters of these stars. This paper discusses the classification of eclipsing binary stars, the methods used to model them, and the results of the modelling of ASAS 120036–3915.6, an over-contact eclipsing binary star that appears to be changing its period.

  11. Comparative Analysis of Visco-elastic Models with Variable Parameters

    Directory of Open Access Journals (Sweden)

    Silviu Nastac

    2010-01-01

    The paper presents a theoretical comparative study of the computational behaviour of vibration isolation elements based on viscous and elastic models with variable parameters. The changes of the elastic and viscous parameters can be produced by natural timed evolution demotion or by heating developed in the elements during their working cycle. Both linear and non-linear numerical viscous and elastic models, and their combinations, were considered. The results show the importance of tuning the numerical model to the real behaviour, in terms of the linearity of the characteristics and the essential parameters of damping and rigidity. Multiple comparisons between linear and non-linear simulation cases support the optimization of numerical models with respect to mathematical complexity versus reliability of results.

  12. Variable structure control of nonlinear systems through simplified uncertain models

    Science.gov (United States)

    Sira-Ramirez, Hebertt

    1986-01-01

    A variable structure control approach is presented for the robust stabilization of feedback equivalent nonlinear systems whose proposed model lies in the same structural orbit of a linear system in Brunovsky's canonical form. An attempt to linearize exactly the nonlinear plant on the basis of the feedback control law derived for the available model results in a nonlinearly perturbed canonical system for the expanded class of possible equivalent control functions. Conservatism tends to grow as modeling errors become larger. In order to preserve the internal controllability structure of the plant, it is proposed that model simplification be carried out on the open-loop-transformed system. As an example, a controller is developed for a single link manipulator with an elastic joint.

  13. ORGANIZING SCENARIO VARIABLES BY APPLYING THE INTERPRETATIVE STRUCTURAL MODELING (ISM

    Directory of Open Access Journals (Sweden)

    Daniel Estima de Carvalho

    2009-10-01

    The scenario building method is a mode of thought, executed in an optimized, strategic manner, based on trends and uncertain events, concerning a large variety of potential outcomes that may impact the future of an organization. In this study, the objective is to contribute towards a possible improvement of Godet's and Schoemaker's scenario preparation methods by employing Interpretative Structural Modeling (ISM) as a tool for the analysis of variables. Given the exploratory nature of the theme, bibliographical research with tool definition and analysis, the extraction of examples from the literature, and a comparison exercise of the referred methods were undertaken. It was verified that ISM may substitute or complement the original tools for the analysis of scenario variables in Godet's and Schoemaker's methods, given that it enables an in-depth analysis of the relations between variables in a shorter period of time, facilitating both the structuring and the construction of possible scenarios. Keywords: Strategy. Future studies. Interpretative Structural Modeling.

  14. GEOCHEMICAL MODELING OF F AREA SEEPAGE BASIN COMPOSITION AND VARIABILITY

    Energy Technology Data Exchange (ETDEWEB)

    Millings, M.; Denham, M.; Looney, B.

    2012-05-08

    From the 1950s through 1989, the F Area Seepage Basins at the Savannah River Site (SRS) received low-level radioactive wastes resulting from the processing of nuclear materials. Discharges of process wastes to the F Area Seepage Basins, followed by subsequent mixing processes within the basins and eventual infiltration into the subsurface, resulted in contamination of the underlying vadose zone and downgradient groundwater. For simulating contaminant behavior and subsurface transport, a quantitative understanding of the interrelated discharge-mixing-infiltration system, along with the resulting chemistry of fluids entering the subsurface, is needed. An example of this need emerged when the F Area Seepage Basins were selected as a key case study demonstration site for the Advanced Simulation Capability for Environmental Management (ASCEM) Program. This modeling evaluation explored the importance of the wide variability in bulk wastewater chemistry as it propagated through the basins. The results are intended to generally improve and refine the conceptualization of the infiltration of chemical wastes from seepage basins receiving variable waste streams, and specifically to support the ASCEM case study model for the F Area Seepage Basins. Specific goals of this work included: (1) develop a technically based, charge-balanced nominal source term chemistry for water infiltrating into the subsurface during basin operations; (2) estimate the nature of short-term and long-term variability in infiltrating water to support scenario development for uncertainty quantification (UQ) analysis; (3) identify key geochemical factors that control overall basin water chemistry and the projected variability/stability; and (4) link wastewater chemistry to the subsurface based on monitoring well data. Results from this study provide data and understanding that can be used in further modeling efforts of the F Area groundwater plume. As identified in this study, key geochemical factors

  15. Defining a Simulation Capability Hierarchy for the Modeling of a SeaBase Enabler (SBE)

    Science.gov (United States)

    2010-09-01

    [Abstract not available. The record consists of report documentation page fragments and excerpts of Java simulation code, including an Adjudicator set up as a SimEventListener on a hostile sensor and a TCraft class extending BasicLinearMover with a protected state variable monitoring the cargo carried.]

  16. Bayesian nonparametric centered random effects models with variable selection.

    Science.gov (United States)

    Yang, Mingan

    2013-03-01

    In a linear mixed effects model, it is common practice to assume that the random effects follow a parametric distribution such as a normal distribution with mean zero. However, in the case of variable selection, substantial violation of the normality assumption can potentially impact the subset selection and result in poor interpretation and even incorrect results. In nonparametric random effects models, the random effects generally have a nonzero mean, which causes an identifiability problem for the fixed effects that are paired with the random effects. In this article, we focus on a Bayesian method for variable selection. We characterize the subject-specific random effects nonparametrically with a Dirichlet process and resolve the bias simultaneously. In particular, we propose flexible modeling of the conditional distribution of the random effects with changes across the predictor space. The approach is implemented using a stochastic search Gibbs sampler to identify subsets of fixed effects and random effects to be included in the model. Simulations are provided to evaluate and compare the performance of our approach to the existing ones. We then apply the new approach to a real data example, cross-country and interlaboratory rodent uterotrophic bioassay.

  17. Shared Variable Oriented Parallel Precompiler for SPMD Model

    Institute of Scientific and Technical Information of China (English)

    1995-01-01

    At present, commercial parallel computer systems with distributed memory architecture are usually provided with parallel FORTRAN or parallel C compilers, which are just traditional sequential FORTRAN or C compilers extended with communication statements. Programmers suffer from writing parallel programs with communication statements. The Shared Variable Oriented Parallel Precompiler (SVOPP) proposed in this paper can automatically generate appropriate communication statements based on shared variables for the SPMD (Single Program Multiple Data) computation model and greatly eases parallel programming with high communication efficiency. The core function of the parallel C precompiler has been successfully verified on a transputer-based parallel computer. Its prominent performance shows that SVOPP is probably a breakthrough in parallel programming technique.

  18. Crack simulation models in variable amplitude loading - a review

    Directory of Open Access Journals (Sweden)

    Luiz Carlos H. Ricardo

    2016-02-01

    This work presents a review of crack propagation simulation models considering plane stress and plane strain conditions. It also presents, in chronological order, the different methodologies used to perform the crack advance by the finite element method. Some procedures used to edit variable spectrum loading, and the effects of this editing on crack propagation processes, such as retardation, on the fatigue life of structures are discussed. Based on this work, there is no consensus in the scientific community on the best way to simulate crack propagation under variable spectrum loading, owing to the combination of metallurgical and mechanical factors involved, for example in how to select and edit the representative spectrum loading to be used in the crack propagation simulation.
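
    The simplest crack-advance scheme surveyed by such reviews is cycle-by-cycle integration of a Paris-type growth law with no retardation model; the sketch below assumes illustrative material constants and an edge-crack geometry factor.

    ```python
    import math

    C, m = 1e-11, 3.0     # Paris constants, da/dN = C * (dK)^m (illustrative)
    a = 0.001             # initial crack length, m

    blocks = [80e6, 120e6, 80e6] * 1000   # variable amplitude stress ranges, Pa
    for d_sigma in blocks:
        dK = 1.12 * d_sigma * math.sqrt(math.pi * a)  # edge-crack geometry factor
        a += C * (dK * 1e-6) ** m                     # dK converted to MPa*sqrt(m)

    print(f"crack length after {len(blocks)} cycles: {a * 1000:.4f} mm")
    ```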

  19. Modeling SEPs and Their Variability in the Inner Heliosphere

    Science.gov (United States)

    Mays, M. L.; Luhmann, J. G.; Odstrcil, D.; Schwadron, N.; Gorby, M.; Bain, H. M.; Mewaldt, R. A.; Gold, R. E.

    2015-12-01

    In preparation for Solar Probe Plus and Solar Orbiter we consider a series of SEP modeling experiments based on the global MHD WSA-ENLIL model. The models include the Solar Energetic Particle Model (SEPMOD) (Luhmann et al., 2007; 2010) and the Earth-Moon-Mars Radiation Environment Module (EMMREM) (Schwadron et al., 2010). WSA-ENLIL provides a time-dependent background heliospheric description, including CME-like clouds which can generate shocks during their propagation. SEPMOD makes use of the ENLIL-provided magnetic topologies of observer-connected magnetic field lines and all plasma and shock properties along those field lines. The model injects protons onto a sequence of observer field lines at intensities dependent on the strength of the connected shock source; these are then integrated at the observer to approximate the proton flux. EMMREM couples with MHD models such as ENLIL and computes energetic particle distributions based on the focused transport equation along a Lagrangian grid of nodes that propagate out with the solar wind. In this presentation we compare SEP modeling results with data, and consider SEP variability in longitude and latitude. Additionally, we study the relative importance of observer connectivity to the solar source and shock locations, as derived from ENLIL. We evaluate the shock geometry and compare model-derived shock parameters with those observed. Finally, we test the effect of the seed population on the resulting profiles.

  20. Testing biomechanical models of human lumbar lordosis variability.

    Science.gov (United States)

    Castillo, Eric R; Hsu, Connie; Mair, Ross W; Lieberman, Daniel E

    2017-05-01

    Lumbar lordosis (LL) is a key adaptation for bipedalism, but the factors underlying curvature variations remain unclear. This study tests three biomechanical models to explain LL variability. Thirty adults (15 male, 15 female) were scanned using magnetic resonance imaging (MRI), a standing posture analysis was conducted, and lumbar range of motion (ROM) was assessed. Three measures of LL were compared. The trunk's center of mass was estimated from external markers to calculate hip moments (Mhip) and lumbar flexion moments. Cross-sectional areas of lumbar vertebral bodies and trunk muscles were measured from scans. Regression models tested associations between LL and the Mhip moment arm, a beam bending model, and an interaction between relative trunk strength (RTS) and ROM. Hip moments were not associated with LL. Beam bending was moderately predictive of standing but not supine LL (R² = 0.25). Stronger backs and increased ROM were associated with greater LL, especially when standing (R² = 0.65). The strength-flexibility model demonstrates the differential influence of RTS depending on ROM: individuals with high ROM exhibited the most LL variation with RTS, while those with low ROM showed reduced LL regardless of RTS. Hip moments appear constrained, suggesting the possibility of selection, and the beam model explains some LL variability due to variations in trunk geometry. The strength-flexibility interaction best predicted LL, suggesting a tradeoff in which ROM limits the effects of back strength on LL. The strength-flexibility model may have clinical relevance for spinal alignment and pathology. This model may also suggest that straight-backed Neanderthals had reduced lumbar mobility. © 2017 Wiley Periodicals, Inc.
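
    At its core, the strength-flexibility model is a regression with an RTS × ROM product term. A minimal sketch with simulated placeholder data (not the study's measurements):

    ```python
    import numpy as np

    rng = np.random.default_rng(1)
    n = 30
    rts = rng.normal(1.0, 0.2, n)     # relative trunk strength (placeholder)
    rom = rng.normal(40.0, 8.0, n)    # lumbar range of motion, deg (placeholder)
    ll = 30 + 5 * rts + 0.2 * rom + 0.4 * rts * rom + rng.normal(0, 2, n)

    X = np.column_stack([np.ones(n), rts, rom, rts * rom])  # interaction design
    beta, *_ = np.linalg.lstsq(X, ll, rcond=None)
    r2 = 1 - ((ll - X @ beta) ** 2).sum() / ((ll - ll.mean()) ** 2).sum()
    print("coefficients:", beta.round(2), " R^2 =", round(r2, 2))
    ```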

  1. Influence of climate model variability on projected Arctic shipping futures

    Science.gov (United States)

    Stephenson, Scott R.; Smith, Laurence C.

    2015-11-01

    Though climate models broadly agree on key long-term trends, they have significant temporal and spatial differences due to intermodel variability. Such variability should be considered when using climate models to project the future marine Arctic. Here we present multiple scenarios of 21st-century Arctic marine access as driven by sea ice output from 10 CMIP5 models known to represent well the historical trend and climatology of Arctic sea ice. Optimal vessel transits from North America and Europe to the Bering Strait are estimated for two periods representing early-century (2011-2035) and mid-century (2036-2060) conditions under two forcing scenarios (RCP 4.5/8.5), assuming Polar Class 6 and open-water vessels with medium and no ice-breaking capability, respectively. Results illustrate that the projected shipping viability of the Northern Sea Route (NSR) and Northwest Passage (NWP) depends critically on model choice. The eastern Arctic will remain the most reliably accessible marine space for trans-Arctic shipping by mid-century, while outcomes for the NWP are particularly model-dependent. Omitting three models (GFDL-CM3, MIROC-ESM-CHEM, and MPI-ESM-MR), our results would indicate minimal NWP potential even for routes from North America. Furthermore, the relative importance of the NSR will diminish over time as the number of viable central Arctic routes increases gradually toward mid-century. Compared to vessel class, climate forcing plays a minor role. These findings reveal the importance of model choice in devising projections for strategic planning by governments, environmental agencies, and the global maritime industry.
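
    The optimal-transit estimation can be illustrated as a shortest-path search over a sea-ice grid, with cells above a vessel-dependent ice-concentration threshold treated as impassable. The toy grid, threshold, and 4-connected moves below are assumptions for illustration, not the study's routing algorithm.

    ```python
    import heapq

    def shortest_transit(ice, start, goal, max_ice):
        """Dijkstra over a grid; cells with ice concentration > max_ice are closed."""
        rows, cols = len(ice), len(ice[0])
        dist = {start: 0.0}
        pq = [(0.0, start)]
        while pq:
            d, (r, c) = heapq.heappop(pq)
            if (r, c) == goal:
                return d
            if d > dist.get((r, c), float("inf")):
                continue
            for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                nr, nc = r + dr, c + dc
                if 0 <= nr < rows and 0 <= nc < cols and ice[nr][nc] <= max_ice:
                    nd = d + 1.0
                    if nd < dist.get((nr, nc), float("inf")):
                        dist[(nr, nc)] = nd
                        heapq.heappush(pq, (nd, (nr, nc)))
        return None  # no viable route for this vessel class

    ice = [[0.1, 0.2, 0.9],
           [0.3, 0.8, 0.4],
           [0.2, 0.3, 0.1]]          # toy ice-concentration grid from one model
    print(shortest_transit(ice, (0, 0), (2, 2), max_ice=0.5))  # open-water vessel
    ```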

  2. Niche variability and its consequences for species distribution modeling.

    Directory of Open Access Journals (Sweden)

    Matt J Michel

    When species distribution models (SDMs) are used to predict how a species will respond to environmental change, an important assumption is that the environmental niche of the species is conserved over evolutionary time-scales. Empirical studies conducted at ecological time-scales, however, demonstrate that the niche of some species can vary in response to environmental change. We use habitat and locality data of five species of stream fishes collected across seasons to examine the effects of niche variability on the accuracy of projections from Maxent, a popular SDM. We then compare these predictions to those from an alternate method of creating SDM projections in which a transformation of the environmental data to similar scales is applied. The niche of each species varied to some degree in response to seasonal variation in environmental variables, with most species shifting habitat use in response to changes in canopy cover or flow rate. SDMs constructed from the original environmental data accurately predicted the occurrences of one species across all seasons and a subset of seasons for two other species. A similar result was found for SDMs constructed from the transformed environmental data. However, the transformed SDMs produced better models in ten of the 14 total SDMs, as judged by ratios of mean probability values at known presences to mean probability values at all other locations. Niche variability should be an important consideration when using SDMs to predict future distributions of species because of its prevalence among natural populations. The framework we present here may potentially improve these predictions by accounting for such variability.

  3. Multi-Variable Model-Based Parameter Estimation Model for Antenna Radiation Pattern Prediction

    Science.gov (United States)

    Deshpande, Manohar D.; Cravey, Robin L.

    2002-01-01

    A new procedure is presented to develop a multi-variable model-based parameter estimation (MBPE) model to predict the far-field intensity of an antenna. By performing the MBPE model development procedure on one variable at a time, the present method requires the solution of smaller matrices. The utility of the present method is demonstrated by determining the far-field intensity due to a dipole antenna over a frequency range of 100-1000 MHz and an elevation angle range of 0-90 degrees.
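
    MBPE typically represents the sampled response as a rational function. A single-variable sketch, fitting P(x)/Q(x) by linear least squares; the orders and the synthetic resonance-like curve are illustrative, not the paper's antenna problem.

    ```python
    import numpy as np

    def fit_rational(x, y, p_order=2, q_order=2):
        # solve y * Q(x) = P(x) in least squares, with Q's constant term fixed to 1
        cols = [x**k for k in range(p_order + 1)]            # P coefficients
        cols += [-y * x**k for k in range(1, q_order + 1)]   # Q coefficients
        coef, *_ = np.linalg.lstsq(np.column_stack(cols), y, rcond=None)
        p = coef[:p_order + 1]
        q = np.concatenate([[1.0], coef[p_order + 1:]])
        return p, q

    x = np.linspace(0.1, 1.0, 12)              # sparse samples of the response
    y = 1.0 / (1.0 + 4.0 * (x - 0.5) ** 2)     # synthetic resonance-like curve
    p, q = fit_rational(x, y)
    yhat = np.polyval(p[::-1], x) / np.polyval(q[::-1], x)
    print("max fit error:", float(np.abs(yhat - y).max()))
    ```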

  4. Variable selection for distribution-free models for longitudinal zero-inflated count responses.

    Science.gov (United States)

    Chen, Tian; Wu, Pan; Tang, Wan; Zhang, Hui; Feng, Changyong; Kowalski, Jeanne; Tu, Xin M

    2016-07-20

    Zero-inflated count outcomes arise quite often in research and practice. Parametric models such as the zero-inflated Poisson and zero-inflated negative binomial are widely used to model such responses. Like most parametric models, they are quite sensitive to departures from assumed distributions. Recently, new approaches have been proposed to provide distribution-free, or semi-parametric, alternatives. These methods extend the generalized estimating equations to provide robust inference for population mixtures defined by zero-inflated count outcomes. In this paper, we propose methods to extend smoothly clipped absolute deviation (SCAD)-based variable selection methods to these new models. Variable selection has been gaining popularity in modern clinical research studies, as determining differential treatment effects of interventions for different subgroups has become the norm, rather than the exception, in the era of patient-centered outcome research. Such moderation analysis in general creates many explanatory variables in regression analysis, and the advantages of SCAD-based methods over their traditional counterparts render them a great choice for addressing this important and timely issue in clinical research. We illustrate the proposed approach with both simulated and real study data. Copyright © 2016 John Wiley & Sons, Ltd.
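
    The SCAD penalty underlying the proposed selection method has a standard closed form (Fan and Li, 2001), sketched below with illustrative parameter values.

    ```python
    import numpy as np

    def scad_penalty(beta, lam, a=3.7):
        """SCAD penalty of Fan and Li (2001), evaluated elementwise."""
        b = np.abs(beta)
        quad = (2 * a * lam * b - b**2 - lam**2) / (2 * (a - 1))
        flat = lam**2 * (a + 1) / 2
        return np.where(b <= lam, lam * b, np.where(b <= a * lam, quad, flat))

    coefs = np.array([-2.0, -0.3, 0.0, 0.1, 1.5])
    print(scad_penalty(coefs, lam=0.5))  # flat beyond a*lam, linear near zero
    ```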

  5. Variability modes in core flows inverted from geomagnetic field models

    CERN Document Server

    Pais, Maria A; Schaeffer, Nathanaël

    2014-01-01

    We use flows that we invert from two geomagnetic field models spanning centennial time periods (gufm1 and COV-OBS), and apply Principal Component Analysis and Singular Value Decomposition of coupled fields to extract the main modes characterizing their spatial and temporal variations. The quasi-geostrophic flows inverted from both geomagnetic field models show similar features. However, COV-OBS has a less energetic mean flow and larger time variability. The statistical significance of flow components is tested from analyses performed on subareas of the whole domain. Bootstrapping methods are also used to extract robust flow features required by both gufm1 and COV-OBS. Three main empirical circulation modes emerge, simultaneously constrained by both geomagnetic field models and expected to be robust against the particular a priori used to build them. Mode 1 exhibits three large robust vortices at medium/high latitudes, with opposite circulation under the Atlantic and the Pacific hemispheres. Mode 2 interesting...
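
    The mode-extraction step (Principal Component Analysis of a space-time field via the SVD) can be sketched as follows; the random matrix stands in for the inverted core flows.

    ```python
    import numpy as np

    rng = np.random.default_rng(2)
    n_time, n_space = 120, 500
    flow = rng.normal(size=(n_time, n_space))   # placeholder space-time field

    anom = flow - flow.mean(axis=0)             # remove the time-mean flow
    u, s, vt = np.linalg.svd(anom, full_matrices=False)
    explained = s**2 / (s**2).sum()

    modes = vt[:3]            # spatial patterns of modes 1-3
    pcs = u[:, :3] * s[:3]    # corresponding time coefficients
    print("variance explained by first 3 modes:", explained[:3].round(3))
    ```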

  6. Estimation in the polynomial errors-in-variables model

    Institute of Scientific and Technical Information of China (English)

    ZHANG; Sanguo

    2002-01-01

    [1]Kendall, M. G., Stuart, A., The Advanced Theory of Statistics, Vol. 2, New York: Charles Griffin, 1979.[2]Fuller, W. A., Measurement Error Models, New York: Wiley, 1987.[3]Carroll, R. J., Ruppert D., Stefanski, L. A., Measurement Error in Nonlinear Models, London: Chapman & Hall, 1995.[4]Stout, W. F., Almost Sure Convergence, New York: Academic Press, 1974,154.[5]Petrov, V. V., Sums of Independent Random Variables, New York: Springer-Verlag, 1975, 272.[6]Zhang, S. G., Chen, X. R., Consistency of modified MLE in EV model with replicated observation, Science in China, Ser. A, 2001, 44(3): 304-310.[7]Lai, T. L., Robbins, H., Wei, C. Z., Strong consistency of least squares estimates in multiple regression, J. Multivariate Anal., 1979, 9: 343-362.

  7. Initial CGE Model Results Summary Exogenous and Endogenous Variables Tests

    Energy Technology Data Exchange (ETDEWEB)

    Edwards, Brian Keith [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Boero, Riccardo [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Rivera, Michael Kelly [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)

    2017-08-07

    The following discussion presents initial results of tests of the most recent version of the National Infrastructure Simulation and Analysis Center Dynamic Computable General Equilibrium (CGE) model developed by Los Alamos National Laboratory (LANL). The intent is to test and assess the model's behavioral properties. The tests evaluated whether the predicted impacts are reasonable from a qualitative perspective: whether a predicted change, be it an increase or decrease in other model variables, is consistent with prior economic intuition and expectations. One purpose of this effort is to determine whether model changes are needed in order to improve its behavior qualitatively and quantitatively.

  8. An Application of Latent Variable Structural Equation Modeling For Experimental Research in Educational Technology

    National Research Council Canada - National Science Library

    Hyeon Woo LEE

    2011-01-01

    An Application of Latent Variable Structural Equation Modeling for Experimental Research in Educational Technology. As the technology-enriched learning environments

  9. Modeling intraindividual variability with repeated measures data methods and applications

    CERN Document Server

    Hershberger, Scott L

    2013-01-01

    This book examines how individuals behave across time and to what degree that behavior changes, fluctuates, or remains stable. It features the most current methods on modeling repeated measures data as reported by a distinguished group of experts in the field. The goal is to make the latest techniques used to assess intraindividual variability accessible to a wide range of researchers. Each chapter is written in a "user-friendly" style such that even the "novice" data analyst can easily apply the techniques. Each chapter features: a minimum discussion of mathematical detail; an empirical example

  10. Estimation and variable selection for generalized additive partial linear models

    KAUST Repository

    Wang, Li

    2011-08-01

    We study generalized additive partial linear models, proposing the use of polynomial spline smoothing for estimation of nonparametric functions, and deriving quasi-likelihood based estimators for the linear parameters. We establish asymptotic normality for the estimators of the parametric components. The procedure avoids solving large systems of equations as in kernel-based procedures and thus results in gains in computational simplicity. We further develop a class of variable selection procedures for the linear parameters by employing a nonconcave penalized quasi-likelihood, which is shown to have an asymptotic oracle property. Monte Carlo simulations and an empirical example are presented for illustration. © Institute of Mathematical Statistics, 2011.

  11. Grassmann Variables and the Jaynes-Cummings Model

    CERN Document Server

    Dalton, Bryan J; Jeffers, John; Barnett, Stephen M

    2012-01-01

    This paper shows that phase space methods using a positive P type distribution function involving both c-number variables (for the cavity mode) and Grassmann variables (for the two level atom) can be used to treat the Jaynes-Cummings model. Although it is a Grassmann function, the distribution function is equivalent to six c-number functions of the two bosonic variables. Experimental quantities are given as bosonic phase space integrals involving the six functions. A Fokker-Planck equation involving both left and right Grassmann differentiation can be obtained for the distribution function, and is equivalent to six coupled equations for the six c-number functions. The approach used involves choosing the canonical form of the (non-unique) positive P distribution function, where the correspondence rules for bosonic operators are non-standard and hence the Fokker-Planck equation is also unusual. Initial conditions, such as for initially uncorrelated states, are used to determine the initial distribution function...

  12. Rheological modelling of physiological variables during temperature variations at rest

    Science.gov (United States)

    Vogelaere, P.; de Meyer, F.

    1990-06-01

    The evolution with time of cardio-respiratory variables, blood pressure and body temperature has been studied in six males resting in semi-nude conditions during short (30 min) cold stress exposure (0°C) and during passive recovery (60 min) at 20°C. Passive cold exposure does not induce a change in HR, but increases VO2, VCO2, Ve and core temperature Tre, whereas peripheral temperature is significantly lowered. The kinetic evolution of the studied variables was investigated using a Kelvin-Voigt rheological model. The results suggest that the human body, and by extension the measured physiological variables of its functioning, does not react as a perfect viscoelastic system. Cold exposure induces a more rapid adaptation of heart rate, blood pressure and skin temperatures than that observed during the rewarming period (20°C), whereas respiratory adjustments show the opposite evolution. During the cooling period of the experiment the adaptive mechanisms, taking effect to preserve core homeothermy and to obtain a higher oxygen supply, increase the energy loss of the body.
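
    For reference, a Kelvin-Voigt element responds to a step input with an exponential approach whose time constant is η/E; a minimal sketch with illustrative parameters, not the fitted physiological values:

    ```python
    import numpy as np

    def kelvin_voigt_step(t, sigma=1.0, E=2.0, eta=10.0):
        """Creep response of a Kelvin-Voigt element to a stress step at t = 0."""
        return (sigma / E) * (1.0 - np.exp(-E * t / eta))

    t = np.linspace(0.0, 30.0, 7)    # minutes, echoing the 30-min cold exposure
    print(kelvin_voigt_step(t).round(3))
    ```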

  13. Analytic Thermoelectric Couple Modeling: Variable Material Properties and Transient Operation

    Science.gov (United States)

    Mackey, Jonathan A.; Sehirlioglu, Alp; Dynys, Fred

    2015-01-01

    To gain a deeper understanding of the operation of a thermoelectric couple, a set of analytic solutions has been derived for a variable-material-property couple and a transient couple. Using an analytic approach, as opposed to the commonly used numerical techniques, results in a set of useful design guidelines. These guidelines can serve as starting conditions for further numerical studies, or as design rules for lab-built couples. The analytic modeling considers two cases, accounting for (1) material properties which vary with temperature and (2) transient operation of a couple. The variable material property case was handled by means of an asymptotic expansion, which allows insight into the influence of temperature dependence on different material properties. The variable-property work demonstrated the important fact that materials with identical average figures of merit can lead to different conversion efficiencies due to the temperature dependence of the properties. The transient couple was investigated through a Green's function approach; several transient boundary conditions were investigated. The transient work introduces several new design considerations which are not captured by the classic steady-state analysis. The work helps in designing couples for optimal performance and assists in material selection.

  14. Variable Star Signature Classification using Slotted Symbolic Markov Modeling

    Science.gov (United States)

    Johnston, K. B.; Peter, A. M.

    2017-01-01

    With the advent of digital astronomy, new benefits and new challenges have been presented to the modern day astronomer. No longer can the astronomer rely on manual processing; instead the profession as a whole has begun to adopt more advanced computational means. This paper focuses on the construction and application of a novel time-domain signature extraction methodology and the development of a supporting supervised pattern classification algorithm for the identification of variable stars. A methodology for the reduction of stellar variable observations (time-domain data) into a novel feature space representation is introduced. The methodology presented will be referred to as Slotted Symbolic Markov Modeling (SSMM) and has a number of advantages which will be demonstrated to be beneficial, specifically for the supervised classification of stellar variables. It will be shown that the methodology outperformed a baseline standard methodology on a standardized set of stellar light curve data. The performance on a set of data derived from the LINEAR dataset will also be shown.
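
    A hedged sketch of the SSMM pipeline as described: slot the irregularly sampled light curve onto a fixed grid, symbolize amplitudes into a small alphabet, and use the symbol transition matrix as the feature vector. The slot size and alphabet size are assumptions, not the paper's tuned values.

    ```python
    import numpy as np

    def ssmm_features(times, mags, slot=1.0, n_symbols=4):
        # 1) slotting: average all observations falling in each time slot
        bins = ((times - times.min()) // slot).astype(int)
        slotted = np.array([mags[bins == b].mean()
                            for b in range(bins.max() + 1) if (bins == b).any()])
        # 2) symbolization: quantile-based alphabet
        edges = np.quantile(slotted, np.linspace(0, 1, n_symbols + 1)[1:-1])
        symbols = np.digitize(slotted, edges)
        # 3) first-order Markov transition matrix, row-normalized
        T = np.zeros((n_symbols, n_symbols))
        for s0, s1 in zip(symbols[:-1], symbols[1:]):
            T[s0, s1] += 1
        T /= np.maximum(T.sum(axis=1, keepdims=True), 1)
        return T.ravel()

    rng = np.random.default_rng(3)
    t = np.sort(rng.uniform(0, 50, 200))                       # irregular sampling
    m = np.sin(2 * np.pi * t / 7.3) + 0.1 * rng.normal(size=200)
    print(ssmm_features(t, m).reshape(4, 4).round(2))
    ```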

  15. Quantifying uncertainty, variability and likelihood for ordinary differential equation models

    LENUS (Irish Health Repository)

    Weisse, Andrea Y

    2010-10-28

    Background: In many applications, ordinary differential equation (ODE) models are subject to uncertainty or variability in initial conditions and parameters. Both uncertainty and variability can be quantified in terms of a probability density function on the state and parameter space. Results: The partial differential equation that describes the evolution of this probability density function has a form that is particularly amenable to application of the well-known method of characteristics. The value of the density at some point in time is directly accessible by the solution of the original ODE, extended by a single extra dimension (for the value of the density). This leads to simple methods for studying uncertainty, variability and likelihood, with significant advantages over more traditional Monte Carlo and related approaches, especially when studying regions with low probability. Conclusions: While such approaches based on the method of characteristics are common practice in other disciplines, their advantages for the study of biological systems have so far remained unrecognized. Several examples illustrate the performance and accuracy of the approach and its limitations.
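
    The method-of-characteristics idea amounts to extending the ODE with one extra state carrying the log-density, since d(log p)/dt = −div f along a trajectory. A 1-D sketch with an illustrative linear ODE, where the divergence is exact:

    ```python
    import numpy as np
    from scipy.integrate import solve_ivp

    a = -0.8                                  # dx/dt = a * x, so div f = a

    def augmented(t, z):
        x, logp = z
        return [a * x, -a]                    # d(log p)/dt = -div f along the path

    x0 = 1.0
    logp0 = -0.5 * (x0**2 + np.log(2 * np.pi))    # start on a standard normal
    sol = solve_ivp(augmented, (0.0, 2.0), [x0, logp0], rtol=1e-8)
    x_t, logp_t = sol.y[0, -1], sol.y[1, -1]
    print(f"x(2) = {x_t:.4f}, density carried to x(2) = {np.exp(logp_t):.4f}")
    print("analytic check:", float(np.exp(logp0 - a * 2.0)))
    ```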

  16. The Dynamics and Variability Model Intercomparison Project (DynVarMIP) for CMIP6: assessing the stratosphere–troposphere system

    OpenAIRE

    Gerber, Edwin P.; Manzini, Elisa

    2016-01-01

    Diagnostics of atmospheric momentum and energy transport are needed to investigate the origin of circulation biases in climate models and to understand the atmospheric response to natural and anthropogenic forcing. Model biases in atmospheric dynamics are one of the factors that increase uncertainty in projections of regional climate, precipitation and extreme events. Here we define requirements for diagnosing the atmospheric circulation and variability across temporal scales and for evaluati...

  17. Methodology Aspects of Quantifying Stochastic Climate Variability with Dynamic Models

    Science.gov (United States)

    Nuterman, Roman; Jochum, Markus; Solgaard, Anna

    2015-04-01

    The paleoclimatic records show that climate has changed dramatically through time. For the past few million years it has been oscillating between ice ages, with large parts of the continents covered with ice, and warm interglacial periods like the present one. It is commonly assumed that these glacial cycles are related to changes in insolation due to periodic changes in Earth's orbit around the Sun (Milankovitch theory). However, this relationship is far from understood. The insolation changes are so small that enhancing feedbacks must be at play. It might even be that the external perturbation plays only a minor role in comparison to internal stochastic variations or internal oscillations. This claim is based on several shortcomings of the Milankovitch theory: prior to one million years ago, the duration of the glacial cycles was indeed 41,000 years, in line with the obliquity cycle of Earth's orbit. This duration changed at the so-called Mid-Pleistocene transition to approximately 100,000 years. Moreover, according to Milankovitch's theory the interglacial of 400,000 years ago should not have happened. Thus, while prior to one million years ago the pacing of these glacial cycles may be tied to changes in Earth's orbit, we do not understand the current magnitude and phasing of the glacial cycles. In principle it is possible that the glacial/interglacial cycles are not due to variations in Earth's orbit, but due to stochastic forcing or internal modes of variability. We present a new method and preliminary results for a unified framework using a fully coupled Earth System Model (ESM), in which the leading three ice age hypotheses will be investigated together. Was the waxing and waning of ice sheets due to an internal mode of variability, due to variations in Earth's orbit, or simply due to a low-order auto-regressive process (i.e., noise integrated by a system with memory)? The central idea is to use Generalized Linear Models (GLM), which can handle both

  18. Ozone Concentration Prediction via Spatiotemporal Autoregressive Model With Exogenous Variables

    Science.gov (United States)

    Kamoun, W.; Senoussi, R.

    2009-04-01

    Forecasts of environmental variables are nowadays of main concern for public health and agricultural management. In this context a large literature is devoted to the spatio-temporal modelling of these variables using different statistical approaches. However, most studies have ignored the potential contribution of local (e.g. meteorological and/or geographical) covariables, as well as the dynamical characteristics of the observations. In this study, we present a spatiotemporal short-term forecasting model for ozone concentration based on regularly observed covariables at predefined geographical sites. Our driving system simply combines a multidimensional second-order autoregressive structured process with a linear regression model over influential exogenous factors, and reads as follows: Z(t) = A(θ, D) [Σ_{i=1}^{2} α_i Z(t−i)] + B(θ, D) [Σ_{j=1}^{q} β_j X_j(t)] + ε(t). Z(t) = (Z_1(t), …, Z_n(t)) represents the vector of ozone concentrations at time t at the n geographical sites, whereas X_j(t) = (X_{1j}(t), …, X_{nj}(t)) denotes the j-th exogenous variable observed over these sites. The n×n matrix functions A and B account for the spatial relationships between sites through the inter-site distance matrix D and a vector parameter θ. The multidimensional white noise ε is assumed to be Gaussian and spatially correlated but temporally independent. A covariance structure of Z that takes account of the spatial dependence of the noise is deduced under a stationarity hypothesis and then included in the likelihood function. Statistical model and estimation procedure: contrary to the widely used choice of a {0,1}-valued neighbour matrix A, we put forward two more natural choices of exponential or power decay. Moreover, the model proved stable enough to readily accommodate the crude observations without the usual tedious and somewhat arbitrary variable transformations. Data set and preliminary analysis: in our case, the ozone variable represents the daily maximum ozone
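
    A hedged single-site sketch of the driving equation above, dropping the spatial coupling A(θ, D) and the correlated noise, and estimating the autoregressive and exogenous coefficients by ordinary least squares on simulated placeholder data:

    ```python
    import numpy as np

    rng = np.random.default_rng(5)
    T = 200
    x = rng.normal(size=(T, 2))            # e.g. temperature and wind at one site
    z = np.zeros(T)
    for t in range(2, T):                  # simulate the single-site dynamics
        z[t] = (0.5 * z[t-1] + 0.2 * z[t-2]
                + x[t] @ np.array([1.5, -0.7]) + rng.normal(0, 0.5))

    # design matrix: [z(t-1), z(t-2), x1(t), x2(t)]
    design = np.column_stack([z[1:-1], z[:-2], x[2:, 0], x[2:, 1]])
    coef, *_ = np.linalg.lstsq(design, z[2:], rcond=None)
    print("alpha1, alpha2, beta1, beta2 ~", coef.round(2))
    ```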

  19. Emerging technologies to create inducible and genetically defined porcine cancer models

    Directory of Open Access Journals (Sweden)

    Lawrence B Schook

    2016-02-01

    There is an emerging need for new animal models that address unmet translational cancer research requirements. Transgenic porcine models provide an exceptional opportunity due to their genetic, anatomic and physiological similarities with humans. Due to recent advances in the sequencing of domestic animal genomes and the development of new organism cloning technologies, it is now very feasible to utilize pigs, a malleable species with anatomic and physiological features similar to those of humans, in which to develop cancer models. In this review, we discuss genetic modification technologies successfully used to produce porcine biomedical models, in particular the Cre-loxP system, as well as major advances in, and perspectives on, the CRISPR/Cas9 system. Recent advancements in porcine tumor modeling and genome editing will bring porcine models to the forefront of translational cancer research.

  20. Statistical mechanics of clonal expansion in lymphocyte networks modelled with slow and fast variables

    Science.gov (United States)

    Mozeika, Alexander; Coolen, Anthony C. C.

    2017-01-01

    We use statistical mechanical techniques to model the adaptive immune system, represented by lymphocyte networks in which B cells interact with T cells and antigen. We assume that B- and T-clones evolve in different thermal noise environments and on different timescales, and derive stationary distributions and study the expansion of B clones for the case where these timescales are adiabatically separated. We compute characteristics of B-clone sizes, such as average concentrations, in parameter regimes where T-clone sizes are modelled as binary variables. This analysis is independent of the network topology, and its results are qualitatively consistent with experimental observations. To obtain the full distributions of B-clone sizes we assume further that the network topologies are random and locally equivalent to trees. This allows us to compute these distributions via the Bethe-Peierls approach. As an example we calculate B-clone distributions for immune models defined on random regular networks.

  1. Separation of variables for integrable spin-boson models

    CERN Document Server

    Amico, Luigi; Osterloh, Andreas; Wirth, Tobias

    2010-01-01

    We formulate the functional Bethe ansatz for bosonic (infinite dimensional) representations of the Yang-Baxter algebra. The main deviation from the standard approach consists in a half infinite 'Sklyanin lattice' made of the eigenvalues of the operator zeros of the Bethe annihilation operator. By a separation of variables, functional TQ equations are obtained for this half infinite lattice. They provide valuable information about the spectrum of a given Hamiltonian model. We apply this procedure to integrable spin-boson models subject to both twisted and open boundary conditions. In the case of general twisted and certain open boundary conditions polynomial solutions to these TQ equations are found and we compute the spectrum of both the full transfer matrix and its quasi-classical limit. For generic open boundaries we present a two-parameter family of Bethe equations, derived from TQ equations that are compatible with polynomial solutions for Q. A connection of these parameters to the boundary fields is stil...

  2. Viscous Dark Energy Models with Variable G and Λ

    Science.gov (United States)

    Arbab, Arbab I.

    2008-10-01

    We consider a cosmological model with bulk viscosity η and variable cosmological constant Λ ∝ ρ^(−α), α = const, and variable gravitational constant G. The model exhibits many interesting cosmological features. Inflation proceeds due to the presence of bulk viscosity and dark energy, without requiring the equation of state p = −ρ. During the inflationary era the energy density ρ does not remain constant, as it does in the de Sitter case. Moreover, the cosmological and gravitational constants increase exponentially with time, whereas the energy density and viscosity decrease exponentially with time. The rate of mass creation during inflation is found to be enormous, suggesting that all matter in the universe is created during inflation.

  3. Variable variance Preisach model for multilayers with perpendicular magnetic anisotropy

    Science.gov (United States)

    Franco, A. F.; Gonzalez-Fuentes, C.; Morales, R.; Ross, C. A.; Dumas, R.; Åkerman, J.; Garcia, C.

    2016-08-01

    We present a variable variance Preisach model that fully accounts for the different magnetization processes of a multilayer structure with perpendicular magnetic anisotropy by adjusting the evolution of the interaction variance as the magnetization changes. We successfully compare, in a quantitative manner, the results obtained with this model to experimental hysteresis loops of several [CoFeB/Pd]n multilayers. The effect of the number of repetitions and the thicknesses of the CoFeB and Pd layers on the magnetization reversal of the multilayer structure is studied, and it is found that many of the observed phenomena can be attributed to an increase of the magnetostatic interactions and a subsequent decrease of the size of the magnetic domains. Increasing the CoFeB thickness leads to the disappearance of the perpendicular anisotropy, and a minimum thickness of the Pd layer is necessary to achieve an out-of-plane magnetization.
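
    A minimal classical (fixed-variance) scalar Preisach sketch: the hysteresis loop emerges from an ensemble of rectangular hysterons with Gaussian-distributed switching fields; letting that variance evolve with magnetization is the paper's extension. All distribution parameters are illustrative.

    ```python
    import numpy as np

    rng = np.random.default_rng(6)
    n = 5000
    hc = np.abs(rng.normal(1.0, 0.3, n))      # coercive fields of the hysterons
    hu = rng.normal(0.0, 0.2, n)              # interaction (bias) fields
    state = -np.ones(n)                       # all hysterons start "down"

    loop = []
    for h in np.concatenate([np.linspace(-3, 3, 60), np.linspace(3, -3, 60)]):
        state[h > hu + hc] = 1.0              # switch up
        state[h < hu - hc] = -1.0             # switch down
        loop.append((h, state.mean()))        # (applied field, magnetization)

    print(loop[30], loop[90])   # ascending vs descending branch near h = 0
    ```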

  4. Constrained variability of modeled T:ET ratio across biomes

    Science.gov (United States)

    Fatichi, Simone; Pappas, Christoforos

    2017-07-01

    A large variability (35-90%) in the ratio of transpiration to total evapotranspiration (referred to here as T:ET) across biomes or even at the global scale has been documented by a number of studies carried out with different methodologies. Previous empirical results also suggest that T:ET does not covary with mean precipitation and has a positive dependence on leaf area index (LAI). Here we use a mechanistic ecohydrological model, with a refined process-based description of evaporation from the soil surface, to investigate the variability of T:ET across biomes. Numerical results reveal a more constrained range and higher mean of T:ET (70 ± 9%, mean ± standard deviation) when compared to observation-based estimates. T:ET is confirmed to be independent of mean precipitation, while it is found to be correlated with LAI seasonally but uncorrelated across multiple sites. Larger LAI increases evaporation from interception but diminishes ground evaporation, with the two effects largely compensating each other. These results offer mechanistic, model-based evidence for the ongoing research on the patterns of T:ET and the factors influencing its magnitude across biomes.

  5. Multivariate models of inter-subject anatomical variability.

    Science.gov (United States)

    Ashburner, John; Klöppel, Stefan

    2011-05-15

    This paper presents a very selective review of some of the approaches for multivariate modelling of inter-subject variability among brain images. It focusses on applying probabilistic kernel-based pattern recognition approaches to pre-processed anatomical MRI, with the aim of most accurately modelling the difference between populations of subjects. Some of the principles underlying the pattern recognition approaches of Gaussian process classification and regression are briefly described, although the reader is advised to look elsewhere for full implementational details. Kernel pattern recognition methods require matrices that encode the degree of similarity between the images of each pair of subjects. This review focusses on similarity measures derived from the relative shapes of the subjects' brains. Pre-processing is viewed as generative modelling of anatomical variability, and there is a special emphasis on the diffeomorphic image registration framework, which provides a very parsimonious representation of relative shapes. Although the review is largely methodological, excessive mathematical notation is avoided as far as possible, as the paper attempts to convey a more intuitive understanding of various concepts. The paper should be of interest to readers wishing to apply pattern recognition methods to MRI data, with the aim of clinical diagnosis or biomarker development. It also tries to explain that the best models are those that most accurately predict, so similar approaches should also be relevant to basic science. Knowledge of some basic linear algebra and probability theory should make the review easier to follow, although it may still have something to offer to those readers whose mathematics may be more limited. Copyright © 2010 Elsevier Inc. All rights reserved.

  6. Defining the role of polyamines in colon carcinogenesis using mouse models

    Directory of Open Access Journals (Sweden)

    Natalia A Ignatenko

    2011-01-01

    Genetics and diet are both considered important risk determinants for colorectal cancer, a leading cause of death in the US and worldwide. Genetically engineered mouse (GEM) models have made a significant contribution to the characterization of colorectal cancer risk factors. Reliable, reproducible, and clinically relevant animal models help in the identification of the molecular events associated with disease progression and in the development of effective treatment strategies. This review is focused on the use of mouse models for studying the role of polyamines in colon carcinogenesis. We describe how the available mouse models of colon cancer, such as the multiple intestinal neoplasia (Min) mice and knockout genetic models, facilitate understanding of the role of polyamines in colon carcinogenesis and help in the development of a rational strategy for colon cancer chemoprevention.

  7. CORAL: building up the model for bioconcentration factor and defining it's applicability domain.

    Science.gov (United States)

    Toropov, A A; Toropova, A P; Lombardo, A; Roncaglioni, A; Benfenati, E; Gini, G

    2011-04-01

    CORAL (CORrelation And Logic) software can be used to build up quantitative structure-property/activity relationships (QSPR/QSAR) with optimal descriptors calculated from the simplified molecular input line entry system (SMILES). We used CORAL to evaluate the applicability domain of QSAR models, taking a model of the bioconcentration factor (logBCF) as an example. This model is based on a large training set of more than 1000 chemicals. To improve the model's predictivity and reliability on new compounds, we introduced a new function, which uses Δ(obs) = logBCF(expr) − logBCF(calc) of the predictions on the chemicals in the training set. With this approach, outliers are eliminated from the training phase. This proved useful and increased the model's predictivity. Copyright © 2011 Elsevier Masson SAS. All rights reserved.
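
    A sketch of the outlier-screening idea described above: compute Δ = logBCF(expr) − logBCF(calc) on the training set and drop compounds whose residual is extreme before refitting. The z-score rule and cutoff below are assumptions for illustration, not CORAL's exact criterion.

    ```python
    import numpy as np

    def screen_training_set(log_bcf_expr, log_bcf_calc, k=1.5):
        delta = log_bcf_expr - log_bcf_calc          # Delta(obs) per compound
        z = np.abs(delta - delta.mean()) / delta.std()
        return z <= k                                # mask of compounds retained

    expr = np.array([1.2, 0.5, 5.4, 2.1, -0.3])
    calc = np.array([1.0, 0.7, 0.2, 2.0, -0.1])      # third compound is an outlier
    print(screen_training_set(expr, calc))           # -> [True True False True True]
    ```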

  8. Teleconnections between Ethiopian rainfall variability and global SSTs: observations and methods for model evaluation

    Science.gov (United States)

    Degefu, Mekonnen Adnew; Rowell, David P.; Bewket, Woldeamlak

    2017-04-01

    Rainfall variability in Ethiopia has significant effects on rainfed agriculture and hydropower, so understanding its association with slowly varying global sea surface temperatures (SSTs) is potentially important for prediction purposes. We provide an overview of the seasonality and spatial variability of these teleconnections across Ethiopia. A quasi-objective method is employed to define coherent seasons and regions of SST-rainfall teleconnections for Ethiopia. We identify three seasons (March-May, MAM; July-September, JAS; and October-November, ON), which are similar to those defined by climatological rainfall totals. We also identify three new regions (Central and western Ethiopia, CW-Ethiopia; Southern Ethiopia, S-Ethiopia; and Northeast Ethiopia, NE-Ethiopia) that are complementary to those previously defined here based on distinct SST-rainfall teleconnections that are useful when predicting interannual anomalies. JAS rainfall over CW-Ethiopia is negatively associated with SSTs over the equatorial east Pacific and Indian Ocean. New regional detail is added to that previously found for the whole of East Africa, in particular that ON rainfall over S-Ethiopia is positively associated with equatorial east Pacific SSTs and with the Indian Ocean Dipole (IOD). Also, SST-to-rainfall correlations for other season-regions, and specifically for MAM in all regions, are found to be negligible. The representation of these teleconnections in the HadGEM2 and HadGEM3-GA3.0 coupled climate models shows mixed skill. Both models poorly represent the statistically significant teleconnections, except that HadGEM2 and the low resolution (N96) version of HadGEM3-GA3.0 better represent the association between the IOD and S-Ethiopian ON rainfall. Additionally, both models are able to represent the lack of SST-rainfall correlation in other seasons and other parts of Ethiopia.

  9. Teleconnections between Ethiopian rainfall variability and global SSTs: observations and methods for model evaluation

    Science.gov (United States)

    Degefu, Mekonnen Adnew; Rowell, David P.; Bewket, Woldeamlak

    2016-06-01

    Rainfall variability in Ethiopia has significant effects on rainfed agriculture and hydropower, so understanding its association with slowly varying global sea surface temperatures (SSTs) is potentially important for prediction purposes. We provide an overview of the seasonality and spatial variability of these teleconnections across Ethiopia. A quasi-objective method is employed to define coherent seasons and regions of SST-rainfall teleconnections for Ethiopia. We identify three seasons (March-May, MAM; July-September, JAS; and October-November, ON), which are similar to those defined by climatological rainfall totals. We also identify three new regions (Central and western Ethiopia, CW-Ethiopia; Southern Ethiopia, S-Ethiopia; and Northeast Ethiopia, NE-Ethiopia) that are complementary to those previously defined here based on distinct SST-rainfall teleconnections that are useful when predicting interannual anomalies. JAS rainfall over CW-Ethiopia is negatively associated with SSTs over the equatorial east Pacific and Indian Ocean. New regional detail is added to that previously found for the whole of East Africa, in particular that ON rainfall over S-Ethiopia is positively associated with equatorial east Pacific SSTs and with the Indian Ocean Dipole (IOD). Also, SST-to-rainfall correlations for other season-regions, and specifically for MAM in all regions, are found to be negligible. The representation of these teleconnections in the HadGEM2 and HadGEM3-GA3.0 coupled climate models shows mixed skill. Both models poorly represent the statistically significant teleconnections, except that HadGEM2 and the low resolution (N96) version of HadGEM3-GA3.0 better represent the association between the IOD and S-Ethiopian ON rainfall. Additionally, both models are able to represent the lack of SST-rainfall correlation in other seasons and other parts of Ethiopia.

  10. Adding Missing-Data-Relevant Variables to FIML-Based Structural Equation Models

    Science.gov (United States)

    Graham, John W.

    2003-01-01

    Conventional wisdom in missing data research dictates adding variables to the missing data model when those variables are predictive of (a) missingness and (b) the variables containing missingness. However, it has recently been shown that adding variables that are correlated with variables containing missingness, whether or not they are related to…

  11. Models with discrete latent variables for analysis of categorical data: a framework and a MATLAB MDLV toolbox.

    Science.gov (United States)

    Yu, Hsiu-Ting

    2013-12-01

    Studies in the social and behavioral sciences often involve categorical data, such as ratings, and define latent constructs underlying the research issues as being discrete. In this article, models with discrete latent variables (MDLV) for the analysis of categorical data are grouped into four families, defined in terms of two dimensions (time and sampling) of the data structure. A MATLAB toolbox (referred to as the "MDLV toolbox") was developed for applying these models in practical studies. For each family of models, model representations and the statistical assumptions underlying the models are discussed. The functions of the toolbox are demonstrated by fitting these models to empirical data from the European Values Study. The purpose of this article is to offer a framework of discrete latent variable models for data analysis, and to develop the MDLV toolbox for use in estimating each model under this framework. With this accessible tool, the application of data modeling with discrete latent variables becomes feasible for a broad range of empirical studies.

  12. A three-dimensional musculoskeletal model for gait analysis. Anatomical variability estimates.

    Science.gov (United States)

    White, S C; Yack, H J; Winter, D A

    1989-01-01

    Three-dimensional coordinates defining the origin and insertion of 40 muscle units, and bony landmarks for osteometric scaling were identified on dry bone specimens. Interspecimen coordinate differences along the anterior-posterior axis of the pelvis and the long bone axes of the pelvis, femur and leg were reduced by scaling but landmark differences along the other axes were not. The coordinates were mapped to living subjects using close-range photogrammetry to locate superficial reference markers. The error of predicting the positions of internal coordinates was assessed by comparing joint centre locations calculated from local axes defining the orientation of segments superior and inferior to a joint. A difference was attributed to: anatomical variability not accounted for by scaling; errors in identifying and placing reference landmarks; the accuracy of locating markers using photogrammetry and error introduced by marker oscillation during movement. Anatomical differences between specimens are one source of error in defining a musculoskeletal model but larger errors are introduced when such models are mapped to living subjects.

  13. Defining Soil Materials for 3-D Models of the Near Surface: Preliminary Findings

    Science.gov (United States)

    2012-03-01

    A transition-probability geostatistics package (TPROGS for GMS) was used to build geologic models consistent with geologic architecture, supporting the modeling of geologic features in three dimensions for sensor simulation (ERDC/GSL TR-12-9). Subject terms: geostatistics; GEOTACS; GMS; shallow subsurface; soil.

  14. Drug Absorption Modeling as a Tool to Define the Strategy in Clinical Formulation Development

    OpenAIRE

    Kuentz, Martin

    2008-01-01

    The purpose of this mini review is to discuss the use of physiologically-based drug absorption modeling to guide formulation development. Following an introduction to drug absorption modeling, this article focuses on preclinical formulation development. Case studies are presented, where the emphasis is not only on the prediction of absolute exposure values, but also on their change with altered input values. Sensitivity analysis of technologically relevant parameters, like the drug’s partic...

  15. Disambiguating seesaw models using invariant mass variables at hadron colliders

    Energy Technology Data Exchange (ETDEWEB)

    Dev, P.S. Bhupal [Consortium for Fundamental Physics, School of Physics and Astronomy, University of Manchester, Manchester M13 9PL (United Kingdom); Physik-Department T30d, Technische Universität München, James-Franck-Straße 1, 85748 Garching (Germany); Kim, Doojin [Department of Physics, University of Florida, Gainesville, FL 32611 (United States); Mohapatra, Rabindra N. [Maryland Center for Fundamental Physics and Department of Physics, University of Maryland, College Park, Maryland 20742 (United States)

    2016-01-19

    We propose ways to distinguish between different mechanisms behind the collider signals of TeV-scale seesaw models for neutrino masses using kinematic endpoints of invariant mass variables. We particularly focus on two classes of such models widely discussed in literature: (i) Standard Model extended by the addition of singlet neutrinos and (ii) Left-Right Symmetric Models. Relevant scenarios involving the same “smoking-gun” collider signature of dilepton plus dijet with no missing transverse energy differ from one another by their event topology, resulting in distinctive relationships among the kinematic endpoints to be used for discerning them at hadron colliders. These kinematic endpoints are readily translated to the mass parameters of the on-shell particles through simple analytic expressions which can be used for measuring the masses of the new particles. A Monte Carlo simulation with detector effects is conducted to test the viability of the proposed strategy in a realistic environment. Finally, we discuss the future prospects of testing these scenarios at the √s=14 and 100 TeV hadron colliders.

  16. Mixed-model Regression for Variable-star Photometry

    Science.gov (United States)

    Dose, Eric

    2016-05-01

    Mixed-model regression, a recent advance from social-science statistics, applies directly to reducing one night's photometric raw data, especially for variable stars in fields with multiple comparison stars. One regression model per filter/passband yields any or all of: transform values, extinction values, nightly zero-points, rapid zero-point fluctuations ("cirrus effect"), ensemble comparisons, vignette and gradient removal arising from incomplete flat-correction, check-star and target-star magnitudes, and specific indications of unusually large catalog magnitude errors. When images from several different fields of view are included, the models improve without complicating the calculations. The mixed-model approach is generally robust to outliers and missing data points, and it directly yields 14 diagnostic plots, used to monitor data set quality and/or residual systematic errors - these diagnostic plots may in fact turn out to be the prime advantage of this approach. Also presented is initial work on a split-annulus approach to sky background estimation, intended to address the sensitivity of photometric observations to noise within the sky-background annulus.
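
    As an illustration of the kind of model described above, the sketch below fits a single-filter photometric reduction as a mixed-effects regression in Python with statsmodels. The column names (inst_mag, cat_mag, airmass, color, image_id) and the choice of predictors are hypothetical, not taken from the abstract.

```python
# Minimal sketch of a per-filter mixed-model photometric reduction.
# Hypothetical columns of a pandas DataFrame `df`:
#   inst_mag - instrumental magnitude of a comparison star in one image
#   cat_mag  - catalog magnitude of that star
#   airmass  - airmass of the image
#   color    - catalog color index of the star (e.g., V-I)
#   image_id - image identifier (one random zero-point per image)
import pandas as pd
import statsmodels.formula.api as smf

def fit_photometric_model(df: pd.DataFrame):
    # Fixed effects estimate extinction (airmass) and the transform (color);
    # the per-image random intercept absorbs zero-point fluctuations
    # (the "cirrus effect" mentioned in the abstract).
    df = df.assign(delta_mag=df.inst_mag - df.cat_mag)
    model = smf.mixedlm("delta_mag ~ airmass + color", data=df, groups=df["image_id"])
    return model.fit()

# result.params holds the extinction and transform coefficients;
# result.random_effects holds one zero-point offset per image.
```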

  17. Development of a plug-in for Variability Modeling in Software Product Lines

    Directory of Open Access Journals (Sweden)

    María Lucía López-Araujo

    2012-03-01

    Full Text Available Software Product Lines (SPL) take economic advantage of commonality and variability among a set of software systems that exist within a specific domain. Software Product Line Engineering therefore defines a series of processes for the development of an SPL that consider commonality and variability during the software life cycle. Variability modeling is thus an essential activity in a Software Product Line Engineering approach. Several techniques for variability modeling exist today. COVAMOF stands out among them since it allows the modeling of variation points, variants and dependencies as first-class entities, providing a uniform manner of representing such concepts at the various levels of abstraction of an SPL. In order to take advantage of COVAMOF's benefits, it is necessary to have a computer-aided tool; otherwise, variability modeling and management can become an arduous task for the software engineer. This work presents the development of a COVAMOF plug-in for Eclipse.

  18. Variable Temperature Blackbodies via Variable Conductance: Thermal Design, Modelling and Testing

    Science.gov (United States)

    Melzack, N.; Jones, E.; Peters, D. M.; Hurley, J. G.; Watkins, R. E. J.; Fok, S.; Sawyer, C.; Marchetaux, G.; Acreman, A.; Winkler, R.; Lowe, D.; Theocharous, T.; Montag, V.; Gibbs, D.; Pearce, A. B.; Bishop, G.; Newman, E.; Keen, S.; Stokes, J.; Pearce, A.; Stamper, R.; Cantell-Hynes, A.

    2017-02-01

    This paper presents the overall design for large (˜ 400 mm aperture) reference blackbody cavities currently under development at the Science and Technology Facilities Council Rutherford Appleton Laboratory Space Department (STFC RAL Space), in collaboration with the National Physical Laboratory (NPL). These blackbodies are designed to operate in vacuum over a temperature range from 160 K to 370 K, with an additional capability to operate at ˜ 100 K as a point of near-zero radiance. This is a challenging problem for a single blackbody. The novel thermal design presented in this paper enables one target that can physically achieve and operate successfully at both thermal extremes, whilst also meeting stringent temperature gradient requirements. The overall blackbody design is based upon a helium gas-gap heat switch and modified to allow for variable thermal conductance. The blackbody design consists of three main concentric cylinder components—an inner cavity (aluminium alloy), a radiation shield (aluminium) and an outer liquid nitrogen (LN2) jacket (stainless steel). The internal surface of the cavity is the effective radiating surface. There is a helium gas interspace surrounding the radiation shield and enclosed by the LN2 jacket and the inner cavity. The blackbodies are now at a mature stage of development. In this paper, the overall design, focusing upon the thermal design solution, is detailed. This paper will also concern the full-scale prototype breadboard model, for which results on thermal stability, spatial gradients and other sensitivities will be presented.
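
    For orientation, the variable conductance exploited by a gas-gap heat switch can be pictured with the standard conduction relation for a thin gas layer; the symbols below are generic and are not values from the paper.

```latex
% Heat flow across the helium interspace in the continuum regime:
%   Q = conducted power, k_He(T) = helium thermal conductivity,
%   A = gap area, d = gap width, \Delta T = temperature difference.
Q \;\approx\; \frac{k_{\mathrm{He}}(T)\,A}{d}\,\Delta T
% Reducing the helium pressure toward the free-molecular regime lowers the
% effective conductance, which is how the thermal link is "switched".
```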

  19. Evaluating two model reduction approaches for large scale hedonic models sensitive to omitted variables and multicollinearity

    DEFF Research Database (Denmark)

    Panduro, Toke Emil; Thorsen, Bo Jellesmark

    2014-01-01

    …evaluate two common model reduction approaches in an empirical case. The first relies on a principal component analysis (PCA) used to construct new orthogonal variables, which are applied in the hedonic model. The second relies on a stepwise model reduction based on the variance inflation index and Akaike's information criterion. Our empirical application focuses on estimating the implicit price of forest proximity in a Danish case area, with a dataset containing 86 relevant variables. We demonstrate that the estimated implicit price for forest proximity, while positive in all models, is clearly sensitive…

  20. Variable thickness transient ground-water flow model. Volume 3. Program listings

    Energy Technology Data Exchange (ETDEWEB)

    Reisenauer, A.E.

    1979-12-01

    The Assessment of Effectiveness of Geologic Isolation Systems (AEGIS) Program is developing and applying the methodology for assessing the far-field, long-term post-closure safety of deep geologic nuclear waste repositories. AEGIS is being performed by Pacific Northwest Laboratory (PNL) under contract with the Office of Nuclear Waste Isolation (ONWI) for the Department of Energy (DOE). One task within AEGIS is the development of methodology for analysis of the consequences (water pathway) from loss of repository containment as defined by various release scenarios. Analysis of the long-term, far-field consequences of release scenarios requires the application of numerical codes which simulate the hydrologic systems, model the transport of released radionuclides through the hydrologic systems to the biosphere, and, where applicable, assess the radiological dose to humans. Hydrologic and transport models are available at several levels of complexity or sophistication. Model selection and use are determined by the quantity and quality of input data. Model development under AEGIS and related programs provides three levels of hydrologic models, two levels of transport models, and one level of dose models (with several separate models). This is the third of three volumes describing the VTT (Variable Thickness Transient) Groundwater Hydrologic Model, a second-level (intermediate complexity) model of two-dimensional saturated groundwater flow.
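
    As background on what a variable-thickness (unconfined) transient formulation involves, models of this general class solve a Boussinesq-type equation in which the saturated thickness multiplies the hydraulic conductivity. The generic form below is for orientation only and is not quoted from the report.

```latex
% Generic 2-D transient unconfined groundwater flow (Boussinesq type):
%   h = hydraulic head, b = saturated (variable) thickness,
%   K = hydraulic conductivity, S = storage coefficient, W = source/sink.
\frac{\partial}{\partial x}\!\left(K\,b\,\frac{\partial h}{\partial x}\right)
+ \frac{\partial}{\partial y}\!\left(K\,b\,\frac{\partial h}{\partial y}\right)
= S\,\frac{\partial h}{\partial t} + W
```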

  1. Resolving structural variability in network models and the brain.

    Directory of Open Access Journals (Sweden)

    Florian Klimm

    2014-03-01

    Full Text Available Large-scale white matter pathways crisscrossing the cortex create a complex pattern of connectivity that underlies human cognitive function. Generative mechanisms for this architecture have been difficult to identify in part because little is known in general about mechanistic drivers of structured networks. Here we contrast network properties derived from diffusion spectrum imaging data of the human brain with 13 synthetic network models chosen to probe the roles of physical network embedding and temporal network growth. We characterize both the empirical and synthetic networks using familiar graph metrics, but presented here in a more complete statistical form, as scatter plots and distributions, to reveal the full range of variability of each measure across scales in the network. We focus specifically on the degree distribution, degree assortativity, hierarchy, topological Rentian scaling, and topological fractal scaling--in addition to several summary statistics, including the mean clustering coefficient, the shortest path-length, and the network diameter. The models are investigated in a progressive, branching sequence, aimed at capturing different elements thought to be important in the brain, and range from simple random and regular networks, to models that incorporate specific growth rules and constraints. We find that synthetic models that constrain the network nodes to be physically embedded in anatomical brain regions tend to produce distributions that are most similar to the corresponding measurements for the brain. We also find that network models hardcoded to display one network property (e.g., assortativity) do not in general simultaneously display a second (e.g., hierarchy). This relative independence of network properties suggests that multiple neurobiological mechanisms might be at play in the development of human brain network architecture. Together, the network models that we develop and employ provide a potentially useful…

  2. Innovation and dynamic capabilities of the firm: Defining an assessment model

    Directory of Open Access Journals (Sweden)

    André Cherubini Alves

    2017-05-01

    Full Text Available Innovation and dynamic capabilities have gained considerable attention in both academia and practice. While one of the oldest inquiries in economic and strategy literature involves understanding the features that drive business success and a firm’s perpetuity, the literature still lacks a comprehensive model of innovation and dynamic capabilities. This study presents a model that assesses firms’ innovation and dynamic capabilities perspectives based on four essential capabilities: development, operations, management, and transaction capabilities. Data from a survey of 1,107 Brazilian manufacturing firms were used for empirical testing and discussion of the dynamic capabilities framework. Regression and factor analyses validated the model; we discuss the results, contrasting them with the dynamic capabilities framework. Operations capability is the least dynamic of all capabilities, with the least influence on innovation. This reinforces the notion of operations capabilities as “ordinary capabilities,” whereas management, development, and transaction capabilities better explain firms’ dynamics and innovation.

  3. STUDENT-DEFINED QUALITY BY KANO MODEL: A CASE STUDY OF ENGINEERING STUDENTS IN INDIA

    Directory of Open Access Journals (Sweden)

    Ismail Wilson Taifa

    2016-09-01

    Full Text Available Engineering students in India, like those elsewhere in the world, need well-designed classroom furniture that enables them to attend lectures without negative long-term effects. Engineering students in India have not yet been involved in suggesting their requirements for improving the mostly outdated furniture at their colleges. Among the available improvement techniques, the Kano Model is one of the most effective approaches. The main objective of the study was to identify and categorise all the main attributes of classroom furniture for the purpose of increasing student satisfaction in the long run. The Kano Model was applied to compile an exhaustive list of requirements for redesigning classroom furniture. Cronbach's alpha was computed with SPSS 16.0 for validation purposes; it ranged between 0.8 and 0.9, indicating good internal consistency. Further research can be done by integrating the Kano Model with Quality Function Deployment.

  4. Considerations about the Modeling of Software Defined Radio for Mobile Communications Networks

    Directory of Open Access Journals (Sweden)

    Lucian Morgos

    2009-05-01

    Full Text Available This paper presents the contribution of the authors regarding the modeling of a software defined radio (SDR). Digital radios are based on ADC interfaces that convert the baseband signal into a digital format, and a DSP that accomplishes the demodulation according to a suitable digital processing algorithm. To test the model, the authors chose a radio signal with Weibull-distributed quadrature amplitude modulation (QAM) that passes through a scattering medium to the RF front end of the SDR. The fading effect was also taken into consideration. The demodulation results using the proposed SDR model were compared with a Rayleigh distribution. The simulation results, obtained with the help of MATLAB, emphasize the effects of the quantization levels of the ADC circuit.

  5. Development of comprehensive accident models for two-lane rural highways using exposure, geometry, consistency and context variables.

    Science.gov (United States)

    Cafiso, Salvatore; Di Graziano, Alessandro; Di Silvestro, Giacomo; La Cava, Grazia; Persaud, Bhagwant

    2010-07-01

    In Europe, approximately 60% of road accident fatalities occur on two-lane rural roads. Thus, research to develop and enhance explanatory and predictive models for this road type continues to be of interest in mitigating these accidents. To this end, this paper describes a novel and extensive data collection and modeling effort to define accident models for two-lane road sections based on a unique combination of exposure, geometry, consistency and context variables directly related to safety performance. The first part of the paper documents how these variables were identified for the segmentation of highways into homogeneous sections. Next is a description of the extensive data collection effort that utilized differential cinematic GPS surveys to define the horizontal alignment variables, and road safety inspections (RSIs) to quantify the other road characteristics related to safety. The final part of the paper focuses on the calibration of models for estimating the expected number of accidents on homogeneous sections that can be characterized by constant values of the explanatory variables. Several candidate models were considered for calibration using the Generalized Linear Modeling (GLM) approach. After considering the statistical significance of the parameters related to exposure, geometry, consistency and context factors, and goodness-of-fit statistics, 19 models were ranked and three were selected as the recommended models. The first of the three is a base model, with length and traffic as the only predictor variables; since these variables are the only ones likely to be available network-wide, this base model can be used in an empirical Bayesian calculation to conduct network screening for ranking "sites with promise" of safety improvement. The other two models represent the best statistical fits with different combinations of significant variables related to exposure, geometry, consistency and context factors. These multiple variable models can be used, with…
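
    To make the GLM calibration step concrete, here is a minimal sketch of a base model of the kind described, with length and traffic as the only predictors, fitted as a negative binomial GLM in Python with statsmodels. The data frame and its column names are hypothetical.

```python
# Minimal sketch of a base accident model: expected accident count on a
# homogeneous section as a function of exposure only (length and AADT).
# Hypothetical columns of `df`: accidents, length_km, aadt.
import numpy as np
import pandas as pd
import statsmodels.api as sm

def fit_base_model(df: pd.DataFrame):
    # Log link with log(length) as an offset makes the expected count
    # proportional to section length; traffic enters as a power of AADT.
    X = sm.add_constant(np.log(df[["aadt"]]))
    model = sm.GLM(
        df["accidents"],
        X,
        family=sm.families.NegativeBinomial(alpha=1.0),  # fixed dispersion, for simplicity
        offset=np.log(df["length_km"]),
    )
    return model.fit()

# The fitted mean is E[accidents] = length_km * exp(b0) * aadt**b1, the usual
# form of a safety performance function used in empirical Bayes screening.
```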

  6. Defining Service Quality in Tramp Shipping: Conceptual Model and Empirical Evidence

    Directory of Open Access Journals (Sweden)

    Vinh V. Thai

    2014-04-01

    Full Text Available Tramp shipping constitutes a prominent segment of the shipping market. As customers increasingly seek value from service providers, demanding high quality services at low prices, there is a pressing need to understand critically what constitutes service quality for the tramp sector. In this respect, however, no prior research has been conducted for this market segment. This study recognises the gap in the existing maritime literature and aimed to propose and validate a service quality (SQ) model to address it. The study employs a triangulation approach, utilising literature review, interviews and surveys to develop, refine and verify the proposed SQ model. Interviews were conducted with various parties in the tramp sector, and a survey of 343 tramp shippers and 254 tramp service providers was also conducted. It was revealed that the SQ model of six dimensions of Corporate Image, Customer Focus, Management, Outcomes, Personnel and Technical, and their 18 associated attributes, could be used as a reliable tool to measure service quality in tramp shipping. This research contributes to fill the gap in the existing literature by introducing and validating a new SQ model specifically for tramp shipping. Meanwhile, the model can also be used by practitioners to obtain their customers’ evaluation of their service quality, as well as a benchmarking tool for continuous improvement. This study is, however, confined to a small sample collected in Singapore and to the bulk commodity context. Further studies on the practicality of the SQ model involving larger sample sizes, other regions, and general and specialized cargoes would be required to enhance its reliability.

  7. Modeling anger and aggressive driving behavior in a dynamic choice-latent variable model.

    Science.gov (United States)

    Danaf, Mazen; Abou-Zeid, Maya; Kaysi, Isam

    2015-02-01

    This paper develops a hybrid choice-latent variable model combined with a Hidden Markov model in order to analyze the causes of aggressive driving and forecast its manifestations accordingly. The model is grounded in the state-trait anger theory; it treats trait driving anger as a latent variable that is expressed as a function of individual characteristics, or as an agent effect, and state anger as a dynamic latent variable that evolves over time and affects driving behavior, and that is expressed as a function of trait anger, frustrating events, and contextual variables (e.g., geometric roadway features, flow conditions, etc.). This model may be used in order to test measures aimed at reducing aggressive driving behavior and improving road safety, and can be incorporated into micro-simulation packages to represent aggressive driving. The paper also presents an application of this model to data obtained from a driving simulator experiment performed at the American University of Beirut. The results derived from this application indicate that state anger at a specific time period is significantly affected by the occurrence of frustrating events, trait anger, and the anger experienced at the previous time period. The proposed model exhibited a better goodness of fit compared to a similar simple joint model where driving behavior and decisions are expressed as a function of the experienced events explicitly and not the dynamic latent variable.
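
    A schematic of the dynamic latent-variable structure described above may help; the notation is illustrative and not the authors' exact specification.

```latex
% State anger A*_t as a dynamic latent variable (illustrative form):
%   T = latent trait anger, E_t = frustrating events at time t,
%   X_t = contextual variables, eps_t = disturbance.
A^{*}_{t} \;=\; \alpha\,A^{*}_{t-1} \;+\; \beta\,T \;+\; \gamma' E_{t} \;+\; \delta' X_{t} \;+\; \varepsilon_{t}
% Driving behavior at time t is then modeled conditional on the latent
% state A*_t rather than on the frustrating events directly.
```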

  8. Defining Characteristics of Diagnostic Classification Models and the Problem of Retrofitting in Cognitive Diagnostic Assessment

    Science.gov (United States)

    Gierl, Mark J.; Cui, Ying

    2008-01-01

    One promising application of diagnostic classification models (DCM) is in the area of cognitive diagnostic assessment in education. However, the successful application of DCM in educational testing will likely come with a price--and this price may be in the form of new test development procedures and practices required to yield data that satisfy…

  9. Defining and Comparing the Reading Comprehension Construct: A Cognitive-Psychometric Modeling Approach

    Science.gov (United States)

    Svetina, Dubravka; Gorin, Joanna S.; Tatsuoka, Kikumi K.

    2011-01-01

    As a construct definition, the current study develops a cognitive model describing the knowledge, skills, and abilities measured by critical reading test items on a high-stakes assessment used for selection decisions in the United States. Additionally, in order to establish generalizability of the construct meaning to other similarly structured…

  10. A Process Model for Understanding Adaptation to Sexual Abuse: The Role of Shame in Defining Stigmatization.

    Science.gov (United States)

    Feiring, Candice; And Others

    1996-01-01

    This article presents a theoretical and testable model of psychological processes in child and adolescent victims of sexual abuse. It proposes that sexual abuse leads to shame through mediation of cognitive attributions which leads to poor adjustment. Three factors--social support, gender, and developmental period--are hypothesized to moderate the…

  11. The Demand-Control Model: Specific demands, specific Control, and well-defined groups

    NARCIS (Netherlands)

    Jonge, J. de; Dollard, M.F.; Dormann, C.; Blanc, P.M.; Houtman, I.L.D.

    2000-01-01

    The purpose of this study was to test the Demand-Control Model (DCM), accompanied by three goals. Firstly, we used alternative, more focused, and multifaceted measures of both job demands and job control that are relevant and applicable to today's working contexts. Secondly, this study intended to

  12. Examples of EOS Variables as compared to the UMM-Var Data Model

    Science.gov (United States)

    Cantrell, Simon; Lynnes, Chris

    2016-01-01

    In an effort to provide EOSDIS clients with a way to discover and use variable data from different providers, a Unified Metadata Model for Variables is being created. This presentation gives an overview of the model and the use cases we are handling.

  13. Modelling variability in black hole binaries: linking simulations to observations

    CERN Document Server

    Ingram, Adam

    2011-01-01

    Black hole accretion flows show rapid X-ray variability. The Power Spectral Density (PSD) of this is typically fit by a phenomenological model of multiple Lorentzians for both the broad band noise and Quasi-Periodic Oscillations (QPOs). Our previous paper (Ingram & Done 2011) developed the first physical model for the PSD and fit this to observational data. This was based on the same truncated disc/hot inner flow geometry which can explain the correlated properties of the energy spectra. This assumes that the broad band noise is from propagating fluctuations in mass accretion rate within the hot flow, while the QPO is produced by global Lense-Thirring precession of the same hot flow. Here we develop this model, making some significant improvements. Firstly we specify that the viscous frequency (equivalently, surface density) in the hot flow has the same form as that measured from numerical simulations of precessing, tilted accretion flows. Secondly, we refine the statistical techniques which we use to fit...
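
    For reference, the phenomenological description mentioned above sums Lorentzian components; a common parameterization (not quoted from the paper) is:

```latex
% Multi-Lorentzian PSD: component i has centroid frequency nu_i,
% half-width Delta_i, and integrated fractional rms r_i.
P(\nu) \;=\; \sum_{i} \frac{r_i^{2}}{\pi}\,\frac{\Delta_i}{\Delta_i^{2} + (\nu-\nu_i)^{2}}
% Broad-band noise components sit at nu_i ~ 0, while a QPO is a narrow
% component (small Delta_i) centred at a finite nu_i.
```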

  14. Hindered rotor models with variable kinetic functions for accurate thermodynamic and kinetic predictions

    Science.gov (United States)

    Reinisch, Guillaume; Leyssale, Jean-Marc; Vignoles, Gérard L.

    2010-10-01

    We present an extension of some popular hindered rotor (HR) models, namely, the one-dimensional HR (1DHR) and the degenerated two-dimensional HR (d2DHR) models, allowing for a simple and accurate treatment of internal rotations. This extension, based on the use of a variable kinetic function in the Hamiltonian instead of a constant reduced moment of inertia, is extremely suitable in the case of rocking/wagging motions involved in dissociation or atom transfer reactions. The variable kinetic function is first introduced in the framework of a classical 1DHR model. Then, an effective temperature and potential dependent constant is proposed in the cases of quantum 1DHR and classical d2DHR models. These methods are finally applied to the atom transfer reaction SiCl3+BCl3→SiCl4+BCl2. We show, for this particular case, that a proper accounting of internal rotations greatly improves the accuracy of thermodynamic and kinetic predictions. Moreover, our results confirm (i) that using a suitably defined kinetic function appears to be very adapted to such problems; (ii) that the separability assumption of independent rotations seems justified; and (iii) that a quantum mechanical treatment is not a substantial improvement with respect to a classical one.
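
    To fix ideas, replacing the constant reduced moment of inertia with an angle-dependent kinetic function I(θ) gives, after integrating out the conjugate momentum, a classical 1DHR partition function of the form below (notation generic, not copied from the paper).

```latex
% Classical 1DHR partition function with variable kinetic function I(theta):
q_{\mathrm{1DHR}} \;=\; \frac{1}{h}\int_{0}^{2\pi}
  \sqrt{2\pi\,I(\theta)\,k_{B}T}\;\, e^{-V(\theta)/k_{B}T}\,\mathrm{d}\theta
% For constant I(theta) = I this reduces to the standard hindered-rotor
% result with reduced moment of inertia I.
```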

  15. Modeling high speed growth of large rods of cesium iodide crystals by edge-defined film-fed growth (EFG)

    Science.gov (United States)

    Yeckel, Andrew

    2016-09-01

    A thermocapillary model of edge-defined film-fed growth (EFG) is developed to analyze an experimental system for high speed growth of cesium iodide as a model system for halide scintillator production. The model simulates heat transfer and fluid dynamics in the die, melt, and crystal under conditions of steady growth. Appropriate mass, force, and energy balances are used to compute self-consistent shapes of the growth interface and melt-vapor meniscus. The model is applied to study the effects of growth rate, die geometry, and furnace heat transfer on the limits of system operability. An inverse problem formulation is used to seek operable states at high growth rates by adjusting the overall temperature level and thermal gradient in the furnace. The model predicts that steady growth is feasible at rates greater than 20 mm/h for crystals up to 18 mm in diameter under reasonable furnace gradients.

  16. Defining the Meaning of a Major Modeling and Simulation Change as Applied to Accreditation

    Science.gov (United States)

    2012-12-12

    Threat scenario variants include a spammer not linked to a known threat and (Variant 1b) a spammer associated with organized crime cartels. Background: "deep web" sites (without URLs or DNS services) were found containing very large numbers of suspiciously uninteresting photographs, suggestive of steganographic content. The model describes the "deep web" image servers and the performance of those image servers; once modified, the model will be used to select one of the two download…

  17. Latent Variable Models, Cognitive Modelling, and Working Memory: a Meeting Point

    OpenAIRE

    Rodríguez-Villagra, Odir Antonio

    2015-01-01

    Latent variable models and formal cognitive models share some elements of their object of study, various philosophical aspects, and some parts of their methodology. Nevertheless, little communication exists between their theories and findings. In order to highlight similarities and differences, this study implemented and tested a formal model proposing that interference among representations is a mechanism limiting working memory capacity (i.e., the interference model of Oberauer & Kliegl, 200...

  18. Pathologic Correlates of Primary Central Nervous System Lymphoma Defined in an Orthotopic Xenograft Model

    Science.gov (United States)

    Kadoch, Cigall; Dinca, Eduard B.; Voicu, Ramona; Chen, Lingjing; Nguyen, Diana; Parikh, Seema; Karrim, Juliana; Shuman, Marc A.; Lowell, Clifford A.; Treseler, Patrick A.; James, C. David; Rubenstein, James L.

    2014-01-01

    Purpose: The prospect for advances in the treatment of patients with primary central nervous system lymphoma (PCNSL) is likely dependent on the systematic evaluation of its pathobiology. Animal models of PCNSL are needed to facilitate the analysis of its molecular pathogenesis and for the efficient evaluation of novel therapeutics. Experimental Design: We characterized the molecular pathology of CNS lymphoma tumors generated by the intracerebral implantation of Raji B lymphoma cells in athymic mice. Lymphoma cells were modified for bioluminescence imaging to facilitate monitoring of tumor growth and response to therapy. In parallel, we identified molecular features of lymphoma xenograft histopathology that are evident in human PCNSL specimens. Results: Intracerebral Raji tumors were determined to faithfully reflect the molecular pathogenesis of PCNSL, including the predominant immunophenotypic state of differentiation of lymphoma cells and their reactive microenvironment. We show the expression of interleukin-4 by Raji and other B lymphoma cell lines in vitro and by Raji tumors in vivo and provide evidence for a role of this cytokine in the M2 polarization of lymphoma macrophages both in the murine model and in diagnostic specimens of human PCNSL. Conclusion: Intracerebral implantation of Raji cells results in a reproducible and invasive xenograft model, which recapitulates the histopathology and molecular features of PCNSL, and is suitable for preclinical testing of novel agents. We also show for the first time the feasibility and accuracy of tumor bioluminescence in the monitoring of a highly infiltrative brain tumor. PMID:19276270

  19. Defining the next generation modeling of coastal ecotone dynamics in response to global change

    Science.gov (United States)

    Jiang, Jiang; DeAngelis, Donald L.; Teh, Su-Y; Krauss, Ken W.; Wang, Hongqing; Haidong, Li; Smith, Thomas; Koh, Hock L.

    2016-01-01

    Coastal ecosystems are especially vulnerable to global change; e.g., sea level rise (SLR) and extreme events. Over the past century, global change has resulted in salt-tolerant (halophytic) plant species migrating into upland salt-intolerant (glycophytic) dominated habitats along major rivers and large wetland expanses along the coast. While habitat transitions can be abrupt, modeling the specific drivers of abrupt change between halophytic and glycophytic vegetation is not a simple task. Correlative studies, which dominate the literature, are unlikely to establish ultimate causation for habitat shifts, and do not generate strong predictive capacity for coastal land managers and climate change adaptation exercises. In this paper, we first review possible drivers of ecotone shifts for coastal wetlands, our understanding of which has expanded rapidly in recent years. Any exogenous factor that increases growth or establishment of halophytic species will favor the ecotone boundary moving upslope. However, internal feedbacks between vegetation and the environment, through which vegetation modifies the local microhabitat (e.g., by changing salinity or surface elevation), can either help the system become resilient to future changes or strengthen ecotone migration. Following this idea, we review a succession of models that have provided progressively better insight into the relative importance of internal positive feedbacks versus external environmental factors. We end with developing a theoretical model to show that both abrupt environmental gradients and internal positive feedbacks can generate the sharp ecotonal boundaries that we commonly see, and we demonstrate that the responses to gradual global change (e.g., SLR) can be quite diverse.

  20. Ecosystem Services Modeling as a Tool for Defining Priority Areas for Conservation

    Science.gov (United States)

    Duarte, Gabriela Teixeira; Ribeiro, Milton Cezar; Paglia, Adriano Pereira

    2016-01-01

    Conservationists often have difficulty obtaining financial and social support for protected areas that do not demonstrate their benefits for society. Therefore, ecosystem services have gained importance in conservation science in the last decade, as these services provide further justification for appropriate management and conservation of natural systems. We used InVEST software and a set of GIS procedures to quantify, spatialize, and evaluate the overlap between ecosystem services (carbon stock and sediment retention) and a biodiversity proxy (habitat quality). In addition, we proposed a method that serves as an initial approach to a priority area selection process. The method considers the synergism between ecosystem services and biodiversity conservation. Our study region is the Iron Quadrangle, an important Brazilian mining province and a conservation priority area located at the interface of two biodiversity hotspots, the Cerrado and Atlantic Forest biomes. The resultant priority area for the maintenance of the highest values of ecosystem services and habitat quality was about 13% of the study area. Among those priority areas, 30% are already within established strictly protected areas, and 12% are in sustainable use protected areas. Following the transparent and highly replicable method we proposed in this study, conservation planners can better determine which areas fulfill multiple goals and can locate the trade-offs in the landscape. We also took a step towards the improvement of the habitat quality model with a topography parameter. In areas of very rugged topography, we have to consider geomorphometric barriers to anthropogenic impacts and to species movement, and we must think beyond linear distances. Moreover, we used a model that considers the tree mortality caused by edge effects in the estimation of carbon stock. We found low spatial congruence among the modeled services, mostly because of the pattern of sediment retention distribution.

  1. Defining New Therapeutics Using a More Immunocompetent Mouse Model of Antibody-Enhanced Dengue Virus Infection

    Science.gov (United States)

    Pinto, Amelia K.; Brien, James D.; Lam, Chia-Ying Kao; Johnson, Syd; Chiang, Cindy; Hiscott, John; Sarathy, Vanessa V.; Barrett, Alan D.; Shresta, Sujan

    2015-01-01

    ABSTRACT With over 3.5 billion people at risk and approximately 390 million human infections per year, dengue virus (DENV) disease strains health care resources worldwide. Previously, we and others established models for DENV pathogenesis in mice that completely lack subunits of the receptors (Ifnar and Ifngr) for type I and type II interferon (IFN) signaling; however, the utility of these models is limited by the pleotropic effect of these cytokines on innate and adaptive immune system development and function. Here, we demonstrate that the specific deletion of Ifnar expression on subsets of murine myeloid cells (LysM Cre+ Ifnarflox/flox [denoted as Ifnarf/f herein]) resulted in enhanced DENV replication in vivo. The administration of subneutralizing amounts of cross-reactive anti-DENV monoclonal antibodies to LysM Cre+ Ifnarf/f mice prior to infection with DENV serotype 2 or 3 resulted in antibody-dependent enhancement (ADE) of infection with many of the characteristics associated with severe DENV disease in humans, including plasma leakage, hypercytokinemia, liver injury, hemoconcentration, and thrombocytopenia. Notably, the pathogenesis of severe DENV-2 or DENV-3 infection in LysM Cre+ Ifnarf/f mice was blocked by pre- or postexposure administration of a bispecific dual-affinity retargeting molecule (DART) or an optimized RIG-I receptor agonist that stimulates innate immune responses. Our findings establish a more immunocompetent animal model of ADE of infection with multiple DENV serotypes in which disease is inhibited by treatment with broad-spectrum antibody derivatives or innate immune stimulatory agents. PMID:26374123

  2. Ecosystem Services Modeling as a Tool for Defining Priority Areas for Conservation.

    Science.gov (United States)

    Duarte, Gabriela Teixeira; Ribeiro, Milton Cezar; Paglia, Adriano Pereira

    2016-01-01

    Conservationists often have difficulty obtaining financial and social support for protected areas that do not demonstrate their benefits for society. Therefore, ecosystem services have gained importance in conservation science in the last decade, as these services provide further justification for appropriate management and conservation of natural systems. We used InVEST software and a set of GIS procedures to quantify, spatialize, and evaluate the overlap between ecosystem services (carbon stock and sediment retention) and a biodiversity proxy (habitat quality). In addition, we proposed a method that serves as an initial approach to a priority area selection process. The method considers the synergism between ecosystem services and biodiversity conservation. Our study region is the Iron Quadrangle, an important Brazilian mining province and a conservation priority area located at the interface of two biodiversity hotspots, the Cerrado and Atlantic Forest biomes. The resultant priority area for the maintenance of the highest values of ecosystem services and habitat quality was about 13% of the study area. Among those priority areas, 30% are already within established strictly protected areas, and 12% are in sustainable use protected areas. Following the transparent and highly replicable method we proposed in this study, conservation planners can better determine which areas fulfill multiple goals and can locate the trade-offs in the landscape. We also took a step towards the improvement of the habitat quality model with a topography parameter. In areas of very rugged topography, we have to consider geomorphometric barriers to anthropogenic impacts and to species movement, and we must think beyond linear distances. Moreover, we used a model that considers the tree mortality caused by edge effects in the estimation of carbon stock. We found low spatial congruence among the modeled services, mostly because of the pattern of sediment retention distribution.

  3. Ecosystem Services Modeling as a Tool for Defining Priority Areas for Conservation.

    Directory of Open Access Journals (Sweden)

    Gabriela Teixeira Duarte

    Full Text Available Conservationists often have difficulty obtaining financial and social support for protected areas that do not demonstrate their benefits for society. Therefore, ecosystem services have gained importance in conservation science in the last decade, as these services provide further justification for appropriate management and conservation of natural systems. We used InVEST software and a set of GIS procedures to quantify, spatialize, and evaluate the overlap between ecosystem services (carbon stock and sediment retention) and a biodiversity proxy (habitat quality). In addition, we proposed a method that serves as an initial approach to a priority area selection process. The method considers the synergism between ecosystem services and biodiversity conservation. Our study region is the Iron Quadrangle, an important Brazilian mining province and a conservation priority area located at the interface of two biodiversity hotspots, the Cerrado and Atlantic Forest biomes. The resultant priority area for the maintenance of the highest values of ecosystem services and habitat quality was about 13% of the study area. Among those priority areas, 30% are already within established strictly protected areas, and 12% are in sustainable use protected areas. Following the transparent and highly replicable method we proposed in this study, conservation planners can better determine which areas fulfill multiple goals and can locate the trade-offs in the landscape. We also took a step towards the improvement of the habitat quality model with a topography parameter. In areas of very rugged topography, we have to consider geomorphometric barriers to anthropogenic impacts and to species movement, and we must think beyond linear distances. Moreover, we used a model that considers the tree mortality caused by edge effects in the estimation of carbon stock. We found low spatial congruence among the modeled services, mostly because of the pattern of sediment retention distribution.

  4. Testing Three Species Distribution Modelling Strategies to Define Fish Assemblage Reference Conditions for Stream Bioassessment and Related Applications.

    Science.gov (United States)

    Rose, Peter M; Kennard, Mark J; Moffatt, David B; Sheldon, Fran; Butler, Gavin L

    2016-01-01

    Species distribution models are widely used for stream bioassessment, estimating changes in habitat suitability and identifying conservation priorities. We tested the accuracy of three modelling strategies (single species ensemble, multi-species response and community classification models) to predict fish assemblages at reference stream segments in coastal subtropical Australia. We aimed to evaluate each modelling strategy for consistency of predictor variable selection; determine which strategy is most suitable for stream bioassessment using fish indicators; and appraise which strategies best match other stream management applications. Five models (one single species ensemble, two multi-species response, and two community classification models) were calibrated using fish species presence-absence data from 103 reference sites. Models were evaluated for generality and transferability through space and time using four external reference site datasets. Elevation and catchment slope were consistently identified as key correlates of fish assemblage composition among models. The community classification models had high omission error rates and contributed fewer taxa to the 'expected' component of the taxonomic completeness (O/E50) index than the other strategies. This potentially decreases the model sensitivity for site impact assessment. The ensemble model accurately and precisely modelled O/E50 for the training data, but produced biased predictions for the external datasets. The multi-species response models afforded relatively high accuracy and precision coupled with low bias across external datasets and had lower taxa omission rates than the community classification models. They inherently included rare but predictable species while excluding species that were poorly modelled among all strategies. We suggest that the multi-species response modelling strategy is most suited to bioassessment using freshwater fish assemblages in our study area. At the species level…

  5. Define Project

    DEFF Research Database (Denmark)

    Munk-Madsen, Andreas

    2005-01-01

    "Project" is a key concept in IS management. The word is frequently used in textbooks and standards. Yet we seldom find a precise definition of the concept. This paper discusses how to define the concept of a project. The proposed definition covers both heavily formalized projects and informally...... organized, agile projects. Based on the proposed definition popular existing definitions are discussed....

  6. Characterizing climate predictability and model response variability from multiple initial condition and multi-model ensembles

    CERN Document Server

    Kumar, Devashish

    2016-01-01

    Climate models are thought to solve boundary value problems unlike numerical weather prediction, which is an initial value problem. However, climate internal variability (CIV) is thought to be relatively important at near-term (0-30 year) prediction horizons, especially at higher resolutions. The recent availability of significant numbers of multi-model (MME) and multi-initial condition (MICE) ensembles allows for the first time a direct sensitivity analysis of CIV versus model response variability (MRV). Understanding the relative agreement and variability of MME and MICE ensembles for multiple regions, resolutions, and projection horizons is critical for focusing model improvements, diagnostics, and prognosis, as well as impacts, adaptation, and vulnerability studies. Here we find that CIV (MICE agreement) is lower (higher) than MRV (MME agreement) across all spatial resolutions and projection time horizons for both temperature and precipitation. However, CIV dominates MRV over higher latitudes generally an...
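
    The comparison described here can be summarized by a simple decomposition of the total ensemble spread; the notation is illustrative rather than the authors'.

```latex
% Illustrative decomposition of ensemble spread at a given region,
% resolution, and projection horizon:
%   sigma^2_CIV = spread across initial conditions within a model (MICE),
%   sigma^2_MRV = spread across models (MME).
\sigma^{2}_{\mathrm{total}} \;\approx\; \sigma^{2}_{\mathrm{CIV}} + \sigma^{2}_{\mathrm{MRV}}
% The abstract reports sigma^2_CIV < sigma^2_MRV across resolutions and
% horizons, with CIV becoming dominant over higher latitudes.
```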

  7. Regression mixture models : Does modeling the covariance between independent variables and latent classes improve the results?

    NARCIS (Netherlands)

    Lamont, A.E.; Vermunt, J.K.; Van Horn, M.L.

    2016-01-01

    Regression mixture models are increasingly used as an exploratory approach to identify heterogeneity in the effects of a predictor on an outcome. In this simulation study, we tested the effects of violating an implicit assumption often made in these models; that is, independent variables in the

  8. Elastodynamic shape modeler: a tool for defining the deformation behavior of virtual tissues.

    Science.gov (United States)

    Radetzky, A; Nürnberger, A; Pretschner, D P

    2000-01-01

    A main goal of surgical simulators is the creation of virtual training environments for prospective surgeons. Thus, students can rehearse the various steps of surgical procedures on a computer system without any risk to the patient. One main condition for realistic training is the simulated interaction with virtual medical devices, such as endoscopic instruments. In particular, the virtual deformation and transection of tissues are important. For this application, a neuro-fuzzy model has been developed, which allows the description of the visual and haptic deformation behavior of the simulated tissue by means of expert knowledge in the form of medical terms. Pathologic conditions affecting the visual and haptic tissue response can be easily changed by a medical specialist without mathematical knowledge. By using the personal computer-based program Elastodynamic Shape Modeler, these conditions can be adjusted via a graphical user interface. With a force feedback device, which is similar to a real laparoscopic instrument, virtual deformations can be performed and the resulting haptic feedback can be felt. Thus, use of neuro-fuzzy technologies for the definition and calculation of virtual deformations seems applicable to the simulation of surgical interventions in virtual environments.

  9. Defining immunological impact and therapeutic benefit of mild heating in a murine model of arthritis.

    Directory of Open Access Journals (Sweden)

    Chen-Ting Lee

    Full Text Available Traditional treatments, including a variety of thermal therapies, have been known since ancient times to provide relief from rheumatoid arthritis (RA) symptoms. However, a general absence of information on how heating affects molecular or immunological targets relevant to RA has limited heat treatment (HT) to the category of treatments known as "alternative therapies". In this study, we evaluated the effectiveness of mild HT in a collagen-induced arthritis (CIA) model, which has been used in many previous studies to evaluate newer pharmacological approaches for the treatment of RA, and tested whether inflammatory immune activity was altered. We also compared the effect of HT to methotrexate, a well characterized pharmacological treatment for RA. CIA mice were treated with either a single HT for several hours or daily 30 minute HT. Disease progression and macrophage infiltration were evaluated. We found that both HT regimens significantly reduced arthritis disease severity and macrophage infiltration into inflamed joints. Surprisingly, HT was as efficient as methotrexate in controlling disease progression. At the molecular level, HT suppressed TNF-α while increasing production of IL-10. We also observed an induction of HSP70 and a reduction in both NF-κB and HIF-1α in inflamed tissues. Additionally, using activated macrophages in vitro, we found that HT reduced production of pro-inflammatory cytokines, an effect which is correlated to induction of HSF-1 and HSP70 and inhibition of NF-κB and STAT activation. Our findings demonstrate a significant therapeutic benefit of HT in controlling arthritis progression in a clinically relevant mouse model, with an efficacy similar to methotrexate. Mechanistically, HT targets highly relevant anti-inflammatory pathways which strongly support its increased study for use in clinical trials for RA.

  10. Separation of variables for integrable spin-boson models

    Energy Technology Data Exchange (ETDEWEB)

    Amico, Luigi, E-mail: lamico@dmfci.unict.i [CNR-IMM MATIS and Dipartimento di Metodologie Fisiche e Chimiche (DMFCI), Universita di Catania, viale A. Doria 6, I-95125 Catania (Italy); Frahm, Holger, E-mail: frahm@itp.uni-hannover.d [Institut fuer Theoretische Physik, Leibniz Universitaet Hannover, Appelstr. 2, D-30167 Hannover (Germany); Osterloh, Andreas, E-mail: andreas.osterloh@uni-due.d [Fakultaet fuer Physik, Universitaet Duisburg-Essen, Campus Duisburg, Lotharstr. 1, D-47048 Duisburg (Germany)] [Institut fuer Theoretische Physik, Leibniz Universitaet Hannover, Appelstr. 2, D-30167 Hannover (Germany); Wirth, Tobias, E-mail: tobias.wirth@itp.uni-hannover.d [Institut fuer Theoretische Physik, Leibniz Universitaet Hannover, Appelstr. 2, D-30167 Hannover (Germany)

    2010-11-11

    We formulate the functional Bethe ansatz for bosonic (infinite dimensional) representations of the Yang-Baxter algebra. The main deviation from the standard approach consists in a half infinite Sklyanin lattice made of the eigenvalues of the operator zeros of the Bethe annihilation operator. By a separation of variables, functional TQ-equations are obtained for this half infinite lattice. They provide valuable information about the spectrum of a given Hamiltonian model. We apply this procedure to integrable spin-boson models subject to both twisted and open boundary conditions. In the case of general twisted and certain open boundary conditions polynomial solutions to these TQ-equations are found and we compute the spectrum of both the full transfer matrix and its quasi-classical limit. For generic open boundaries we present a two-parameter family of Bethe equations, derived from TQ-equations that are compatible with polynomial solutions for Q. A connection of these parameters to the boundary fields is still missing.
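
    For readers unfamiliar with the method, functional TQ-equations of the kind referred to above take the generic Baxter form below (schematic; the paper's equations for the half-infinite Sklyanin lattice are more involved).

```latex
% Generic Baxter TQ-relation: t = transfer-matrix eigenvalue, Q = Baxter's
% Q-function, a and d model-dependent coefficients, eta the shift parameter.
t(\lambda)\,Q(\lambda) \;=\; a(\lambda)\,Q(\lambda-\eta) \;+\; d(\lambda)\,Q(\lambda+\eta)
% Polynomial ansaetze for Q turn this functional relation into Bethe
% equations for the zeros of Q, which encode the spectrum.
```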

  11. Modeling and performance analysis for composite network–compute service provisioning in software-defined cloud environments

    Directory of Open Access Journals (Sweden)

    Qiang Duan

    2015-08-01

    Full Text Available The crucial role of networking in Cloud computing calls for a holistic vision of both networking and computing systems that leads to composite network–compute service provisioning. Software-Defined Networking (SDN) is a fundamental advancement in networking that enables network programmability. SDN and software-defined compute/storage systems form a Software-Defined Cloud Environment (SDCE) that may greatly facilitate composite network–compute service provisioning to Cloud users. Therefore, networking and computing systems need to be modeled and analyzed as composite service provisioning systems in order to obtain a thorough understanding of service performance in SDCEs. In this paper, a novel approach for modeling composite network–compute service capabilities and a technique for evaluating composite network–compute service performance are developed. The analytic method proposed in this paper is general and agnostic to service implementation technologies; thus it is applicable to a wide variety of network–compute services in SDCEs. The results obtained in this paper provide useful guidelines for federated control and management of networking and computing resources to achieve Cloud service performance guarantees.

  12. Error model identification of inertial navigation platform based on errors-in-variables model

    Institute of Scientific and Technical Information of China (English)

    Liu Ming; Liu Yu; Su Baoku

    2009-01-01

    Because the real input acceleration cannot be obtained during the error model identification of an inertial navigation platform, both the input and output data contain noise. In this case, the conventional regression model and the least squares (LS) method will result in bias. Based on the models of inertial navigation platform error and observation error, the errors-in-variables (EV) model and the total least squares (TLS) method are proposed to identify the error model of the inertial navigation platform. The estimation precision is improved, and the result is better than that of the conventional regression-model-based LS method. The simulation results illustrate the effectiveness of the proposed method.
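    As a concrete illustration of the EV/TLS idea, the classic SVD-based total least squares estimator accounts for noise in both the regressor matrix and the observations. The sketch below is the generic textbook algorithm, assumed here for illustration, not the authors' platform-specific implementation:

```python
import numpy as np

def total_least_squares(A, b):
    """Estimate x minimizing ||[E  r]||_F subject to (A + E) x = b + r.

    Classic SVD-based TLS solution; a generic sketch of the method,
    not the inertial-platform-specific code described in the abstract.
    """
    n = A.shape[1]
    # SVD of the augmented data matrix [A | b]
    _, _, Vt = np.linalg.svd(np.hstack([A, b.reshape(-1, 1)]))
    v = Vt[-1]              # right singular vector of the smallest singular value
    return -v[:n] / v[n]    # TLS estimate

# Example: both regressors and observations carry noise (the EV situation)
rng = np.random.default_rng(0)
x_true = np.array([0.5, -1.2])
A = rng.normal(size=(200, 2))
b = A @ x_true
A_noisy = A + 0.05 * rng.normal(size=A.shape)   # errors in the variables
b_noisy = b + 0.05 * rng.normal(size=b.shape)   # errors in the observations
print(total_least_squares(A_noisy, b_noisy))    # closer to x_true than plain LS
```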

  13. PHT3D-UZF: A reactive transport model for variably-saturated porous media

    Science.gov (United States)

    Wu, Ming Zhi; Post, Vincent E. A.; Salmon, S. Ursula; Morway, Eric; Prommer, H.

    2016-01-01

    A modified version of the MODFLOW/MT3DMS-based reactive transport model PHT3D was developed to extend current reactive transport capabilities to the variably-saturated component of the subsurface system and incorporate diffusive reactive transport of gaseous species. Referred to as PHT3D-UZF, this code incorporates flux terms calculated by MODFLOW's unsaturated-zone flow (UZF1) package. A volume-averaged approach similar to the method used in UZF-MT3DMS was adopted. The PHREEQC-based computation of chemical processes within PHT3D-UZF in combination with the analytical solution method of UZF1 allows for comprehensive reactive transport investigations (i.e., biogeochemical transformations) that jointly involve saturated and unsaturated zone processes. Intended for regional-scale applications, UZF1 simulates downward-only flux within the unsaturated zone. The model was tested by comparing simulation results with those of existing numerical models. The comparison was performed for several benchmark problems that cover a range of important hydrological and reactive transport processes. A 2D simulation scenario was defined to illustrate the geochemical evolution following dewatering in a sandy acid sulfate soil environment. Other potential applications include the simulation of biogeochemical processes in variably-saturated systems that track the transport and fate of agricultural pollutants, nutrients, natural and xenobiotic organic compounds and micropollutants such as pharmaceuticals, as well as the evolution of isotope patterns.

  14. Chemical Atmosphere-Snow-Sea Ice Interactions: defining future research in the field, lab and modeling

    Science.gov (United States)

    Frey, Markus

    2015-04-01

    The air-snow-sea ice system plays an important role in the global cycling of nitrogen, halogens, trace metals and carbon, including greenhouse gases (e.g., the CO2 air-sea flux), and therefore also influences climate. Its impact on atmospheric composition is illustrated, for example, by dramatic ozone and mercury depletion events which occur within or close to the sea ice zone (SIZ), mostly during polar spring, and are catalysed by halogens released from SIZ ice, snow or aerosol. Recent field campaigns in the high Arctic (e.g., BROMEX, OASIS) and Antarctic (Weddell Sea cruises) highlight the importance of snow on sea ice as a chemical reservoir and reactor, even during polar night. However, many processes, participating chemical species and their interactions are still poorly understood and/or lack any representation in current models. Furthermore, recent lab studies provide a lot of detail on the chemical environment and processes but need to be integrated much better to improve our understanding of a rapidly changing natural environment. During a 3-day workshop held in Cambridge, UK, in October 2013, more than 60 scientists from 15 countries who work on the physics, chemistry or biology of the atmosphere-snow-sea ice system discussed the research status and the challenges which need to be addressed in the near future. In this presentation I will give a summary of the main research questions identified during this workshop, as well as ways forward to answer them through a community-based interdisciplinary approach.

  15. Definable deduction relation

    Institute of Scientific and Technical Information of China (English)

    张玉平

    1999-01-01

    The nonmonotonic deduction relation in default reasoning is defined in fixed-point style, which has the many-extension property that classical logic does not possess. These two kinds of deduction both have the boolean definability property; that is, their extensions or deductive closures can be defined by boolean formulas. A generalized form of the fixed-point method is employed to define a class of deduction relations, all of which have the above property. Theorems on definability and atomless boolean algebras in model theory are essential in dealing with this assertion.

  16. Yang-Lee zeros of the two- and three-state Potts model defined on phi3 Feynman diagrams.

    Science.gov (United States)

    de Albuquerque, Luiz C; Dalmazi, D

    2003-06-01

    We present both analytical and numerical results on the position of partition function zeros on the complex magnetic field plane of the q=2 state (Ising) and the q=3 state Potts model defined on phi(3) Feynman diagrams (thin random graphs). Our analytic results are based on the ideas of destructive interference of coexisting phases and low temperature expansions. For the case of the Ising model, an argument based on a symmetry of the saddle point equations leads us to a nonperturbative proof that the Yang-Lee zeros are located on the unit circle, although no circle theorem is known in this case of random graphs. For the q=3 state Potts model, our perturbative results indicate that the Yang-Lee zeros lie outside the unit circle. Both analytic results are confirmed by finite lattice numerical calculations.

  17. Modeling Variable Phanerozoic Oxygen Effects on Physiology and Evolution.

    Science.gov (United States)

    Graham, Jeffrey B; Jew, Corey J; Wegner, Nicholas C

    2016-01-01

    Geochemical approximation of Earth's atmospheric O2 level over geologic time prompts hypotheses linking hyper- and hypoxic atmospheres to transformative events in the evolutionary history of the biosphere. Such correlations, however, remain problematic due to the relative imprecision of the timing and scope of oxygen change and the looseness of its overlay on the chronology of key biotic events such as radiations, evolutionary innovation, and extinctions. There are nevertheless general attributions of atmospheric oxygen concentration to key evolutionary changes among groups having a primary dependence upon oxygen diffusion for respiration. These include the occurrence of Devonian hypoxia and the accentuation of air-breathing dependence leading to the origin of vertebrate terrestriality, the occurrence of Carboniferous-Permian hyperoxia and the major radiation of early tetrapods and the origins of insect flight and gigantism, and the Mid-Late Permian oxygen decline accompanying the Permian extinction. However, because of variability between and error within different atmospheric models, there is little basis for postulating correlations outside the Late Paleozoic. Other problems arising in the correlation of paleo-oxygen with significant biological events include tendencies to ignore the role of blood pigment affinity modulation in maintaining homeostasis, the slow rates of O2 change that would have allowed for adaptation, and significant respiratory and circulatory modifications that can and do occur without changes in atmospheric oxygen. The purpose of this paper is thus to refocus thinking about basic questions central to the biological and physiological implications of O2 change over geological time.

  18. VAM2D: Variably saturated analysis model in two dimensions

    Energy Technology Data Exchange (ETDEWEB)

    Huyakorn, P.S.; Kool, J.B.; Wu, Y.S. (HydroGeoLogic, Inc., Herndon, VA (United States))

    1991-10-01

    This report documents a two-dimensional finite element model, VAM2D, developed to simulate water flow and solute transport in variably saturated porous media. Flow and transport simulations can be handled concurrently or sequentially. The formulation of the governing equations and the numerical procedures used in the code are presented. The flow equation is approximated using the Galerkin finite element method. Nonlinear soil moisture characteristics and atmospheric boundary conditions (e.g., infiltration, evaporation and seepage faces) are treated using Picard and Newton-Raphson iterations. Hysteresis effects and anisotropy in the unsaturated hydraulic conductivity can be taken into account if needed. The contaminant transport simulation can account for advection, hydrodynamic dispersion, linear equilibrium sorption, and first-order degradation. Transport of a single component or a multi-component decay chain can be handled. The transport equation is approximated using an upstream weighted residual method. Several test problems are presented to verify the code and demonstrate its utility. These problems range from simple one-dimensional to complex two-dimensional and axisymmetric problems. This document has been produced as a user's manual. It contains detailed information on the code structure, along with instructions for input data preparation and sample input and printed output for selected test problems. Also included are instructions for job set-up and restarting procedures. 44 refs., 54 figs., 24 tabs.
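    For context, the variably saturated flow equation that codes of this class solve is typically the mixed form of Richards' equation. The statement below is the standard textbook form, not a quotation from the VAM2D manual:

```latex
% Mixed form of Richards' equation for variably saturated flow (standard form).
% \theta: volumetric moisture content; \psi: pressure head; K(\psi): unsaturated
% hydraulic conductivity; z: elevation head; S: source/sink term.
\[
  \frac{\partial \theta(\psi)}{\partial t}
  = \nabla \cdot \bigl[ K(\psi)\, \nabla(\psi + z) \bigr] - S .
\]
```

    The nonlinearity of θ(ψ) and K(ψ) is what makes the Picard and Newton-Raphson iterations mentioned above necessary.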

  19. Variable Neighborhood Simplex Search Methods for Global Optimization Models

    Directory of Open Access Journals (Sweden)

    Pongchanun Luangpaiboon

    2012-01-01

    Full Text Available Problem statement: Many optimization problems of practical interest encountered in various fields of the chemical, engineering and management sciences are computationally intractable. A practical approach to solving such problems is therefore to employ approximation algorithms that can find nearly optimal solutions within a reasonable amount of computational time. Approach: In this study, hybrid methods combining Variable Neighborhood Search (VNS) and the simplex family of methods are proposed to deal with global optimization problems for noisy continuous functions, including constrained models. Basically, the simplex methods offer a search scheme that requires no gradient information, whereas VNS has better searching ability through a systematic change of neighborhood of the current solution within a local search. Results: The VNS-modified simplex method has better searching ability for optimization problems with noise. It also outperforms on average on the characteristics of intensity and diversity during the design-point moving stage for constrained optimization. Conclusion: The adaptive hybrid versions proved to obtain significantly better results than the conventional methods. The amount of computational effort required for successful optimization is very sensitive to the rate of noise decrease of the process yields. Under circumstances of constrained optimization with noise gradually increasing during optimization, the most preferred approach is the VNS-modified simplex method.
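    The hybrid idea (shake in a growing neighborhood, then polish with a downhill simplex) is easy to sketch. The following is a minimal illustration of that scheme, not the authors' implementation; step sizes and iteration counts are arbitrary placeholders:

```python
import numpy as np
from scipy.optimize import minimize

def vns_simplex(f, x0, k_max=5, iters=10, seed=0):
    """Variable Neighborhood Search with Nelder-Mead local refinement.

    A minimal sketch of the VNS/simplex hybrid described above.
    """
    rng = np.random.default_rng(seed)
    best_x = np.asarray(x0, dtype=float)
    best_f = f(best_x)
    for _ in range(iters):
        k = 1
        while k <= k_max:
            # Shake: random point in the k-th neighborhood of the incumbent
            candidate = best_x + rng.normal(scale=0.1 * k, size=best_x.size)
            # Local search: downhill simplex (no gradient needed) from the shaken point
            res = minimize(f, candidate, method="Nelder-Mead")
            if res.fun < best_f:          # improvement: move and restart neighborhoods
                best_x, best_f, k = res.x, res.fun, 1
            else:                          # no improvement: try a larger neighborhood
                k += 1
    return best_x, best_f

# Example on a noisy quadratic (noise mimics the "noisy continuous functions" setting)
noisy = lambda x: np.sum((x - 1.0) ** 2) + 0.01 * np.random.normal()
print(vns_simplex(noisy, x0=[5.0, -3.0]))
```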

  20. Effects of temporal variability on HBV model calibration

    Directory of Open Access Journals (Sweden)

    Steven Reinaldo Rusli

    2015-10-01

    Full Text Available This study aimed to investigate the effect of temporal variability on the optimization of the Hydrologiska Byråns Vattenbalansavdelning (HBV) model, as well as the calibration performance using manual optimization and average parameter values. By applying the HBV model to the Jiangwan Catchment, whose geological features include many cracks and gaps, simulations under various schemes were developed: short, medium-length, and long temporal calibrations. The results show that, with long temporal calibration, the objective function values of the Nash-Sutcliffe efficiency coefficient (NSE), relative error (RE), root mean square error (RMSE), and high flow ratio generally deliver a preferable simulation. Although NSE and RMSE are relatively stable with different temporal scales, significant improvements to RE and the high flow ratio are seen with longer temporal calibration. It is also noted that use of average parameter values does not lead to better simulation results compared with manual optimization. With medium-length temporal calibration, manual optimization delivers the best simulation results, with NSE, RE, RMSE, and the high flow ratio being 0.5636, 0.1223, 0.9788, and 0.8547, respectively; calibration using average parameter values delivers NSE, RE, RMSE, and high flow ratio values of 0.4811, 0.4676, 1.0210, and 2.7840, respectively. Similar behavior is found with long temporal calibration, where NSE, RE, RMSE, and the high flow ratio using manual optimization are 0.5253, −0.0692, 1.0580, and 0.9800, respectively, as compared with 0.4903, 0.2248, 1.0962, and 0.5479, respectively, using average parameter values. This study shows that selection of longer periods of temporal calibration in hydrological analysis delivers better simulation in general for water balance analysis.
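    The first three objective functions above have standard definitions; a minimal sketch follows (function names are ours, and the study's exact high-flow-ratio formula is not given in the abstract, so it is not reproduced):

```python
import numpy as np

def nse(obs, sim):
    """Nash-Sutcliffe efficiency: 1 is a perfect fit, values below 0 are worse than the mean."""
    obs, sim = np.asarray(obs), np.asarray(sim)
    return 1.0 - np.sum((obs - sim) ** 2) / np.sum((obs - obs.mean()) ** 2)

def relative_error(obs, sim):
    """Relative volume error of the simulated series against the observed one."""
    obs, sim = np.asarray(obs), np.asarray(sim)
    return (sim.sum() - obs.sum()) / obs.sum()

def rmse(obs, sim):
    """Root mean square error between observed and simulated series."""
    obs, sim = np.asarray(obs), np.asarray(sim)
    return np.sqrt(np.mean((obs - sim) ** 2))
```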

  1. Mathematical Modelling of High-speed Ribbon Systems: a Case Study of Edge-defined Film-fed Growth

    Science.gov (United States)

    Ettouney, H. M.; Brown, R. A.

    1984-01-01

    Finite element numerical analysis was used to solve the coupled problem of heat transfer and capillarity to describe low- and high-speed silicon sheet growth in meniscus-defined systems. Heat transfer models which neglect the details of convective heat flow in the melt are used to establish operating limits for an EFG system in terms of the growth rate, die temperature and the static head acting on the meniscus. It is shown that convective heat transfer in the melt becomes important only at high growth rates or for materials with low thermal conductivities.

  2. Modeling variability and uncertainty associated with inhaled weapons-grade PuO2.

    Science.gov (United States)

    Aden, James; Scott, Bobby R

    2003-06-01

    The work presented relates to developing a stochastic version of the ICRP 66 respiratory tract deposition model and applying the stochastic model to characterize the variability/uncertainty associated with inhaled PuO2 for a hypothetical population of nuclear workers engaged in light work-related exercise. The parameter uncertainty/variability distributions used are essentially the same as those in the FORTRAN-based stochastic deposition model of Bolch et al. known as LUDUC (LUng Dose Uncertainty Code). Based on Crystal Ball software, this stochastic deposition model includes particle polydispersity, which Bolch et al. did not discuss. This paper first compares model-simulated regional deposition probability distributions with deterministic results based on LUDEP (LUng Dose Evaluation Program) software, which implements the ICRP 66 deterministic deposition model. For these comparisons, a particle density of 3 g cm(-3) (for hypothetical radioactive particles) was used. The range of possible depositions generated by LUDUC and the Crystal Ball program revealed LUDEP's limitations. Even though LUDEP tends to use parameters that represent average parameter values for adult males, it overestimates deposition in the lower regions of the lung for most of the population. The Crystal Ball program was then used to generate radioactivity intake distributions for single and multiple PuO2 particle intakes by a hypothetical population of nuclear workers for the stochastic intake (STI) paradigm. These distributions of radioactivity intake are evaluated for the five primary regions of the respiratory tract as defined in ICRP Publication 66. The results reveal that when a particle has been deposited, the radioactivity is likely to be low if it is in the lower regions (< 10 Bq for the bb and AI regions), but it may be quite large in the upper regions (as much as 600 Bq for the ET1 and ET2 regions), and the distributions for radioactivity become less and less skewed to the right, as …
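    The abstract does not give the sampled parameter set, so the sketch below only illustrates the Monte Carlo propagation pattern that such a stochastic deposition model follows. The distributions and the deposition_fraction stand-in are hypothetical placeholders, not ICRP 66 formulas:

```python
import numpy as np

rng = np.random.default_rng(42)
n_trials = 10_000

# Hypothetical input distributions standing in for LUDUC-style sampled parameters;
# the real model samples many more ICRP 66 quantities (airway dimensions,
# particle size distribution parameters, breathing pattern, ...).
amad_um = rng.lognormal(mean=np.log(5.0), sigma=0.4, size=n_trials)   # aerodynamic diameter, um
breathing_rate = rng.normal(1.5, 0.2, size=n_trials)                  # m^3/h, light work

def deposition_fraction(amad, vent):
    """Toy stand-in for the ICRP 66 regional deposition calculation."""
    return np.clip(0.1 + 0.02 * amad - 0.03 * (vent - 1.5), 0.0, 1.0)

# Propagate the sampled inputs through the deposition calculation
dep = deposition_fraction(amad_um, breathing_rate)
print(np.percentile(dep, [5, 50, 95]))   # variability/uncertainty band on deposition
```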

  3. Marginal Maximum Likelihood Estimation of a Latent Variable Model with Interaction

    Science.gov (United States)

    Cudeck, Robert; Harring, Jeffrey R.; du Toit, Stephen H. C.

    2009-01-01

    There has been considerable interest in nonlinear latent variable models specifying interaction between latent variables. Although it seems to be only slightly more complex than linear regression without the interaction, the model that includes a product of latent variables cannot be estimated by maximum likelihood assuming normality.…

  4. Stratified flows with variable density: mathematical modelling and numerical challenges.

    Science.gov (United States)

    Murillo, Javier; Navas-Montilla, Adrian

    2017-04-01

    Stratified flows appear in a wide variety of fundamental problems in the hydrological and geophysical sciences. They range from hyperconcentrated floods carrying sediment and causing collapse, landslides and debris flows, to suspended material in turbidity currents where turbulence is a key process. In stratified flows, variable horizontal density is also present. Depending on the case, density varies according to the volumetric concentration of different components or species that can represent transported or suspended materials or soluble substances. Multilayer approaches based on the shallow water equations provide suitable models but are not free from difficulties when moving to the numerical resolution of the governing equations. Considering the variety of temporal and spatial scales, transfer of mass and energy among layers may strongly differ from one case to another. As a consequence, in order to provide accurate solutions, very high order methods of proved quality are demanded. Under these complex scenarios it is necessary to verify that the numerical solution provides the expected order of accuracy and also converges to the physically based solution, which is not an easy task. To this purpose, this work focuses on the use of energy-balanced augmented solvers, in particular the Augmented Roe Flux ADER scheme. References: J. Murillo, P. García-Navarro, Wave Riemann description of friction terms in unsteady shallow flows: Application to water and mud/debris floods. J. Comput. Phys. 231 (2012) 1963-2001. J. Murillo, B. Latorre, P. García-Navarro. A Riemann solver for unsteady computation of 2D shallow flows with variable density. J. Comput. Phys. 231 (2012) 4775-4807. A. Navas-Montilla, J. Murillo, Energy balanced numerical schemes with very high order. The Augmented Roe Flux ADER scheme. Application to the shallow water equations, J. Comput. Phys. 290 (2015) 188-218. A. Navas-Montilla, J. Murillo, Asymptotically and exactly energy balanced augmented flux …

  5. Estimating net present value variability for deterministic models

    NARCIS (Netherlands)

    van Groenendaal, W.J.H.

    1995-01-01

    For decision makers the variability in the net present value (NPV) of an investment project is an indication of the project's risk. So-called risk analysis is one way to estimate this variability. However, risk analysis requires knowledge about the stochastic character of the inputs. For large, long

  6. Application of soft computing based hybrid models in hydrological variables modeling: a comprehensive review

    Science.gov (United States)

    Fahimi, Farzad; Yaseen, Zaher Mundher; El-shafie, Ahmed

    2017-05-01

    Since the middle of the twentieth century, artificial intelligence (AI) models have been used widely in engineering and science problems. Water resource variable modeling and prediction are among the most challenging issues in water engineering. The artificial neural network (ANN) is a common approach used to tackle this problem by means of viable and efficient models. Numerous ANN models have been successfully developed to achieve more accurate results. In the current review, different ANN models in water resource applications and hydrological variable predictions are reviewed and outlined. In addition, recent hybrid models and their structures, input preprocessing, and optimization techniques are discussed, and the results are compared with similar previous studies. Moreover, to achieve a comprehensive view of the literature, many articles that applied ANN models together with other techniques are included. Consequently, the coupling procedure, model evaluation, and performance comparison of hybrid models with conventional ANN models are assessed, as well as the taxonomy and structure of hybrid ANN models. Finally, current challenges and recommendations for future research are indicated and new hybrid approaches are proposed.

  8. Multi-asset Black-Scholes model as a variable second class constrained dynamical system

    Science.gov (United States)

    Bustamante, M.; Contreras, M.

    2016-09-01

    In this paper, we study the multi-asset Black-Scholes model from a structural point of view. For this, we interpret the multi-asset Black-Scholes equation as a multidimensional Schrödinger one-particle equation. The analysis of the classical Hamiltonian and Lagrangian mechanics associated with this quantum model implies that, in this system, the canonical momenta cannot always be written in terms of the velocities. This feature is a typical characteristic of the constrained systems that appear in high-energy physics. To study this model in the proper form, one must apply Dirac's method for constrained systems. The results of Dirac's analysis indicate that in the correlation-parameter space of the multi-asset model there exists a surface (called the Kummer surface ΣK, on which the determinant of the correlation matrix is null) where the number of constraints can vary. We study in detail the cases with N = 2 and N = 3 assets. For these cases, we calculate the propagator of the multi-asset Black-Scholes equation and show that inside the Kummer surface ΣK the propagator is well defined, but outside ΣK the propagator diverges and the option price is not well defined. On ΣK the propagator is obtained as a constrained path integral, and its form depends on which region of the Kummer surface the correlation parameters lie in. Thus, the multi-asset Black-Scholes model is an example of a variable constrained dynamical system, and this is a new and beautiful property that had not been previously observed.
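    For reference, the multi-asset Black-Scholes equation under discussion has the standard form below. This is the generic textbook statement; the paper's Schrödinger reinterpretation follows from a change of variables not reproduced here:

```latex
% Multi-asset Black-Scholes equation for an option price \Pi(S_1,\dots,S_N,t).
% \sigma_i: volatilities; \rho_{ij}: correlation matrix; r: risk-free rate.
% The locus \det(\rho_{ij}) = 0 is the Kummer-type surface discussed above.
\[
  \frac{\partial \Pi}{\partial t}
  + \frac{1}{2} \sum_{i,j=1}^{N} \rho_{ij}\,\sigma_i \sigma_j S_i S_j\,
      \frac{\partial^2 \Pi}{\partial S_i\, \partial S_j}
  + r \sum_{i=1}^{N} S_i \frac{\partial \Pi}{\partial S_i}
  - r\,\Pi = 0 .
\]
```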

  9. Dynamical Opacity-Sampling Models of Mira Variables. I: Modelling Description and Analysis of Approximations

    CERN Document Server

    Ireland, M J; Wood, P R

    2008-01-01

    We describe the Cool Opacity-sampling Dynamic EXtended (CODEX) atmosphere models of Mira variable stars, and examine in detail the physical and numerical approximations that go into the model creation. The CODEX atmospheric models are obtained by computing the temperature and the chemical and radiative states of the atmospheric layers, assuming gas pressure and velocity profiles from Mira pulsation models, which extend from near the H-burning shell to the outer layers of the atmosphere. Although the code uses the approximation of Local Thermodynamic Equilibrium (LTE) and a grey approximation in the dynamical atmosphere code, many key observable quantities, such as infrared diameters and low-resolution spectra, are predicted robustly in spite of these approximations. We show that in visible light, radiation from Mira variables is dominated by fluorescence scattering processes, and that the LTE approximation likely under-predicts visible-band fluxes by a factor of two.

  10. Method of Running Sines: Modeling Variability in Long-Period Variables

    CERN Document Server

    Andronov, Ivan L

    2013-01-01

    We review one of the complementary methods for time series analysis: the method of "running sines". "Crash tests" of the method include signals with a large period variation and with a large trend. The method is most effective for "nearly periodic" signals, which exhibit a "wavy shape" with a "cycle length" varying within a few dozen per cent (i.e., oscillations of low coherence). This is a typical case for brightness variations of long-period pulsating variables and resembles QPO (Quasi-Periodic Oscillations) and TPO (Transient Periodic Oscillations) in interacting binary stars: cataclysmic variables, symbiotic variables, low-mass X-ray binaries, etc. The general theory of "running approximations" was described by Andronov (1997A&AS..125..207A), one realization of which is the method of "running sines". The method is related to Morlet-type wavelet analysis improved for irregularly spaced data (Andronov, 1998KFNT...14..490A, 1999sss..conf...57A), as well as to the classical "running mean" (= "moving average"). The …
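    The core step of the method is a local least-squares sine fit within a sliding window. The following is a minimal sketch of that step for irregularly spaced data; the published method additionally weights points and adapts the filter half-width:

```python
import numpy as np

def running_sine(t, y, t0, window, period):
    """Least-squares fit of  m + a*cos(2*pi*t/P) + b*sin(2*pi*t/P)
    to the points inside [t0 - window/2, t0 + window/2].

    A minimal sketch of the running-sine idea, not the published code.
    """
    t, y = np.asarray(t), np.asarray(y)
    mask = np.abs(t - t0) <= window / 2
    phase = 2 * np.pi * t[mask] / period
    # Design matrix: constant term plus one harmonic at the assumed period
    X = np.column_stack([np.ones(mask.sum()), np.cos(phase), np.sin(phase)])
    coef, *_ = np.linalg.lstsq(X, y[mask], rcond=None)
    m, a, b = coef
    return m, np.hypot(a, b)   # local mean brightness and local semi-amplitude
```

    Sliding t0 across the data yields the slowly varying mean and amplitude curves that characterize low-coherence oscillations.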

  11. A meta-model analysis of a finite element simulation for defining poroelastic properties of intervertebral discs.

    Science.gov (United States)

    Nikkhoo, Mohammad; Hsu, Yu-Chun; Haghpanahi, Mohammad; Parnianpour, Mohamad; Wang, Jaw-Lin

    2013-06-01

    Finite element analysis is an effective tool to evaluate the material properties of living tissue. For an interactive optimization procedure, finite element analysis usually needs many simulations to reach a reasonable solution. Meta-model analysis of finite element simulation can be used to reduce the computation for a structure with complex geometry or a material with composite constitutive equations. The intervertebral disc is a complex, heterogeneous, and hydrated porous structure. A poroelastic finite element model can be used to observe fluid transfer, pressure deviation, and other properties within the disc. Defining reasonable poroelastic material properties of the anulus fibrosus and nucleus pulposus is critical for the quality of the simulation. We developed a material property updating protocol, which is basically a fitting algorithm consisting of finite element simulations and a quadratic response surface regression. This protocol was used to find the material properties, such as the hydraulic permeability, elastic modulus, and Poisson's ratio, of intact and degenerated porcine discs. The results showed that the in vitro disc experimental deformations were well fitted with a limited number of finite element simulations and a quadratic response surface regression. The comparison of material properties of intact and degenerated discs showed that the hydraulic permeability significantly decreased but Poisson's ratio significantly increased for the degenerated discs. This study shows that the developed protocol is efficient and effective in defining material properties of a complex structure such as the intervertebral disc.
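    The surrogate-fitting step of such a protocol can be sketched with scikit-learn. All data values below are hypothetical placeholders standing in for finite element runs, not the study's simulations:

```python
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.preprocessing import PolynomialFeatures
from sklearn.pipeline import make_pipeline

# Hypothetical stand-in data: each row is one FE simulation with inputs
# (permeability k in units of 1e-15 m^4/(N s), modulus E in MPa, Poisson ratio nu)
# and the resulting simulated disc deformation as the response.
X = np.array([[1.0,  2.5, 0.10],
              [5.0,  5.0, 0.20],
              [10.0, 7.5, 0.30],
              [20.0, 5.0, 0.15],
              [50.0, 10.0, 0.40],
              [2.0,  4.0, 0.25]])
y = np.array([0.42, 0.31, 0.25, 0.28, 0.18, 0.36])   # illustrative responses only

# Quadratic response surface: a cheap surrogate for the FE model that a
# fitting loop can query repeatedly to match an experimental deformation.
surface = make_pipeline(PolynomialFeatures(degree=2), LinearRegression())
surface.fit(X, y)
print(surface.predict([[3.0, 4.0, 0.15]]))
```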

  12. Defining social inclusion of people with intellectual and developmental disabilities: an ecological model of social networks and community participation.

    Science.gov (United States)

    Simplican, Stacy Clifford; Leader, Geraldine; Kosciulek, John; Leahy, Michael

    2015-03-01

    Social inclusion is an important goal for people with intellectual and developmental disabilities, families, service providers, and policymakers; however, the concept of social inclusion remains unclear, largely due to multiple and conflicting definitions in research and policy. We define social inclusion as the interaction between two major life domains: interpersonal relationships and community participation. We then propose an ecological model of social inclusion that includes individual, interpersonal, organizational, community, and socio-political factors. We identify four areas of research that our ecological model of social inclusion can move forward: (1) organizational implementation of social inclusion; (2) social inclusion of people with intellectual and developmental disabilities living with their families; (3) social inclusion of people along a broader spectrum of disability; and (4) the potential role of self-advocacy organizations in promoting social inclusion.

  13. VARIABLE SELECTION BY PSEUDO WAVELETS IN HETEROSCEDASTIC REGRESSION MODELS INVOLVING TIME SERIES

    Institute of Scientific and Technical Information of China (English)

    2006-01-01

    A simple but efficient method is proposed for selecting variables in heteroscedastic regression models. It is shown that the pseudo empirical wavelet coefficients corresponding to the significant explanatory variables in the regression models are clearly larger than those of the nonsignificant ones; on this basis, a procedure is developed to select variables in regression models. The coefficients of the models are also estimated. All estimators are proved to be consistent.

  14. Tropical Intraseasonal Variability in 14 IPCC AR4 Climate Models Part I: Convective Signals

    Energy Technology Data Exchange (ETDEWEB)

    Lin, J; Kiladis, G N; Mapes, B E; Weickmann, K M; Sperber, K R; Lin, W; Wheeler, M; Schubert, S D; Genio, A D; Donner, L J; Emori, S; Gueremy, J; Hourdin, F; Rasch, P J; Roeckner, E; Scinocca, J F

    2005-05-06

    This study evaluates the tropical intraseasonal variability, especially the fidelity of Madden-Julian Oscillation (MJO) simulations, in 14 coupled general circulation models (GCMs) participating in the Intergovernmental Panel on Climate Change (IPCC) Fourth Assessment Report (AR4). Eight years of daily precipitation from each model's 20th century climate simulation are analyzed and compared with daily satellite-retrieved precipitation. Space-time spectral analysis is used to obtain the variance and phase speed of dominant convectively coupled equatorial waves, including the MJO, Kelvin, equatorial Rossby (ER), mixed Rossby-gravity (MRG), and eastward inertio-gravity (EIG) and westward inertio-gravity (WIG) waves. The variance and propagation of the MJO, defined as the eastward wavenumber 1-6, 30-70 day mode, are examined in detail. The results show that current state-of-the-art GCMs still have significant problems and display a wide range of skill in simulating the tropical intraseasonal variability. The total intraseasonal (2-128 day) variance of precipitation is too weak in most of the models. About half of the models have signals of convectively coupled equatorial waves, with Kelvin and MRG-EIG waves especially prominent. However, the variances are generally too weak for all wave modes except the EIG wave, and the phase speeds are generally too fast, being scaled to excessively deep equivalent depths. An interesting result is that this scaling is consistent within a given model across modes, in that both the symmetric and antisymmetric modes scale similarly to a certain equivalent depth. Excessively deep equivalent depths suggest that these models may not have a large enough reduction in their "effective static stability" due to diabatic heating. The MJO variance approaches the observed value in only two of the 14 models, but is less than half of the observed value in the other 12 models. The ratio between the eastward MJO variance …

  15. Approaches for modeling within subject variability in pharmacometric count data analysis: dynamic inter-occasion variability and stochastic differential equations.

    Science.gov (United States)

    Deng, Chenhui; Plan, Elodie L; Karlsson, Mats O

    2016-06-01

    Parameter variation in pharmacometric analysis studies can be characterized as within subject parameter variability (WSV) in pharmacometric models. WSV has previously been successfully modeled using inter-occasion variability (IOV), but also stochastic differential equations (SDEs). In this study, two approaches, dynamic inter-occasion variability (dIOV) and adapted stochastic differential equations, were proposed to investigate WSV in pharmacometric count data analysis. These approaches were applied to published count models for seizure counts and Likert pain scores. Both approaches improved the model fits significantly. In addition, stochastic simulation and estimation were used to explore further the capability of the two approaches to diagnose and improve models where existing WSV is not recognized. The results of simulations confirmed the gain in introducing WSV as dIOV and SDEs when parameters vary randomly over time. Further, the approaches were also informative as diagnostics of model misspecification, when parameters changed systematically over time but this was not recognized in the structural model. The proposed approaches in this study offer strategies to characterize WSV and are not restricted to count data.

  16. Discrete Element Modeling of Asphalt Concrete Cracking Using a User-defined Three-dimensional Micromechanical Approach

    Institute of Scientific and Technical Information of China (English)

    CHEN Jun; PAN Tongyan; HUANG Xiaoming

    2011-01-01

    We established a user-defined micromechanical model using the discrete element method (DEM) to investigate the cracking behavior of asphalt concrete (AC). Using the "Fish" language provided in the Particle Flow Code in 3 Dimensions (PFC3D), the air voids and mastics in asphalt concrete were realistically built as two distinct phases. With the irregular shape of individual aggregate particles modeled using a clump of spheres of different sizes, the three-dimensional (3D) discrete element model was able to account for aggregate gradation and fraction. A laboratory uniaxial complex modulus test and an indirect tensile strength test were performed to obtain input material parameters for the numerical simulation. A set of indirect tensile tests was simulated to study the cracking behavior of AC at two levels of temperature, i.e., -10 °C and 15 °C. The predicted results of the numerical simulation were compared with laboratory experimental measurements. Results show that the 3D DEM model is able to accurately predict the fracture pattern of different asphalt mixtures. Based on the DEM model, the effects of air void content and aggregate volumetric fraction on the cracking behavior of asphalt concrete were evaluated.

  17. A Model of the Dynamic Error as a Measurement Result of Instruments Defining the Parameters of Moving Objects

    Science.gov (United States)

    Dichev, D.; Koev, H.; Bakalova, T.; Louda, P.

    2014-08-01

    The present paper considers a new model for the formation of the inertial component of dynamic error. It is very effective in the analysis and synthesis of measuring instruments that are positioned on moving objects and measure their movement parameters. The block diagram developed within this paper is used as a basis for defining the mathematical model. The block diagram is based on the set-theoretic description of the measuring system, its input and output quantities, and the process of dynamic error formation. The model reflects the specific nature of the formation of the inertial component of dynamic error and follows the logical interrelation and sequence of the physical processes that form it. The effectiveness, usefulness and advantages of the proposed model are rooted in the wide range of possibilities it provides for the analysis and synthesis of such measuring instruments, the formulation of algorithms and optimization criteria, and the development of new intelligent measuring systems with improved accuracy characteristics in dynamic mode.

  18. Selecting candidate predictor variables for the modelling of post ...

    African Journals Online (AJOL)

    more formal methods such as focus group discussions, questionnaires and … statistics, methodology, epidemiology, computer engineering and infectious diseases. … ed on their lack of knowledge of wealth scoring tools. Variables exhibiting …

  19. How many general and inflammatory variables need to be fulfilled when defining sepsis due to the 2003 SCCM/ESICM/ACCP/ATS/SIS definitions in critically ill surgical patients: a retrospective observational study

    Directory of Open Access Journals (Sweden)

    Nass Maximilian

    2010-12-01

    Full Text Available Abstract Background: It has never been specified how many of the extended general and inflammatory variables of the 2003 SCCM/ESICM/ACCP/ATS/SIS consensus sepsis definitions are mandatory to define sepsis. Objectives: To find out how many of these variables are needed to identify almost all patients with septic shock. Methods: Retrospective observational single-centre study in postoperative/posttraumatic patients admitted to a university adult ICU. The survey looked at 1355 admissions, from 01/2007 to 12/2008, that were monitored daily, computer-assisted, for the eight general and inflammatory variables: temperature, heart rate, respiratory rate, significant edema, positive fluid balance, hyperglycemia, white blood cell count and C-reactive protein. A total of 507 patients with infections were classified based on the first day with the highest diagnostic category of sepsis during their stay, using a cut-off of 1/8 variables, compared with the corresponding classification based on a cut-off of 2, 3, 4, 5, 6, 7 or 8/8 variables. Results: Applying cut-offs of 1/8 up to 8/8 variables resulted in a decreasing detection rate of cases with septic shock, i.e., from 106, 105, 103, 93, 65, 21, 3 down to 0. The mortality rate increased up to a cut-off of 6/8 variables, i.e., 31% (33/106), 31% (33/105), 31% (32/103), 32% (30/93), 38% (25/65), 43% (9/21), 33% (1/3) and 0% (0/0). Conclusions: Frequencies and mortality rates of diagnostic categories of sepsis differ depending on the cut-off for general and inflammatory variables. A cut-off of 3/8 variables is needed to identify almost all patients with septic shock who may benefit from optimal treatment.
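    The cut-off logic itself is simple to state in code. A schematic sketch follows; variable names are ours, and the thresholds that decide abnormality come from the 2003 consensus definitions, which are not reproduced here:

```python
# The eight general/inflammatory variables tracked daily in the study
VARIABLES = ["temperature", "heart_rate", "respiratory_rate", "edema",
             "fluid_balance", "hyperglycemia", "wbc", "crp"]

def meets_sepsis_criteria(abnormal_flags: dict, cutoff: int) -> bool:
    """True if at least `cutoff` of the 8 variables are abnormal on a given day.

    `abnormal_flags` maps variable name -> bool (already thresholded against
    the consensus definitions); missing variables count as normal.
    """
    return sum(abnormal_flags.get(v, False) for v in VARIABLES) >= cutoff

day = {"temperature": True, "heart_rate": True, "crp": True, "wbc": False}
print(meets_sepsis_criteria(day, cutoff=3))   # True: 3 of 8 variables abnormal
```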

  20. Exact solutions to a nonlinear dispersive model with variable coefficients

    Energy Technology Data Exchange (ETDEWEB)

    Yin Jun [Department of Applied Mathematics, Southwestern University of Finance and Economics, Chengdu 610074 (China); Lai Shaoyong [Department of Applied Mathematics, Southwestern University of Finance and Economics, Chengdu 610074 (China)], E-mail: laishaoy@swufe.edu.cn; Qing Yin [Department of Applied Mathematics, Southwestern University of Finance and Economics, Chengdu 610074 (China)

    2009-05-15

    A mathematical technique based on an auxiliary differential equation and the symbolic computation system Maple is employed to investigate a prototypical and nonlinear K(n, n) equation with variable coefficients. The exact solutions to the equation are constructed analytically under various circumstances. It is shown that the variable coefficients and the exponent appearing in the equation determine the quantitative change in the physical structures of the solutions.

  1. Modeling inter-subject variability in fMRI activation location: A Bayesian hierarchical spatial model

    Science.gov (United States)

    Xu, Lei; Johnson, Timothy D.; Nichols, Thomas E.; Nee, Derek E.

    2010-01-01

    The aim of this work is to develop a spatial model for multi-subject fMRI data. There has been extensive work on univariate modeling of each voxel for single- and multi-subject data, some work on spatial modeling of single-subject data, and some recent work on spatial modeling of multi-subject data. However, there has been no work on spatial models that explicitly account for inter-subject variability in activation locations. In this work, we use the idea of activation centers and model the inter-subject variability in activation locations directly. Our model is specified in a Bayesian hierarchical framework which allows us to draw inferences at all levels: the population level, the individual level and the voxel level. We use Gaussian mixtures for the probability that an individual has a particular activation. This helps answer an important question which is not addressed by any of the previous methods: what proportion of subjects had significant activity in a given region? Our approach incorporates the unknown number of mixture components into the model as a parameter whose posterior distribution is estimated by reversible jump Markov chain Monte Carlo. We demonstrate our method with an fMRI study of resolving proactive interference and show dramatically better precision of localization with our method relative to the standard mass-univariate method. Although we are motivated by fMRI data, this model could easily be modified to handle other types of imaging data. PMID:19210732

  2. Modeling and designing of variable-period and variable-pole-number undulator

    Directory of Open Access Journals (Sweden)

    I. Davidyuk

    2016-02-01

    Full Text Available The concept of the permanent-magnet variable-period undulator (VPU) was proposed several years ago and has found few implementations so far. VPUs have some advantages compared with conventional undulators, e.g., a wider range of radiation wavelength tuning and the option to increase the number of poles for shorter periods. Both these advantages will be realized in the VPU now under development at Budker INP. In this paper, we present the results of 2D and 3D magnetic field simulations and discuss some design features of this VPU.

  3. Defining excellence.

    Science.gov (United States)

    Mehl, B

    1993-05-01

    Excellence in the pharmacy profession, particularly pharmacy management, is defined. Several factors have a significant effect on the ability to reach a given level of excellence. The first is the economic and political climate in which pharmacists practice. Stricter controls, reduced resources, and the velocity of change all necessitate nurturing of values and a work ethic to maintain excellence. Excellence must be measured by the services provided with regard to the resources available; thus, the ability to achieve excellence is a true test of leadership and innovation. Excellence is also time dependent, and today's innovation becomes tomorrow's standard. Programs that raise the level of patient care, not those that aggrandize the profession, are the most important. In addition, basic services must be practiced at a level of excellence. Quality assessment is a way to improve care and bring medical treatment to a higher plane of excellence. For such assessment to be effective and not punitive, the philosophy of the program must be known, and the goal must be clear. Excellence in practice is dependent on factors such as political and social norms, standards of practice, available resources; perceptions, time, the motivation to progress to a higher level, and the continuous innovation required to reshape the profession to meet the needs of society.

  4. Exploring Factor Model Parameters across Continuous Variables with Local Structural Equation Models.

    Science.gov (United States)

    Hildebrandt, Andrea; Lüdtke, Oliver; Robitzsch, Alexander; Sommer, Christopher; Wilhelm, Oliver

    2016-01-01

    Using an empirical data set, we investigated variation in factor model parameters across a continuous moderator variable and demonstrated three modeling approaches: multiple-group mean and covariance structure (MGMCS) analyses, local structural equation modeling (LSEM), and moderated factor analysis (MFA). We focused on how to study variation in factor model parameters as a function of continuous variables such as age, socioeconomic status, ability levels, acculturation, and so forth. Specifically, we formalized the LSEM approach in detail as compared with previous work and investigated its statistical properties with an analytical derivation and a simulation study. We also provide code for the easy implementation of LSEM. The illustration of methods was based on cross-sectional cognitive ability data from individuals ranging in age from 4 to 23 years. Variations in factor loadings across age were examined with regard to the age differentiation hypothesis. LSEM and MFA converged with respect to the conclusions. When there was a broad age range within groups and varying relations between the indicator variables and the common factor across age, MGMCS produced distorted parameter estimates. We discuss the pros of LSEM compared with MFA and recommend using the two tools as complementary approaches for investigating moderation in factor model parameters.
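    The step that distinguishes LSEM from grouped analysis is a kernel weighting of observations around a focal moderator value. The sketch below shows only that weighting step, with an illustrative bandwidth and data; the SEM re-estimation itself would be done with a dedicated SEM package:

```python
import numpy as np

def lsem_weights(moderator, focal_value, bandwidth):
    """Gaussian kernel weights for local structural equation modeling:
    observations near the focal moderator value get high weight, so the
    factor model can be re-estimated smoothly at each focal point.
    """
    z = (np.asarray(moderator, dtype=float) - focal_value) / bandwidth
    return np.exp(-0.5 * z ** 2)

# Illustrative ages; re-running the weighted SEM over a grid of focal ages
# traces how loadings change continuously, as in the age-differentiation analysis.
ages = np.array([4, 7, 10, 13, 16, 19, 23], dtype=float)
print(lsem_weights(ages, focal_value=10.0, bandwidth=2.0).round(3))
```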

  5. Modelling sensorial and nutritional changes to better define quality and shelf life of fresh-cut melons

    Directory of Open Access Journals (Sweden)

    Maria Luisa Amodio

    2013-06-01

    Full Text Available The shelf life of fresh-cut produce is mostly determined by evaluating the external appearance, since this is the major factor affecting consumer choice at the moment of purchase. The aim of this study was to investigate the degradation kinetics of the major quality attributes in order to better define the shelf life of fresh-cut melons. Melon pieces were stored for eight days in air at 5°C. Sensorial and physical attributes, including colour, external appearance, aroma, translucency and firmness, and chemical constituents, such as soluble solids, fructose, vitamin C, and phenolic content, along with antioxidant activity, were monitored. Attributes showing significant changes over time were used to test conventional kinetic models of zero and first order, and Weibullian models. The Weibullian model was the most accurate in describing changes in appearance score, translucency, aroma, firmness and vitamin C (with a regression coefficient always higher than 0.956), while the other parameters could not be predicted with such accuracy by any of the tested models. Vitamin C showed the lowest kinetic rate among the model parameters, even though at the limit of marketability (appearance score 3), estimated at five days, a loss of 37% of its initial content was observed compared to the fresh-cut product, indicating a much lower nutritional value. After five days, the aroma score was already 2.2, suggesting that this quality attribute, together with the vitamin C content, should be taken into account when assessing the shelf life of fresh-cut melons. In addition, logistic models were used to fit the percentage of rejected samples on the basis of non-marketability and non-edibility (appearance score <3 and <2, respectively). For both parameters, correlations higher than 0.999 were found at P<0.0001; for each mean score this model helps to understand the distribution of the samples among marketable, non-marketable, and non-edible products.
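    A Weibullian decay fit of this kind is easy to sketch with curve_fit, assuming the common parameterization Q(t) = Q0·exp(-(k·t)^n); the paper's exact form may differ, and the scores below are illustrative, not the study's data:

```python
import numpy as np
from scipy.optimize import curve_fit

def weibull_decay(t, q0, k, n):
    """Weibullian quality-decay model Q(t) = Q0 * exp(-(k*t)**n)."""
    return q0 * np.exp(-(k * t) ** n)

# Illustrative appearance scores over eight days at 5 degC (placeholder data)
days = np.array([0, 1, 2, 3, 5, 8], dtype=float)
score = np.array([5.0, 4.7, 4.2, 3.7, 3.0, 2.1])

popt, _ = curve_fit(weibull_decay, days, score, p0=[5.0, 0.15, 1.0])
q0, k, n = popt
# Shelf life = time at which the fitted curve crosses the marketability limit (score 3):
# solving q0*exp(-(k*t)**n) = 3 for t gives t = (ln(q0/3))**(1/n) / k.
shelf_life = (-np.log(3.0 / q0)) ** (1.0 / n) / k
print(f"fitted n = {n:.2f}, shelf life ~ {shelf_life:.1f} days")
```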

  6. What is culture in «cultural economy»? Defining culture to create measurable models in cultural economy

    Directory of Open Access Journals (Sweden)

    Aníbal Monasterio Astobiza

    2017-07-01

    Full Text Available The idea of culture is somewhat vague and ambiguous for the formal goals of economics. The aim of this paper is to define the notion of culture more precisely, so as to help build economic explanations based on culture and thereby measure its impact on the activities and beliefs associated with it. According to the canonical evolutionary definition, culture is any kind of ritualised behaviour that becomes meaningful for a group, remains more or less constant, and is transmitted down through the generations. Economic institutions are founded, implicitly or explicitly, on a worldview of how humans function; culture is an essential part of understanding ourselves as humans, making it necessary to describe correctly what we understand by culture. In this paper we review the literature on evolutionary anthropology and psychology dealing with the concept of culture, and warn that economic modelling ignores intangible benefits of culture, rendering economics unable to measure certain cultural items in the digital consumer society.

  7. Forecasting Macroeconomic Variables using Neural Network Models and Three Automated Model Selection Techniques

    DEFF Research Database (Denmark)

    Kock, Anders Bredahl; Teräsvirta, Timo

    In this paper we consider the forecasting performance of a well-defined class of flexible models, the so-called single hidden-layer feedforward neural network models. A major aim of our study is to find out whether they, due to their flexibility, are as useful tools in economic forecasting as some previous studies have indicated. When forecasting with neural network models one faces several problems, all of which influence the accuracy of the forecasts. First, neural networks are often hard to estimate due to their highly nonlinear structure. In fact, their parameters are not even globally … on the linearisation idea: the Marginal Bridge Estimator and Autometrics. Second, one must decide whether forecasting should be carried out recursively or directly. Comparisons of these two methods exist for linear models, and here these comparisons are extended to neural networks. Finally, a nonlinear model …

  8. The Model Checking Problem for Propositional Intuitionistic Logic with One Variable is AC1-Complete

    CERN Document Server

    Mundhenk, Martin; Weiss, Felix

    2010-01-01

    We investigate the complexity of the model checking problem for propositional intuitionistic logic. We show that the model checking problem for intuitionistic logic with one variable is complete for logspace-uniform AC1, and for intuitionistic logic with two variables it is P-complete. For superintuitionistic logics with one variable, we obtain NC1-completeness for the model checking problem and for the tautology problem.

  9. A Variable Refrigerant Flow Heat Pump Computer Model in EnergyPlus

    Energy Technology Data Exchange (ETDEWEB)

    Raustad, Richard A. [Florida Solar Energy Center

    2013-01-01

    This paper provides an overview of the variable refrigerant flow heat pump computer model included with the Department of Energy's EnergyPlus™ whole-building energy simulation software. The mathematical model for a variable refrigerant flow heat pump operating in cooling or heating mode, and a detailed model for the variable refrigerant flow direct-expansion (DX) cooling coil, are described in detail.

  10. Generalized Models: An Application to Identify Environmental Variables That Significantly Affect the Abundance of Three Tree Species

    Directory of Open Access Journals (Sweden)

    Pablo Antúnez

    2017-02-01

    Full Text Available In defining the environmental preferences of plant species, statistical models are among the essential tools of modern ecology. However, conventional linear models require compliance with certain parametric assumptions, and if these requirements are not met, the applicability of the model is seriously limited. In this study, the effectiveness of linear and nonlinear generalized models was examined for identifying the unitary effect of the principal environmental variables on the abundance of three tree species growing in the natural temperate forests of Oaxaca, Mexico. The covariates that showed a significant effect on the distribution of the tree species were the maximum and minimum temperatures and the precipitation during specific periods. The results suggest that the generalized models, particularly the smoothed models, were able to detect increases or decreases in abundance against changes in an environmental variable; they also revealed the inflection of the regression. In addition, these models allow partial characterization of the realized niche of a given species according to some specific variables, regardless of the type of relationship.

  11. On Fitting Nonlinear Latent Curve Models to Multiple Variables Measured Longitudinally

    Science.gov (United States)

    Blozis, Shelley A.

    2007-01-01

    This article shows how nonlinear latent curve models may be fitted for simultaneous analysis of multiple variables measured longitudinally using Mx statistical software. Longitudinal studies often involve observation of several variables across time with interest in the associations between change characteristics of different variables measured…

  12. Empirical Likelihood Based Variable Selection for Varying Coefficient Partially Linear Models with Censored Data

    Institute of Scientific and Technical Information of China (English)

    Peixin ZHAO

    2013-01-01

    In this paper, we consider variable selection for the parametric components of varying coefficient partially linear models with censored data. By constructing a penalized auxiliary vector ingeniously, we propose an empirical likelihood based variable selection procedure, and show that it is consistent and satisfies the sparsity. The simulation studies show that the proposed variable selection method is workable.

  13. A quantitative method for defining high-arched palate using the Tcof1(+/-) mutant mouse as a model.

    Science.gov (United States)

    Conley, Zachary R; Hague, Molly; Kurosaka, Hiroshi; Dixon, Jill; Dixon, Michael J; Trainor, Paul A

    2016-07-15

    The palate functions as the roof of the mouth in mammals, separating the oral and nasal cavities. Its complex embryonic development and assembly pose unique susceptibilities to intrinsic and extrinsic disruptions. Such disruptions may cause failure of the developing palatal shelves to fuse along the midline, resulting in a cleft. In other cases the palate may fuse at an arch, resulting in a vaulted oral cavity, termed high-arched palate. There are many models available for studying the pathogenesis of cleft palate but a relative paucity for high-arched palate. One condition exhibiting either cleft palate or high-arched palate is Treacher Collins syndrome, a congenital disorder characterized by numerous craniofacial anomalies. We quantitatively analyzed palatal perturbations in the Tcof1(+/-) mouse model of Treacher Collins syndrome, which phenocopies the condition in humans. We discovered that 46% of Tcof1(+/-) mutant embryos and newborn pups exhibit either soft clefts or full clefts. In addition, 17% of Tcof1(+/-) mutants were found to exhibit high-arched palate, defined as two sigma above the corresponding wild-type population mean for height- and angle-based arch measurements. Furthermore, palatal shelf length and shelf width were decreased in all Tcof1(+/-) mutant embryos and pups compared to controls. Interestingly, these phenotypes were subsequently ameliorated through genetic inhibition of p53. The results of our study therefore provide a simple, reproducible and quantitative method for investigating models of high-arched palate.
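    The two-sigma classification criterion is straightforward to express in code. A minimal sketch with illustrative numbers (not the study's measurements):

```python
import numpy as np

def high_arched(measure_mutant, measures_wildtype):
    """Flag a palate measurement as high-arched when it exceeds the
    wild-type mean by more than two standard deviations (the criterion above).
    """
    wt = np.asarray(measures_wildtype, dtype=float)
    return measure_mutant > wt.mean() + 2.0 * wt.std(ddof=1)

# Illustrative arch-height values in arbitrary units (placeholder data)
wild_type = [1.02, 0.98, 1.05, 1.00, 0.97, 1.01]
print(high_arched(1.31, wild_type))   # True -> classified as high-arched
```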

  14. Field and Model Study to Define Baseline Conditions of Beached Oil Tar Balls along Florida’s First Coast

    Directory of Open Access Journals (Sweden)

    Peter Bacopoulos

    2014-03-01

    Full Text Available Anecdotal data are currently the best data available to describe baseline conditions of beached oil tar balls on Florida’s First Coast beaches. This study combines field methods and numerical modeling to define a data-driven knowledge base of oil tar ball baseline conditions. Outcomes from the field study include an established methodology for field data collection and laboratory testing of beached oil tar balls, spatial maps of collected samples and analysis of the data as to transport/wash-up trends. Archives of the electronic data, including GPS locations and other informational tags, and collected samples are presented, as are the physical and chemical analyses of the collected samples. The thrust of the physical and chemical analyses is to differentiate the collected samples into highly suspect oil tar balls versus false/non-oil tar ball samples. The numerical modeling involves two-dimensional hydrodynamic simulations of astronomic tides. Results from the numerical modeling include velocity residuals that show ebb-dominated residual currents exiting the inlet via an offshore, counter-rotating dual-eddy system. The tidally derived residual currents are used as one explanation for the observed transport trends. The study concludes that the port activity in the St. Johns River is not majorly contributing to the baseline conditions of oil tar ball wash-up on Florida’s First Coast beaches.

  15. Modelling and Multi-Variable Control of Refrigeration Systems

    DEFF Research Database (Denmark)

    Larsen, Lars Finn Slot; Holm, J. R.

    2003-01-01

    In this paper a dynamic model of a 1:1 refrigeration system is presented. The main modelling effort has been concentrated on a lumped parameter model of a shell and tube condenser. The model has shown good resemblance with experimental data from a test rig, regarding both the static and the dynamic behaviour.

  16. Data for comparison of climate envelope models developed using expert-selected variables versus statistical selection

    Science.gov (United States)

    Brandt, Laura A.; Benscoter, Allison; Harvey, Rebecca G.; Speroterra, Carolina; Bucklin, David N.; Romanach, Stephanie; Watling, James I.; Mazzotti, Frank J.

    2017-01-01

    The data we used for this study include species occurrence data (n=15 species), climate data and predictions, an expert opinion questionnaire, and species masks that represented the model domain for each species. For this data release, we include the results of the expert opinion questionnaire and the species model domains (or masks). We developed an expert opinion questionnaire to gather information on expert opinion regarding the importance of climate variables in determining a species' geographic range. The species masks, or model domains, were defined separately for each species using a variation of the “target-group” approach (Phillips et al. 2009), where the domain was determined using convex polygons including occurrence data for at least three phylogenetically related and similar species (Watling et al. 2012). The species occurrence data, climate data, and climate predictions are freely available online, and therefore not included in this data release. The species occurrence data were obtained from the online database Global Biodiversity Information Facility (GBIF; http://www.gbif.org/), and from scientific literature (Watling et al. 2011). Climate data were obtained from the WorldClim database (Hijmans et al. 2005) and climate predictions were obtained from the Center for Ocean-Atmosphere Prediction Studies (COAPS) at Florida State University (https://floridaclimateinstitute.org/resources/data-sets/regional-downscaling). See metadata for references.
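
    The convex-polygon masking step lends itself to a small geometric sketch. The following is a minimal illustration, assuming hypothetical occurrence coordinates (not the study's data), of building a convex-polygon model domain and testing whether a location falls inside it.

        import numpy as np
        from scipy.spatial import ConvexHull, Delaunay

        # Hypothetical (lon, lat) occurrences pooled from the target species and
        # related species; the coordinates are illustrative only.
        occurrences = np.array([[-81.2, 26.5], [-80.7, 25.9], [-82.0, 27.3],
                                [-80.3, 26.8], [-81.5, 25.6], [-82.4, 26.1]])

        hull = ConvexHull(occurrences)              # convex polygon over all points
        tri = Delaunay(occurrences[hull.vertices])  # supports point-in-polygon tests

        def in_model_domain(lon: float, lat: float) -> bool:
            """True if a location falls inside the convex-polygon species mask."""
            return bool(tri.find_simplex(np.array([lon, lat])) >= 0)

        print(in_model_domain(-81.0, 26.4))   # interior point, expected True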

  17. A Method for Evaluation of Model-Generated Vertical Profiles of Meteorological Variables

    Science.gov (United States)

    2016-03-01

    The method evaluates model-generated vertical profiles of meteorological variables against observations, layer by layer. A summary sheet reports the standard statistics, i.e., mean value (M), mean absolute error (MAE) or mean absolute difference (MAD), standard deviation (SD), and root mean square error (RMSE) or root mean square difference (RMSD), for all variables for each of the 4 methods. The evaluation involves computation of an integrated or weighted mean of some variable X for a layer defined by height (Z) or by the natural log of pressure, ln(P).
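
    A minimal sketch of the comparison statistics and of an ln(P)-weighted layer mean, under the assumption (suggested by the fragmentary text above) that layer means are weighted by ln-pressure thickness; the function names are illustrative.

        import numpy as np

        def profile_stats(model: np.ndarray, obs: np.ndarray) -> dict:
            """Standard statistics comparing modelled and observed profiles."""
            d = model - obs
            return {"M": model.mean(),                  # mean value
                    "MAE": np.abs(d).mean(),            # mean absolute error/difference
                    "SD": d.std(ddof=1),                # standard deviation of differences
                    "RMSE": np.sqrt((d ** 2).mean())}   # root mean square error/difference

        def layer_mean_lnp(x: np.ndarray, p: np.ndarray) -> float:
            """Mean of variable X over a layer, weighted by ln(P) thickness."""
            mid = 0.5 * (x[1:] + x[:-1])                # layer mid-point values
            w = np.abs(np.diff(np.log(p)))              # ln(P) layer thicknesses
            return float(np.sum(mid * w) / np.sum(w))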

  18. Model Criticism of Bayesian Networks with Latent Variables.

    Science.gov (United States)

    Williamson, David M.; Mislevy, Robert J.; Almond, Russell G.

    This study investigated statistical methods for identifying errors in Bayesian networks (BN) with latent variables, as found in intelligent cognitive assessments. BN, commonly used in artificial intelligence systems, are promising mechanisms for scoring constructed-response examinations. The success of an intelligent assessment or tutoring system…

  19. Quantifying strain variability in modeling growth of Listeria monocytogenes

    NARCIS (Netherlands)

    Aryani, D.; Besten, den H.M.W.; Hazeleger, W.C.; Zwietering, M.H.

    2015-01-01

    Prediction of microbial growth kinetics can differ from the actual behavior of the target microorganisms. In the present study, the impact of strain variability on maximum specific growth rate (µmax) (h⁻¹) was quantified using twenty Listeria monocytogenes strains. The µmax was determined as a function of…

  20. A Latent-Variable Causal Model of Faculty Reputational Ratings.

    Science.gov (United States)

    King, Suzanne; Wolfle, Lee M.

    A reanalysis was conducted of Saunier's research (1985) on sources of variation in the National Research Council (NRC) reputational ratings of university faculty. Saunier conducted a stepwise regression analysis using 12 predictor variables. Due to problems with multicollinearity and because of the atheoretical nature of stepwise regression,…

  1. Oracle Efficient Variable Selection in Random and Fixed Effects Panel Data Models

    DEFF Research Database (Denmark)

    Kock, Anders Bredahl

    We prove that the Marginal Bridge estimator can asymptotically correctly distinguish between relevant and irrelevant explanatory variables, and that the asymptotic distribution of the estimators of the coefficients of the relevant variables is the same as if only these had been included in the model, i.e. as if an oracle had revealed the true model prior to estimation. We do this without restricting the dependence between covariates and without assuming sub-Gaussianity of the error terms, thereby generalizing the results of Huang et al. (2008). This holds even in the case of more explanatory variables than observations; furthermore, the number of relevant variables is allowed to be larger than the sample size.

  2. Modelling the global tropospheric ozone budget: exploring the variability in current models

    Directory of Open Access Journals (Sweden)

    O. Wild

    2007-02-01

    Full Text Available What are the largest uncertainties in modelling ozone in the troposphere, and how do they affect the calculated ozone budget? Published chemistry-transport model studies of tropospheric ozone differ significantly in their conclusions regarding the importance of the key processes controlling the ozone budget: influx from the stratosphere, chemical processing and surface deposition. This study surveys ozone budgets from previous studies and demonstrates that about two thirds of the increase in ozone production seen between early assessments and more recent model intercomparisons can be accounted for by increased precursor emissions. Model studies using recent estimates of emissions compare better with ozonesonde measurements than studies using older data, and the tropospheric burden of ozone is closer to that derived here from measurement climatologies, 335±10 Tg. However, differences between individual model studies remain large and cannot be explained by surface precursor emissions alone; cross-tropopause transport, wet and dry deposition, humidity, and lightning make large contributions to the differences seen between models. The importance of these processes is examined here using a chemistry-transport model to investigate the sensitivity of the calculated ozone budget to different assumptions about emissions, physical processes, meteorology and model resolution. The budget is particularly sensitive to the magnitude and location of lightning NOx emissions, which remain poorly constrained; the 3–8 TgN/yr range in recent model studies may account for a 10% difference in tropospheric ozone burden and a 1.4 year difference in CH4 lifetime. Differences in humidity and dry deposition account for some of the variability in ozone abundance and loss seen in previous studies, with smaller contributions from wet deposition and stratospheric influx. At coarse model resolutions stratospheric influx is systematically overestimated

  3. A Novel Information-Theoretic Approach for Variable Clustering and Predictive Modeling Using Dirichlet Process Mixtures

    OpenAIRE

    Yun Chen; Hui Yang

    2016-01-01

    In the era of big data, there are increasing interests on clustering variables for the minimization of data redundancy and the maximization of variable relevancy. Existing clustering methods, however, depend on nontrivial assumptions about the data structure. Note that nonlinear interdependence among variables poses significant challenges on the traditional framework of predictive modeling. In the present work, we reformulate the problem of variable clustering from an information theoretic pe...

  4. Defining optimal DEM resolutions and point densities for modelling hydrologically sensitive areas in agricultural catchments dominated by microtopography

    Science.gov (United States)

    Thomas, I. A.; Jordan, P.; Shine, O.; Fenton, O.; Mellander, P.-E.; Dunlop, P.; Murphy, P. N. C.

    2017-02-01

    Defining critical source areas (CSAs) of diffuse pollution in agricultural catchments depends upon the accurate delineation of hydrologically sensitive areas (HSAs) at highest risk of generating surface runoff pathways. In topographically complex landscapes, this delineation is constrained by digital elevation model (DEM) resolution and the influence of microtopographic features. To address this, optimal DEM resolutions and point densities for spatially modelling HSAs were investigated, for onward use in delineating CSAs. The surface runoff framework was modelled using the Topographic Wetness Index (TWI) and maps were derived from 0.25 m LiDAR DEMs (40 bare-earth points m-2), resampled 1 m and 2 m LiDAR DEMs, and a radar generated 5 m DEM. Furthermore, the resampled 1 m and 2 m LiDAR DEMs were regenerated with reduced bare-earth point densities (5, 2, 1, 0.5, 0.25 and 0.125 points m-2) to analyse effects on elevation accuracy and important microtopographic features. Results were compared to surface runoff field observations in two 10 km2 agricultural catchments for evaluation. Analysis showed that the accuracy of modelled HSAs using different thresholds (5%, 10% and 15% of the catchment area with the highest TWI values) was much higher using LiDAR data compared to the 5 m DEM (70-100% and 10-84%, respectively). This was attributed to the DEM capturing microtopographic features such as hedgerow banks, roads, tramlines and open agricultural drains, which acted as topographic barriers or channels that diverted runoff away from the hillslope scale flow direction. Furthermore, the identification of 'breakthrough' and 'delivery' points along runoff pathways where runoff and mobilised pollutants could be potentially transported between fields or delivered to the drainage channel network was much higher using LiDAR data compared to the 5 m DEM (75-100% and 0-100%, respectively). Optimal DEM resolutions of 1-2 m were identified for modelling HSAs, which balanced the need
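
    The Topographic Wetness Index underlying the HSA maps has a standard closed form, TWI = ln(a / tan β), where a is the specific catchment area and β the local slope. A minimal sketch follows, assuming the flow-accumulation and slope grids have already been derived from the DEM (variable names are illustrative).

        import numpy as np

        def topographic_wetness_index(flow_acc_cells: np.ndarray,
                                      slope_deg: np.ndarray,
                                      cell_size: float) -> np.ndarray:
            """TWI = ln(a / tan(beta)): a is upslope contributing area per unit
            contour length, beta the local slope angle."""
            a = flow_acc_cells * cell_size       # (cells * cell^2) / cell width
            tan_b = np.tan(np.radians(slope_deg))
            tan_b = np.maximum(tan_b, 1e-6)      # avoid division by zero on flats
            return np.log(a / tan_b)

    Resampling the DEM changes both inputs at once, which is one way to read why the delineated HSAs differ so strongly between the 0.25-2 m LiDAR products and the 5 m DEM.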

  5. Signature-tagging of a bacterial isolate demonstrates phenotypic variability of the progeny in vivo in the absence of defined mutations

    Science.gov (United States)

    Whitby, Paul W.; VanWagoner, Timothy M.; Morton, Daniel J.; Seale, Thomas W.; Springer, Jennifer M.; Hempel, Randy J.; Stull, Terrence L.

    2012-01-01

    Awareness of the high degree of redundancy that occurs in several nutrient uptake pathways of H. influenzae led us to attempt to develop a quantitative STM method that could identify both null mutants and mutants with decreased fitness that remain viable in vivo. To accomplish this task we designed a modified STM approach that utilized a set of signature tagged wild-type (STWT) strains (in a single genetic background) as carriers for mutations in genes of interest located elsewhere in the genome. Each STWT strain differed from the others by insertion of a unique, Q-PCR-detectable, seven base pair tag into the same redundant gene locus. Initially ten STWTs were created and characterized in vitro and in vivo. As anticipated, the STWT strains were not significantly different in their in vitro growth. However, in the chinchilla model of otitis media, certain STWTs outgrew others by several orders of magnitude in mixed infections. Removal of the predominant STWT resulted in its replacement by a different predominant STWT on retesting. Unexpectedly we observed that the STWT exhibiting the greatest proliferation was animal dependent. These findings identify an inherent inability of the signature tag methodologies to accurately elucidate fitness in this animal model of infection and underscore the subtleties of H. influenzae gene regulation. PMID:23085534

  6. "Galaxy," Defined

    CERN Document Server

    Willman, Beth

    2012-01-01

    A growing number of low luminosity and low surface brightness astronomical objects challenge traditional notions of both galaxies and star clusters. To address this challenge, we propose a definition of galaxy that does not depend on a cold dark matter model of the universe: A galaxy is a gravitationally bound collection of stars whose properties cannot be explained by a combination of baryons and Newton's laws of gravity. We use this definition to critically examine the classification of ultra-faint dwarfs, globular clusters, ultra-compact dwarfs, and tidal dwarfs. While kinematic studies provide an effective diagnostic of the definition in many regimes, they can be less useful for compact or very faint systems. To explore the utility of using the [Fe/H] spread as a diagnostic, we use published spectroscopic [Fe/H] measurements of 16 Milky Way dwarfs and 24 globular clusters to uniformly calculate their [Fe/H] spreads and associated uncertainties. Our principal results are: (i) no known, old star cluster wit...

  7. Realistic MHD Modelling of Cataclysmic Variable Spin-Down

    Science.gov (United States)

    Lascelles, Alex; Garraffo, Cecilia; Drake, Jeremy J.; Cohen, Ofer

    2017-01-01

    The orbital evolution of cataclysmic variables with periods above the "period gap" (>3 hrs) is governed by angular momentum loss via the magnetized wind of the unevolved secondary star. The usual prescription to study such systems takes into account only the magnetic field of the secondary and assumes its field is dipolar. It has been shown that introduction of the white dwarf and its magnetic field can significantly impact the wind’s structure, leading to a change in angular momentum loss rate and evolutionary timescale by an order of magnitude. Furthermore, the complexity of the magnetic field can drastically alter stellar spin-down rates. We explore the effects of orbital separation and magnetic field configuration on mass and angular momentum loss rates through 3-D magnetohydrodynamic simulations. We present the results of a study of cataclysmic variable orbital evolution including these new ingredients.

  8. A Novel Information-Theoretic Approach for Variable Clustering and Predictive Modeling Using Dirichlet Process Mixtures

    Science.gov (United States)

    Chen, Yun; Yang, Hui

    2016-12-01

    In the era of big data, there are increasing interests on clustering variables for the minimization of data redundancy and the maximization of variable relevancy. Existing clustering methods, however, depend on nontrivial assumptions about the data structure. Note that nonlinear interdependence among variables poses significant challenges on the traditional framework of predictive modeling. In the present work, we reformulate the problem of variable clustering from an information theoretic perspective that does not require the assumption of data structure for the identification of nonlinear interdependence among variables. Specifically, we propose the use of mutual information to characterize and measure nonlinear correlation structures among variables. Further, we develop Dirichlet process (DP) models to cluster variables based on the mutual-information measures among variables. Finally, orthonormalized variables in each cluster are integrated with group elastic-net model to improve the performance of predictive modeling. Both simulation and real-world case studies showed that the proposed methodology not only effectively reveals the nonlinear interdependence structures among variables but also outperforms traditional variable clustering algorithms such as hierarchical clustering.
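
    The first step of the approach, measuring nonlinear interdependence with mutual information, can be sketched with an off-the-shelf kNN-based estimator; the Dirichlet process clustering and group elastic-net stages are omitted here. This is an illustrative sketch, not the authors' implementation.

        import numpy as np
        from sklearn.feature_selection import mutual_info_regression

        def mutual_info_matrix(X: np.ndarray) -> np.ndarray:
            """Pairwise mutual-information estimates among the columns of X."""
            p = X.shape[1]
            mi = np.zeros((p, p))
            for j in range(p):
                # MI of every column with column j (kNN-based estimator).
                mi[:, j] = mutual_info_regression(X, X[:, j], random_state=0)
            return 0.5 * (mi + mi.T)    # symmetrise the two estimates

        rng = np.random.default_rng(0)
        x = rng.normal(size=(500, 1))
        X = np.hstack([x, np.sin(3 * x) + 0.1 * rng.normal(size=(500, 1)),
                       rng.normal(size=(500, 1))])
        print(mutual_info_matrix(X).round(2))   # the nonlinear pair shows high MI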

  10. Defining the effect of sweep tillage tool cutting edge geometry on tillage forces using 3D discrete element modelling

    Directory of Open Access Journals (Sweden)

    Mustafa Ucgul

    2015-09-01

    Full Text Available The energy required for tillage processes accounts for a significant proportion of total energy used in crop production. In many tillage processes decreasing the draft and upward vertical forces is often desired for reduced fuel use and improved penetration, respectively. Recent studies have proved that discrete element modelling (DEM) can effectively be used to model the soil–tool interaction. In his study, Fielke (1994) [1] examined the effect of various tool cutting edge geometries, namely cutting edge height, length of underside rub and angle of underside clearance, on draft and vertical forces. In this paper the experimental parameters of Fielke (1994) [1] were simulated using 3D discrete element modelling techniques. In the simulations a hysteretic spring contact model integrated with a linear cohesion model was employed, which considers the plastic deformation behaviour of the soil and hence provides better vertical force prediction. DEM parameters were determined by comparing the experimental and simulation results of angle of repose and penetration tests. The results of the study showed that the simulation results for the various tool cutting edge geometries agreed well with the experimental results of Fielke (1994) [1]. The modelling was then used to simulate a further range of cutting edge geometries to better define the effect of sweep tool cutting edge geometry parameters on tillage forces. The extra simulations showed that by using a sharper cutting edge with zero vertical cutting edge height the draft and upward vertical force were further reduced, indicating there is benefit from having a really sharp cutting edge. The extra simulations also confirmed that the interpolated trends for angle of underside clearance as suggested by Fielke (1994) [1] were correct, with a linear reduction in draft and upward vertical force for angles of underside clearance between −25 and −5°, and between −5 and 0°.

  11. Variable Selection for Generalized Varying Coefficient Partially Linear Models with Diverging Number of Parameters

    Institute of Scientific and Technical Information of China (English)

    Zheng-yan Lin; Yu-ze Yuan

    2012-01-01

    Semiparametric models with diverging number of predictors arise in many contemporary scientific areas. Variable selection for these models consists of two components: model selection for the nonparametric components and selection of significant variables for the parametric portion. In this paper, we consider a variable selection procedure combining basis function approximation with the SCAD penalty. The proposed procedure simultaneously selects significant variables in the parametric components and the nonparametric components. With appropriate selection of tuning parameters, we establish the consistency and sparseness of this procedure.
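
    For reference, the SCAD penalty used by such procedures has a simple closed form (Fan and Li, 2001). A minimal sketch, with the conventional shape parameter a = 3.7:

        import numpy as np

        def scad_penalty(theta: np.ndarray, lam: float, a: float = 3.7) -> np.ndarray:
            """SCAD penalty evaluated elementwise on |theta|."""
            t = np.abs(theta)
            p1 = lam * t                                                 # |t| <= lam: L1-like
            p2 = (2 * a * lam * t - t ** 2 - lam ** 2) / (2 * (a - 1))   # lam < |t| <= a*lam
            p3 = (a + 1) * lam ** 2 / 2                                  # |t| > a*lam: constant
            return np.where(t <= lam, p1, np.where(t <= a * lam, p2, p3))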

  12. A hydrochemical modelling framework for combined assessment of spatial and temporal variability in stream chemistry: application to Plynlimon, Wales

    Directory of Open Access Journals (Sweden)

    H.J. Foster

    2001-01-01

    Full Text Available Recent concern about the risk to biota from acidification in upland areas, due to air pollution and land-use change (such as the planting of coniferous forests), has generated a need to model catchment hydro-chemistry to assess environmental risk and define protection strategies. Previous approaches have tended to concentrate on quantifying either spatial variability at a regional scale or temporal variability at a given location. However, to protect biota from ‘acid episodes’, an assessment of both temporal and spatial variability of stream chemistry is required at a catchment scale. In addition, quantification of temporal variability needs to represent both episodic event response and long term variability caused by deposition and/or land-use change. Both spatial and temporal variability in streamwater chemistry are considered in a new modelling methodology based on application to the Plynlimon catchments, central Wales. A two-component End-Member Mixing Analysis (EMMA) is used whereby low and high flow chemistry are taken to represent ‘groundwater’ and ‘soil water’ end-members. The conventional EMMA method is extended to incorporate spatial variability in the two end-members across the catchments by quantifying the Acid Neutralisation Capacity (ANC) of each in terms of a statistical distribution. These are then input as stochastic variables to a two-component mixing model, thereby accounting for variability of ANC both spatially and temporally. The model is coupled to a long-term acidification model (MAGIC) to predict the evolution of the end members and, hence, the response to future scenarios. The results can be plotted as a function of time and space, which enables better assessment of the likely effects of pollution deposition or land-use changes in the future on the stream chemistry than current methods which use catchment average values. The model is also a useful basis for further research into linkage between hydrochemistry…
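
    The core of the stochastic two-component mixing step is easily sketched. Assuming, purely for illustration, normal ANC distributions for the two end-members (the means and standard deviations below are not the paper's values), a Monte Carlo draw yields the distribution of stream ANC for any mixing fraction:

        import numpy as np

        rng = np.random.default_rng(42)
        n = 10_000

        # Illustrative end-member ANC distributions (ueq/L); parameters assumed.
        anc_gw = rng.normal(60.0, 15.0, n)      # 'groundwater' (low-flow) end-member
        anc_soil = rng.normal(-30.0, 20.0, n)   # 'soil water' (high-flow) end-member

        def stream_anc(f_gw: float) -> np.ndarray:
            """Two-component mix for a groundwater fraction f_gw in [0, 1]."""
            return f_gw * anc_gw + (1.0 - f_gw) * anc_soil

        for f in (0.2, 0.5, 0.8):
            s = stream_anc(f)
            print(f"f_gw={f:.1f}: mean ANC={s.mean():6.1f}, P(ANC<0)={np.mean(s < 0):.2f}")

    The probability of negative ANC then serves as a simple index of acid-episode risk at a given flow condition.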

  13. Modeling urban expansion by using variable weights logistic cellular automata

    NARCIS (Netherlands)

    Shu, Bangrong; Bakker, Martha M.; Zhang, Honghui; Li, Yongle; Qin, Wei; Carsjens, Gerrit J.

    2017-01-01

    Simulation models based on cellular automata (CA) are widely used for understanding and simulating complex urban expansion process. Among these models, logistic CA (LCA) is commonly adopted. However, the performance of LCA models is often limited because the fixed coefficients obtained from binary

  14. Assessing the accuracy and stability of variable selection methods for random forest modeling in ecology.

    Science.gov (United States)

    Fox, Eric W; Hill, Ryan A; Leibowitz, Scott G; Olsen, Anthony R; Thornbrugh, Darren J; Weber, Marc H

    2017-07-01

    Random forest (RF) modeling has emerged as an important statistical learning method in ecology due to its exceptional predictive performance. However, for large and complex ecological data sets, there is limited guidance on variable selection methods for RF modeling. Typically, either a preselected set of predictor variables are used or stepwise procedures are employed which iteratively remove variables according to their importance measures. This paper investigates the application of variable selection methods to RF models for predicting probable biological stream condition. Our motivating data set consists of the good/poor condition of n = 1365 stream survey sites from the 2008/2009 National Rivers and Stream Assessment, and a large set (p = 212) of landscape features from the StreamCat data set as potential predictors. We compare two types of RF models: a full variable set model with all 212 predictors and a reduced variable set model selected using a backward elimination approach. We assess model accuracy using RF's internal out-of-bag estimate, and a cross-validation procedure with validation folds external to the variable selection process. We also assess the stability of the spatial predictions generated by the RF models to changes in the number of predictors and argue that model selection needs to consider both accuracy and stability. The results suggest that RF modeling is robust to the inclusion of many variables of moderate to low importance. We found no substantial improvement in cross-validated accuracy as a result of variable reduction. Moreover, the backward elimination procedure tended to select too few variables and exhibited numerous issues such as upwardly biased out-of-bag accuracy estimates and instabilities in the spatial predictions. We use simulations to further support and generalize results from the analysis of real data. A main purpose of this work is to elucidate issues of model selection bias and instability to ecologists interested in
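
    A backward elimination loop of the kind evaluated above is straightforward to sketch with scikit-learn. This is an illustrative sketch, not the authors' code; note the abstract's caution that out-of-bag accuracy tracked inside such a loop can be upwardly biased.

        import numpy as np
        from sklearn.ensemble import RandomForestClassifier

        def backward_elimination(X, y, names, min_vars=5, drop_frac=0.2):
            """Iteratively drop the least-important predictors, tracking OOB accuracy."""
            keep = list(range(X.shape[1]))
            history = []
            while len(keep) >= min_vars:
                rf = RandomForestClassifier(n_estimators=500, oob_score=True,
                                            random_state=0, n_jobs=-1)
                rf.fit(X[:, keep], y)
                history.append(([names[i] for i in keep], rf.oob_score_))
                order = np.argsort(rf.feature_importances_)   # least important first
                n_drop = max(1, int(drop_frac * len(keep)))
                keep = [keep[i] for i in sorted(order[n_drop:])]
            return history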

  15. SELECTION OF VARIABLES FOR THE CROATIAN MUNICIPAL SOLID WASTE GENERATION MODEL

    Directory of Open Access Journals (Sweden)

    Anamarija Grbeš

    2017-01-01

    Full Text Available MSW generation models are important elements of waste management planning. This paper gives the findings of the second part of the research on the Croatian MSW generation mechanism. The correlations of 17 variables are shown and the relationships between the variables are discussed. In conclusion, independent variables to be hypothesised and tested in a model in the next part of the research are proposed.

  16. Airship Model Tests in the Variable Density Wind Tunnel

    Science.gov (United States)

    Abbott, Ira H

    1932-01-01

    This report presents the results of wind tunnel tests conducted to determine the aerodynamic characteristics of airship models. Eight Goodyear-Zeppelin airship models were tested in the original closed-throat tunnel. After the tunnel was rebuilt with an open throat a new model was tested, and one of the Goodyear-Zeppelin models was retested. The results indicate that much may be done to determine the drag of airships from evaluations of the pressure and skin-frictional drags on models tested at large Reynolds number.

  17. Assessing the accuracy and stability of variable selection methods for random forest modeling in ecology

    Science.gov (United States)

    Random forest (RF) modeling has emerged as an important statistical learning method in ecology due to its exceptional predictive performance. However, for large and complex ecological datasets there is limited guidance on variable selection methods for RF modeling. Typically, e...

  18. Latent variable models an introduction to factor, path, and structural equation analysis

    CERN Document Server

    Loehlin, John C

    2004-01-01

    This fourth edition introduces multiple-latent variable models by utilizing path diagrams to explain the underlying relationships in the models. The book is intended for advanced students and researchers in the areas of social, educational, clinical, ind

  19. A Composite Likelihood Inference in Latent Variable Models for Ordinal Longitudinal Responses

    Science.gov (United States)

    Vasdekis, Vassilis G. S.; Cagnone, Silvia; Moustaki, Irini

    2012-01-01

    The paper proposes a composite likelihood estimation approach that uses bivariate instead of multivariate marginal probabilities for ordinal longitudinal responses using a latent variable model. The model considers time-dependent latent variables and item-specific random effects to be accountable for the interdependencies of the multivariate…

  1. Functionally relevant climate variables for arid lands: A climatic water deficit approach for modelling desert shrub distributions

    Science.gov (United States)

    Thomas E. Dilts; Peter J. Weisberg; Camie M. Dencker; Jeanne C. Chambers

    2015-01-01

    We have three goals. (1) To develop a suite of functionally relevant climate variables for modelling vegetation distribution on arid and semi-arid landscapes of the Great Basin, USA. (2) To compare the predictive power of vegetation distribution models based on mechanistically proximate factors (water deficit variables) and factors that are more mechanistically removed...

  2. Simulation modeling and experimental analysis of thermodynamic charge performance in a variable-mass thermodynamic system

    Institute of Scientific and Technical Information of China (English)

    胡继敏; 金家善; 严志腾

    2013-01-01

    The thermodynamic charge performance of a variable-mass thermodynamic system was investigated by simulation modeling and experimental analysis. Three sets of experiments were conducted for various charge times and charge steam flows under three different control strategies of the charge valve. Characteristic performance parameters, based on the average sub-cooled degree and the charging energy coefficient, were defined to evaluate and predict the charge performance of the system, combining the simulation model and experimental data. The results show that the average steam flow qualitatively reflects the average sub-cooled degree, while charging energy coefficients of 74.6%, 69.9% and 100% correspond to end values of the average sub-cooled degree of 2.1, 2.9 and 0, respectively, for the three sets of experiments. The mean and maximum deviations of the predicted results from the experimental data are smaller than 6.8% and 10.8%, respectively. In conclusion, decreasing the average steam flow can effectively increase the charging energy coefficient for the same charge time and therefore improve the thermodynamic charge performance of the system, while increasing the charging energy coefficient by extending the charge time requires consideration of the operating frequency for steam users.

  3. Design and Modeling of a Variable Heat Rejection Radiator

    Science.gov (United States)

    Miller, Jennifer R.; Birur, Gajanana C.; Ganapathi, Gani B.; Sunada, Eric T.; Berisford, Daniel F.; Stephan, Ryan

    2011-01-01

    Variable heat rejection radiator technology is needed for future NASA human-rated and robotic missions. The primary objective is to enable a single-loop architecture for human-rated missions: (1) radiators are typically sized for the maximum heat load in the warmest continuous environment, resulting in a large panel area; (2) a large radiator area leaves the fluid susceptible to freezing at low load in a cold environment and typically results in a two-loop system; (3) a dual-loop architecture is approximately 18% heavier than a single-loop architecture (based on Orion thermal control system mass); and (4) a single-loop architecture requires adaptability to varying environments and heat loads.

  4. Composite Pressure Vessel Variability in Geometry and Filament Winding Model

    Science.gov (United States)

    Green, Steven J.; Greene, Nathanael J.

    2012-01-01

    Composite pressure vessels (CPVs) are used in a variety of applications ranging from carbon dioxide canisters for paintball guns to life support and pressurant storage on the International Space Station. With widespread use, it is important to be able to evaluate the effect of variability on structural performance. Data analysis was completed on CPVs to determine the amount of variation that occurs among the same type of CPV, and a filament winding routine was developed to facilitate study of the effect of manufacturing variation on structural response.

  5. Causal relationship model between variables using linear regression to improve professional commitment of lecturer

    Science.gov (United States)

    Setyaningsih, S.

    2017-01-01

    The main element in building a leading university is lecturer commitment in a professional manner. Commitment is measured through willpower, loyalty, pride, and integrity as a professional lecturer. A total of 135 of 337 university lecturers were sampled to collect data. Data were analyzed using validity and reliability tests and multiple linear regression. Many studies have found links to the commitment of lecturers, but the underlying causal relationships are generally neglected. These results indicate that the professional commitment of lecturers is affected by the variables empowerment, academic culture, and trust. The relationship model between variables is composed of three substructures. The first substructure consists of the endogenous variable professional commitment and three exogenous variables, namely academic culture, empowerment and trust, as well as the residual variable ɛy. The second substructure consists of one endogenous variable, trust, and two exogenous variables, empowerment and academic culture, with the residual variable ɛ3. The third substructure consists of one endogenous variable, academic culture, and one exogenous variable, empowerment, as well as the residual variable ɛ2. Multiple linear regression was used in the path model for each substructure. The results showed that the hypotheses are supported, and these findings provide empirical evidence that increasing these variables will increase the professional commitment of the lecturers.
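
    The three substructures reduce to three ordinary least-squares fits. A minimal sketch with synthetic standardised scores (the data-generating coefficients below are illustrative, not the study's estimates):

        import numpy as np
        from sklearn.linear_model import LinearRegression

        def substructure(y, X):
            """Fit one substructure of the path model; return its coefficients."""
            return LinearRegression().fit(X, y).coef_

        # Synthetic standardised scores for 135 lecturers (illustrative only).
        rng = np.random.default_rng(1)
        empower = rng.normal(size=135)
        culture = 0.5 * empower + rng.normal(scale=0.8, size=135)
        trust = 0.4 * empower + 0.3 * culture + rng.normal(scale=0.7, size=135)
        commit = (0.3 * empower + 0.2 * culture + 0.4 * trust
                  + rng.normal(scale=0.6, size=135))

        print(substructure(culture, empower.reshape(-1, 1)))                     # substructure 3
        print(substructure(trust, np.column_stack([empower, culture])))          # substructure 2
        print(substructure(commit, np.column_stack([empower, culture, trust])))  # substructure 1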

  6. A Variable Flow Modelling Approach To Military End Strength Planning

    Science.gov (United States)

    2016-12-01

    A System Dynamics (SD) model is ideal for strategic analysis as it encompasses all the behaviours of a system and how the behaviours are influenced… Wang describes Markov chain theory as a mathematical tool used to investigate dynamic behaviours of a system in discrete time… (Thesis by Benjamin K. Grossi, December 2016; thesis advisor Kenneth Doerr.)

  7. From spatially variable streamflow to distributed hydrological models: Analysis of key modeling decisions

    Science.gov (United States)

    Fenicia, Fabrizio; Kavetski, Dmitri; Savenije, Hubert H. G.; Pfister, Laurent

    2016-02-01

    This paper explores the development and application of distributed hydrological models, focusing on the key decisions of how to discretize the landscape, which model structures to use in each landscape element, and how to link model parameters across multiple landscape elements. The case study considers the Attert catchment in Luxembourg—a 300 km2 mesoscale catchment with 10 nested subcatchments that exhibit clearly different streamflow dynamics. The research questions are investigated using conceptual models applied at hydrologic response unit (HRU) scales (1-4 HRUs) on 6 hourly time steps. Multiple model structures are hypothesized and implemented using the SUPERFLEX framework. Following calibration, space/time model transferability is tested using a split-sample approach, with evaluation criteria including streamflow prediction error metrics and hydrological signatures. Our results suggest that: (1) models using geology-based HRUs are more robust and capture the spatial variability of streamflow time series and signatures better than models using topography-based HRUs; this finding supports the hypothesis that, in the Attert, geology exerts a stronger control than topography on streamflow generation, (2) streamflow dynamics of different HRUs can be represented using distinct and remarkably simple model structures, which can be interpreted in terms of the perceived dominant hydrologic processes in each geology type, and (3) the same maximum root zone storage can be used across the three dominant geological units with no loss in model transferability; this finding suggests that the partitioning of water between streamflow and evaporation in the study area is largely independent of geology and can be used to improve model parsimony. The modeling methodology introduced in this study is general and can be used to advance our broader understanding and prediction of hydrological behavior, including the landscape characteristics that control hydrologic response, the

  8. Definably amenable NIP groups

    OpenAIRE

    Chernikov, Artem; Simon, Pierre

    2015-01-01

    We study definably amenable NIP groups. We develop a theory of generics, showing that various definitions considered previously coincide, and study invariant measures. Applications include: characterization of regular ergodic measures, a proof of the conjecture of Petrykowski connecting existence of bounded orbits with definable amenability in the NIP case, and the Ellis group conjecture of Newelski and Pillay connecting the model-theoretic connected component of an NIP group with the ideal s...

  9. Model update and variability assessment for automotive crash simulations

    NARCIS (Netherlands)

    Sun, J.; He, J.; Vlahopoulos, N.; Ast, P. van

    2007-01-01

    In order to develop confidence in numerical models which are used for automotive crash simulations, results are often compared with test data, and in some cases the numerical models are adjusted in order to improve the correlation. Comparisons between the time history of acceleration responses from

  10. Mediating Variables in a Transtheoretical Model Dietary Intervention Program

    Science.gov (United States)

    Di Noia, Jennifer; Prochaska, James O.

    2010-01-01

    This study identified mediators of a Transtheoretical Model (TTM) intervention to increase fruit and vegetable consumption among economically disadvantaged African American adolescents (N = 549). Single-and multiple-mediator models were used to determine whether pros, cons, self-efficacy, and stages of change satisfied four conclusions necessary…

  11. Modelling the diurnal variability of SST and its vertical extent

    DEFF Research Database (Denmark)

    Karagali, Ioanna; Høyer, Jacob L.; Donlon, Craig J.

    2014-01-01

    One way to bridge the gap between in situ and remotely obtained measurements is through modelling of the upper ocean temperature. Models that have been used for this purpose vary from empirical parametrisations, mostly based on the wind speed and solar insolation, to ocean models that solve the 1-dimensional equations for the transport of heat, momentum and salt. GOTM is a model resolving the basic hydrodynamic and thermodynamic processes related to vertical mixing in the water column, and it includes most of the basic methods for calculating the turbulent fluxes. Surface heat and momentum can be either calculated or externally prescribed, allowing validation of the modelled output with observations. To improve the surface heat budget calculation and the distribution of heat in the water column, the GOTM code was modified to include an additional method for the estimation of the total outgoing long-wave radiation and a 9-band parametrisation for the light extinction.

  12. Variability of Protein Structure Models from Electron Microscopy.

    Science.gov (United States)

    Monroe, Lyman; Terashi, Genki; Kihara, Daisuke

    2017-03-02

    An increasing number of biomolecular structures are solved by electron microscopy (EM). However, the quality of structure models determined from EM maps varies substantially. To understand to what extent structure models are supported by information embedded in EM maps, we used two computational structure refinement methods to examine how much structures can be refined using a dataset of 49 maps with accompanying structure models. The extent of structure modification as well as the disagreement between refinement models produced by the two computational methods scaled inversely with the global and the local map resolutions. A general quantitative estimation of the deviations of structures for particular map resolutions is provided. Our results indicate that the observed discrepancy between the deposited map and the refined models is due to the lack of structural information present in EM maps, and thus these annotations must be used with caution for further applications.

  13. A step-indexed Kripke model of hidden state via recursive properties on recursively defined metric spaces

    DEFF Research Database (Denmark)

    Birkedal, Lars; Schwinghammer, Jan; Støvring, Kristian

    2010-01-01

    …for Charguéraud and Pottier’s type and capability system including frame and anti-frame rules, based on the operational semantics and step-indexed heap relations. The worlds are constructed as a recursively defined predicate on a recursively defined metric space, which provides a considerably simpler…

  14. Effects of Parceling on Model Selection: Parcel-Allocation Variability in Model Ranking.

    Science.gov (United States)

    Sterba, Sonya K; Rights, Jason D

    2016-01-25

    Research interest often lies in comparing structural model specifications implying different relationships among latent factors. In this context parceling is commonly accepted, assuming the item-level measurement structure is well known and, conservatively, assuming items are unidimensional in the population. Under these assumptions, researchers compare competing structural models, each specified using the same parcel-level measurement model. However, little is known about consequences of parceling for model selection in this context, including whether and when model ranking could vary across alternative item-to-parcel allocations within-sample. This article first provides a theoretical framework that predicts the occurrence of parcel-allocation variability (PAV) in model selection index values and its consequences for PAV in ranking of competing structural models. These predictions are then investigated via simulation. We show that conditions known to manifest PAV in absolute fit of a single model may or may not manifest PAV in model ranking. Thus, one cannot assume that low PAV in absolute fit implies a lack of PAV in ranking, and vice versa. PAV in ranking is shown to occur under a variety of conditions, including large samples. To provide an empirically supported strategy for selecting a model when PAV in ranking exists, we draw on relationships between structural model rankings in parcel- versus item-solutions. This strategy employs the across-allocation modal ranking. We developed software tools for implementing this strategy in practice, and illustrate them with an example. Even if a researcher has substantive reason to prefer one particular allocation, investigating PAV in ranking within-sample still provides an informative sensitivity analysis.
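
    The across-allocation modal-ranking strategy can be sketched generically. In the sketch below, rank_models is a hypothetical user-supplied function that fits the competing structural models to parcel scores and returns their ranking; only the allocation and aggregation logic is illustrated.

        import numpy as np
        from collections import Counter

        def random_allocation(n_items, n_parcels, rng):
            """Randomly assign item indices to parcels of roughly equal size."""
            return np.array_split(rng.permutation(n_items), n_parcels)

        def parcel_scores(items, allocation):
            """Each parcel is the mean of its items (items: subjects x n_items)."""
            return np.column_stack([items[:, a].mean(axis=1) for a in allocation])

        def modal_ranking(items, n_parcels, rank_models, n_alloc=100, seed=0):
            """Across-allocation modal ranking of competing structural models."""
            rng = np.random.default_rng(seed)
            ranks = []
            for _ in range(n_alloc):
                alloc = random_allocation(items.shape[1], n_parcels, rng)
                ranks.append(tuple(rank_models(parcel_scores(items, alloc))))
            return Counter(ranks).most_common(1)[0]   # (modal ranking, count)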

  15. Simulation of heart rate variability model in a network

    Science.gov (United States)

    Cascaval, Radu C.; D'Apice, Ciro; D'Arienzo, Maria Pia

    2017-07-01

    We consider a 1-D model for the simulation of the blood flow in the cardiovascular system. As inflow condition we consider a model for the aortic valve. The opening and closing of the valve is dynamically determined by the pressure difference between the left ventricular and aortic pressures. At the outflow we impose a peripheral resistance model. To approximate the solution we use a numerical scheme based on the discontinuous Galerkin method. We also consider a variation in heart rate and terminal reflection coefficient due to monitoring of the pressure in the network.

  16. Robustness and perturbation in the modeled cascade heart rate variability

    Science.gov (United States)

    Lin, D. C.

    2003-03-01

    In this study, numerical experiments are conducted to examine the robustness of using cascade to describe the multifractal heart rate variability (HRV) by perturbing the hierarchical time scale structure and the multiplicative rule of the cascade. It is shown that a rigid structure of the multiple time scales is not essential for the multifractal scaling in healthy HRV. So long as there exists a tree structure for the multiplication to take place, a multifractal HRV and related properties can be captured by using the cascade. But the perturbation of the multiplicative rule can lead to a qualitative change. In particular, a multifractal to monofractal HRV transition can result after the product law is perturbed to an additive one at the fast time scale. We suggest that this explains the similar HRV scaling transition in the parasympathetic nervous system blockade.
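
    The perturbation experiment described above, replacing the multiplicative rule with an additive one at the fastest scale, can be sketched with a conservative binary cascade. The weight range and noise scale below are illustrative assumptions, not the paper's settings.

        import numpy as np

        def cascade(levels, rng, additive_last=False):
            """Conservative binary cascade: each cell splits into children weighted
            (w, 2 - w). If additive_last, the finest level perturbs additively."""
            x = np.array([1.0])
            for lev in range(levels):
                w = rng.uniform(0.6, 1.4, size=x.size)
                left, right = x * w, x * (2.0 - w)
                if additive_last and lev == levels - 1:
                    eps = rng.normal(scale=0.1, size=x.size)
                    left, right = x + eps, x - eps   # additive rule at the fast scale
                x = np.column_stack([left, right]).ravel()
            return x

        rng = np.random.default_rng(7)
        multifractal = cascade(10, rng)                       # multiplicative throughout
        monofractal_like = cascade(10, rng, additive_last=True)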

  17. Dynamic modeling of fixed-bed adsorption of flue gas using a variable mass transfer model

    Energy Technology Data Exchange (ETDEWEB)

    Park, Jehun; Lee, Jae W. [Korea Advanced Institute of Science and Technology, Daejeon (Korea, Republic of)

    2016-02-15

    This study introduces a dynamic mass transfer model for the fixed-bed adsorption of a flue gas. The derivation of the variable mass transfer coefficient is based on pore diffusion theory and it is a function of effective porosity, temperature, and pressure as well as the adsorbate composition. Adsorption experiments were done at four different pressures (1.8, 5, 10 and 20 bar) and three different temperatures (30, 50 and 70 °C) with zeolite 13X as the adsorbent. To explain the equilibrium adsorption capacity, the Langmuir-Freundlich isotherm model was adopted, and the parameters of the isotherm equation were fitted to the experimental data for a wide range of pressures and temperatures. Then, dynamic simulations were performed using the system equations for material and energy balance with the equilibrium adsorption isotherm data. The optimal mass transfer and heat transfer coefficients were determined after iterative calculations. As a result, the dynamic variable mass transfer model can estimate the adsorption rate for a wide range of concentrations and precisely simulate the fixed-bed adsorption process of a flue gas mixture of carbon dioxide and nitrogen.
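
    The isotherm-fitting step can be sketched with a standard least-squares fit. The Langmuir-Freundlich (Sips) form q = qm (Kp)^n / (1 + (Kp)^n) is the textbook one; the data points below are illustrative placeholders, not the paper's measurements.

        import numpy as np
        from scipy.optimize import curve_fit

        def langmuir_freundlich(p, qm, K, n):
            """Sips/Langmuir-Freundlich isotherm: q = qm*(K*p)**n / (1 + (K*p)**n)."""
            kp_n = (K * p) ** n
            return qm * kp_n / (1.0 + kp_n)

        # Illustrative CO2-on-13X equilibrium data (bar, mol/kg); values assumed.
        p = np.array([0.5, 1.8, 5.0, 10.0, 20.0])
        q = np.array([2.1, 3.4, 4.6, 5.2, 5.6])

        (qm, K, n), _ = curve_fit(langmuir_freundlich, p, q, p0=[6.0, 1.0, 0.7])
        print(f"qm={qm:.2f} mol/kg, K={K:.2f} 1/bar, n={n:.2f}")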

  18. Importance of predictor variables for models of chemical function

    Data.gov (United States)

    U.S. Environmental Protection Agency — Importance of random forest predictors for all classification models of chemical function. This dataset is associated with the following publication: Isaacs, K., M. …

  19. Observational Constraints on a Variable Dark Energy Model

    CERN Document Server

    Movahed, M S; Movahed, Mohammad Sadegh; Rahvar, Sohrab

    2006-01-01

    We present cosmological tests for a phenomenological parametrization of a quintessence model with time-varying equation of state on low, intermediate and high redshift observations. We study the sensitivity of the comoving distance and volume element with the Alcock-Paczynski test to the time varying model of dark energy. At the intermediate redshifts, Gold supernova Type Ia data is used to fit the quintessence model to the observed distance modulus. The value of the acoustic angular scale observed by the WMAP experiment is also compared with the model. The combined result of CMB and SNIa data confines w = p/ρ to be more than -1.3, which can violate the dominant energy condition.

  20. Variable Density Effects in Stochastic Lagrangian Models for Turbulent Combustion

    Science.gov (United States)

    2016-07-20

    PDF methods have proven useful in modelling turbulent combustion, primarily because convection and complex reactions can be treated without the need… a modelled transport equation for the joint PDF of velocity, turbulent frequency and composition (species mass fractions and enthalpy)… The advantages of PDF methods in dealing with chemical reaction and convection are preserved irrespective of density variation. Since the density variation in a typical…

  1. The Functional Segregation and Integration Model: Mixture Model Representations of Consistent and Variable Group-Level Connectivity in fMRI.

    Science.gov (United States)

    Churchill, Nathan W; Madsen, Kristoffer; Mørup, Morten

    2016-10-01

    The brain consists of specialized cortical regions that exchange information between each other, reflecting a combination of segregated (local) and integrated (distributed) processes that define brain function. Functional magnetic resonance imaging (fMRI) is widely used to characterize these functional relationships, although it is an ongoing challenge to develop robust, interpretable models for high-dimensional fMRI data. Gaussian mixture models (GMMs) are a powerful tool for parcellating the brain, based on the similarity of voxel time series. However, conventional GMMs have limited parametric flexibility: they only estimate segregated structure and do not model interregional functional connectivity, nor do they account for network variability across voxels or between subjects. To address these issues, this letter develops the functional segregation and integration model (FSIM). This extension of the GMM framework simultaneously estimates spatial clustering and the most consistent group functional connectivity structure. It also explicitly models network variability, based on voxel- and subject-specific network scaling profiles. We compared the FSIM to standard GMM in a predictive cross-validation framework and examined the importance of different model parameters, using both simulated and experimental resting-state data. The reliability of parcellations is not significantly altered by flexibility of the FSIM, whereas voxel- and subject-specific network scaling profiles significantly improve the ability to predict functional connectivity in independent test data. Moreover, the FSIM provides a set of interpretable parameters to characterize both consistent and variable aspects of functional connectivity structure. As an example of its utility, we use subject-specific network profiles to identify brain regions where network expression predicts subject age in the experimental data. Thus, the FSIM is effective at summarizing functional connectivity structure in group…
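
    The conventional GMM baseline against which the FSIM is compared is a one-liner with standard tooling. A minimal sketch on synthetic data (the voxel count, time points and component number are illustrative); the FSIM itself, with its connectivity and scaling-profile terms, is not implemented here.

        import numpy as np
        from sklearn.mixture import GaussianMixture

        # Synthetic "fMRI" data: 5000 voxels x 200 time points (illustrative only).
        rng = np.random.default_rng(0)
        voxels = rng.normal(size=(5000, 200))

        # Conventional GMM parcellation: cluster voxel time series into K parcels.
        gmm = GaussianMixture(n_components=8, covariance_type="diag", random_state=0)
        labels = gmm.fit_predict(voxels)    # parcel assignment per voxel
        print(np.bincount(labels))          # voxels per parcel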

  2. Invariant Gaussian Process Latent Variable Models and Application in Causal Discovery

    CERN Document Server

    Zhang, Kun; Janzing, Dominik

    2012-01-01

    In nonlinear latent variable models or dynamic models, if we consider the latent variables as confounders (common causes), the noise dependencies imply further relations between the observed variables. Such models are then closely related to causal discovery in the presence of nonlinear confounders, which is a challenging problem. However, generally in such models the observation noise is assumed to be independent across data dimensions, and consequently the noise dependencies are ignored. In this paper we focus on the Gaussian process latent variable model (GPLVM), from which we develop an extended model called invariant GPLVM (IGPLVM), which can adapt to arbitrary noise covariances. With the Gaussian process prior put on a particular transformation of the latent nonlinear functions, instead of the original ones, the algorithm for IGPLVM involves almost the same computational loads as that for the original GPLVM. Besides its potential application in causal discovery, IGPLVM has the advantage that its estimat...

  3. Aspects to be considered in case of variable surfaces modelling

    Directory of Open Access Journals (Sweden)

    Mihalache Andrei

    2017-01-01

    Full Text Available Proper triangulation will bring benefits regarding the numerical conditioning of nodes within the point cloud, but will also allow certain nodes to move around their neighbours. In this way, surface features such as chained regions or curves will have a certain degree of freedom, allowing them to slide towards one another. A closer evaluation reveals a major drawback: the ideal-shape evaluation process cannot take into consideration operations that depend upon approximate shapes. In order to control the accuracy of the process, all surfaces need to be described functionally. Any triangulation procedure has to allow control over the topology of the analyzed surface and of any subsequent modifications: dividing the surface along an incorporated curve in order to define a new edge, or sticking two surfaces together to form a mutual boundary edge.

  4. Time variability of α from realistic models of Oklo reactors

    Science.gov (United States)

    Gould, C. R.; Sharapov, E. I.; Lamoreaux, S. K.

    2006-08-01

    We reanalyze Oklo Sm149 data using realistic models of the natural nuclear reactors. Disagreements among recent Oklo determinations of the time evolution of α, the electromagnetic fine structure constant, are shown to be due to different reactor models, which led to different neutron spectra used in the calculations. We use known Oklo reactor epithermal spectral indices as criteria for selecting realistic reactor models. Two Oklo reactors, RZ2 and RZ10, were modeled with MCNP. The resulting neutron spectra were used to calculate the change in the Sm149 effective neutron capture cross section as a function of a possible shift in the energy of the 97.3-meV resonance. We independently deduce ancient Sm149 effective cross sections and use these values to set limits on the time variation of α. Our study resolves a contradictory situation with previous Oklo α results. Our suggested 2σ bound on a possible time variation of α over 2 billion years is stringent: -0.11 ≤ Δα/α ≤ 0.24, in units of 10⁻⁷, but model dependent in that it assumes only α has varied over time.

  5. A Standard Monetary Model and the Variability of the Deutschemark-Dollar Exchange Rate

    OpenAIRE

    Kenneth D. West

    1986-01-01

    This paper uses a novel test to see whether the Herse (1985) and Woo (1985) models are consistent with the variability of the deutschemark-dollar exchange rate 1974-1984. The answer, perhaps surprisingly, is yes. Both models, however, explain the month to month variability as resulting in a critical way from unobservable shocks to money demand and purchasing power parity. It would therefore be of interest in future work to model one or both of these shocks as explicit functions of economic …

  6. Panel data models extended to spatial error autocorrelation or a spatially lagged dependent variable

    NARCIS (Netherlands)

    Elhorst, J. Paul

    2001-01-01

    This paper surveys panel data models extended to spatial error autocorrelation or a spatially lagged dependent variable. In particular, it focuses on the specification and estimation of four panel data models commonly used in applied research: the fixed effects model, the random effects model, the

  7. A Simple Model of the Variability of Soil Depths

    Directory of Open Access Journals (Sweden)

    Fang Yu

    2017-06-01

    Full Text Available Soil depth tends to vary from a few centimeters to several meters, depending on many natural and environmental factors. We hypothesize that the cumulative effect of these factors on soil depth, which is chiefly dependent on the process of biogeochemical weathering, is particularly affected by soil porewater (i.e., solute transport and infiltration from the land surface). Taking into account evidence for a non-Gaussian distribution of rock weathering rates, we propose a simple mathematical model to describe the relationship between soil depth and infiltration flux. The model was tested using several areas in mostly semi-arid climate zones. The application of this model demonstrates the use of fundamental principles of physics to quantify the coupled effects of the five principal soil-forming factors of Dokuchaev.

  8. Modelling and Experimental Study on Active Energy-Regenerative Suspension Structure with Variable Universe Fuzzy PD Control

    Directory of Open Access Journals (Sweden)

    Jiang Liu

    2016-01-01

    Full Text Available A novel electromagnetic active suspension with an energy-regenerative structure is proposed to solve the suspension's control-consumption problem. For this new system, a 2-DOF quarter-car model is built, and its dynamic performance is studied using variable universe fuzzy theory and the PD control approach. A self-powered efficiency concept is defined to describe the regenerative structure's contribution to the overall control consumption, and its influencing factors are also discussed. Simulations are carried out in Matlab/Simulink, and experiments are conducted on a B-class road. The results demonstrate that the variable universe fuzzy control can recycle more than 18 percent of the vibration energy and provide over 11 percent of the power required for control. Furthermore, the new suspension system offers smaller body acceleration and decreased dynamic tire deflection compared to the passive one, improving both ride comfort and safety.
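
    The dynamics referred to above can be illustrated with a generic 2-DOF quarter-car model. The sketch below is a minimal passive baseline in Python, not the paper's regenerative fuzzy-PD system; all parameter values and the sinusoidal road profile are illustrative assumptions.

    ```python
    import numpy as np
    from scipy.integrate import solve_ivp

    # Hypothetical quarter-car parameters (illustrative, not from the paper)
    ms, mu = 300.0, 40.0       # sprung / unsprung mass [kg]
    ks, cs = 16000.0, 1200.0   # suspension stiffness [N/m] and damping [N s/m]
    kt = 160000.0              # tire stiffness [N/m]

    def road(t):
        # sinusoidal road input as a crude stand-in for a B-class road profile
        return 0.01 * np.sin(2.0 * np.pi * 1.5 * t)

    def rhs(t, y):
        zs, vs, zu, vu = y                        # body / wheel displacement and velocity
        f_susp = ks * (zu - zs) + cs * (vu - vs)  # passive suspension force on the body
        return [vs, f_susp / ms,
                vu, (-f_susp + kt * (road(t) - zu)) / mu]

    sol = solve_ivp(rhs, (0.0, 10.0), [0.0, 0.0, 0.0, 0.0], max_step=1e-3)
    body_acc = (ks * (sol.y[2] - sol.y[0]) + cs * (sol.y[3] - sol.y[1])) / ms
    print(f"RMS body acceleration: {np.sqrt(np.mean(body_acc**2)):.3f} m/s^2")
    ```

    The RMS body acceleration printed at the end is the usual ride-comfort proxy against which an active or regenerative controller would be compared.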

  9. SST Diurnal Variability: Regional Extent & Implications in Atmospheric Modelling

    DEFF Research Database (Denmark)

    Karagali, Ioanna; Høyer, Jacob L.

    2013-01-01

    and quantify regional diurnal warming from the experimental MSG/SEVIRI hourly SST fields, for the period 2006-2012. ii) To investigate the impact of the increased SST temporal resolution in the atmospheric model WRF, in terms of modeled 10-m winds and surface heat fluxes. Within this context, 3 main tasks...... regional diurnal warming over the SEVIRI disk, a SEVIRI-derived reference field representative of the well-mixed night-time conditions is required. Different methodologies are tested and the results are validated against SEVIRI pre-dawn SSTs and in situ data from moored and drifting buoys.

  10. Perturbative corrections for approximate inference in gaussian latent variable models

    DEFF Research Database (Denmark)

    Opper, Manfred; Paquet, Ulrich; Winther, Ole

    2013-01-01

    but intractable correction, and can be applied to the model's partition function and other moments of interest. The correction is expressed over the higher-order cumulants which are neglected by EP's local matching of moments. Through the expansion, we see that EP is correct to first order. By considering higher...... illustrate on tree-structured Ising model approximations. Furthermore, they provide a polynomial-time assessment of the approximation error. We also provide both theoretical and practical insights on the exactness of the EP solution. © 2013 Manfred Opper, Ulrich Paquet and Ole Winther....

  11. Conservation, variability and the modeling of active protein kinases.

    Directory of Open Access Journals (Sweden)

    James D R Knight

    Full Text Available The human proteome is rich with protein kinases, and this richness reflects the crucial importance of kinases in initiating and maintaining cell behavior. Elucidating cell signaling networks and manipulating their components to understand and alter behavior require well-designed inhibitors. These inhibitors are needed in culture to cause and study network perturbations, and the same compounds can be used as drugs to treat disease. Understanding the structural biology of protein kinases in detail, including their commonalities, differences and modes of substrate interaction, is necessary for designing high-quality inhibitors that will be of true use for cell biology and disease therapy. To this end, we here report a structural analysis of all available active-conformation protein kinases, discussing residue conservation, the novel features of such conservation, unique properties of atypical kinases and variability in the context of substrate binding. We also demonstrate how this information can be used for structure prediction. Our findings will be of use not only in understanding protein kinase function and evolution; they also highlight the flaws inherent in kinase drug design as commonly practiced and dictate an appropriate strategy for the sophisticated design of specific inhibitors for use in the laboratory and disease therapy.

  12. Integrating mixed-effect models into an architectural plant model to simulate inter- and intra-progeny variability: a case study on oil palm (Elaeis guineensis Jacq.).

    Science.gov (United States)

    Perez, Raphaël P A; Pallas, Benoît; Le Moguédec, Gilles; Rey, Hervé; Griffon, Sébastien; Caliman, Jean-Pierre; Costes, Evelyne; Dauzat, Jean

    2016-08-01

    Three-dimensional (3D) reconstruction of plants is time-consuming and involves considerable levels of data acquisition. This is possibly one reason why the integration of genetic variability into 3D architectural models has so far been largely overlooked. In this study, an allometry-based approach was developed to account for architectural variability in 3D architectural models of oil palm (Elaeis guineensis Jacq.) as a case study. Allometric relationships were used to model architectural traits from individual leaflets to the entire crown while accounting for ontogenetic and morphogenetic gradients. Inter- and intra-progeny variabilities were evaluated for each trait and mixed-effect models were used to estimate the mean and variance parameters required for complete 3D virtual plants. Significant differences in leaf geometry (petiole length, density of leaflets, and rachis curvature) and leaflet morphology (gradients of leaflet length and width) were detected between and within progenies and were modelled in order to generate populations of plants that were consistent with the observed populations. The application of mixed-effect models on allometric relationships highlighted an interesting trade-off between model accuracy and ease of defining parameters for the 3D reconstruction of plants while at the same time integrating their observed variability. Future research will be dedicated to sensitivity analyses coupling the structural model presented here with a radiative balance model in order to identify the key architectural traits involved in light interception efficiency. © The Author 2016. Published by Oxford University Press on behalf of the Society for Experimental Biology. All rights reserved. For permissions, please email: journals.permissions@oup.com.
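
    As a sketch of the statistical machinery described above, the following Python snippet fits a log-log allometric relationship with a random intercept per progeny using statsmodels' MixedLM. The data are synthetic, and the column names (progeny, rachis_len, leaflet_len) are hypothetical stand-ins for the traits measured in the study.

    ```python
    import numpy as np
    import pandas as pd
    import statsmodels.formula.api as smf

    # Synthetic stand-in data: leaflet length vs. rachis length across 6 progenies
    rng = np.random.default_rng(0)
    n = 300
    progeny = rng.integers(0, 6, n)
    rachis_len = rng.uniform(2.0, 6.0, n)          # hypothetical trait [m]
    offsets = rng.normal(0.0, 0.1, 6)              # progeny-level random intercepts
    leaflet_len = np.exp(-0.5 + offsets[progeny]
                         + 0.8 * np.log(rachis_len)
                         + rng.normal(0.0, 0.05, n))

    df = pd.DataFrame({"progeny": progeny.astype(str),
                       "rachis_len": rachis_len, "leaflet_len": leaflet_len})

    # Log-log allometry with a random intercept per progeny: the fixed slope is
    # the allometric exponent; the group variance captures inter-progeny
    # variability of the kind used to regenerate populations of virtual plants
    fit = smf.mixedlm("np.log(leaflet_len) ~ np.log(rachis_len)",
                      data=df, groups=df["progeny"]).fit()
    print(fit.summary())
    ```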

  13. A multi-dimensional model for localization of highly variable objects

    Science.gov (United States)

    Ruppertshofen, Heike; Bülow, Thomas; von Berg, Jens; Schmidt, Sarah; Beyerlein, Peter; Salah, Zein; Rose, Georg; Schramm, Hauke

    2012-02-01

    In this work, we present a new type of model for object localization, which is well suited for anatomical objects exhibiting large variability in size, shape and posture, for usage in the discriminative generalized Hough transform (DGHT). The DGHT combines the generalized Hough transform (GHT) with a discriminative training approach to automatically obtain robust and efficient models. It has been shown to be a strong tool for object localization capable of handling a rather large amount of shape variability. For some tasks, however, the variability exhibited by different occurrences of the target object becomes too large to be represented by a standard DGHT model. To be able to capture such highly variable objects, several sub-models, representing the modes of variability as seen by the DGHT, are created automatically and are arranged in a higher dimensional model. The modes of variability are identified on-the-fly during training in an unsupervised manner. Following the concept of the DGHT, the sub-models are jointly trained with respect to a minimal localization error employing the discriminative training approach. The procedure is tested on a dataset of thorax radiographs with the target to localize the clavicles. Due to different arm positions, the posture and arrangement of the target and surrounding bones differs strongly, which hampers the training of a good localization model. Employing the new model approach the localization rate improves by 13% on unseen test data compared to the standard model.

  14. Research and development of models and instruments to define, measure, and improve shared information processing with government oversight agencies. An analysis of the literature, August 1990--January 1992

    Energy Technology Data Exchange (ETDEWEB)

    1992-12-31

    This document identifies elements of sharing, plus key variables of each and their interrelationships. The document's model of sharing is intended to help management systems' users understand what sharing is and how to integrate it with information processing.

  15. Effects of Cueing and Modeling Variables in Teacher Training Systems.

    Science.gov (United States)

    Orme, Michael E. J.

    Theoretical considerations suggest that the differential effectiveness of teaching models and associated feedback procedures stems from their distinctive cueing properties. This led to the development of three treatment conditions which may be labeled "rating" (rehearsal of key discriminations), "observation" (vicarious reinforcement), and "direct…

  16. 2-D Chemical-Dynamical Modeling of Venus's Sulfur Variability

    Science.gov (United States)

    Bierson, Carver J.; Zhang, Xi

    2016-10-01

    Over the last decade, a combination of ground-based and Venus Express observations have been made of the concentrations of sulfur species in Venus's atmosphere, both above [1, 2] and below the clouds [3, 4]. These observations put constraints on both the vertical and meridional variations of the major sulfur species in Venus's atmosphere. It has also been observed that SO2 concentrations vary on timescales of both hours and years [1,4]. The spatial and temporal distribution of tracer species can arise from two processes: mutual chemical interaction and dynamical tracer transport. Previous chemical modeling of Venus's middle atmosphere has been carried out only in 1-D. We will present the first 2-D (altitude and latitude) chemical-dynamical model for Venus's middle atmosphere. The sulfur chemistry is based on the 1-D model of Zhang et al. 2012 [5]. We perform model runs over multiple Venus decades, testing two scenarios: the first with varying sulfur fluxes from below, and the second with secular dynamical perturbations in the atmosphere [6]. By comparing to Venus Express and ground-based observations, we put constraints on the dynamics of Venus's middle atmosphere. References: [1] Belyaev et al. Icarus 2012 [2] Marcq et al. Nature Geoscience, 2013 [3] Marcq et al. JGR: Planets, 2008 [4] Arney et al. JGR: Planets, 2014 [5] Zhang et al. Icarus 2012 [6] Parish et al. Icarus 2012

  17. Variable Selection for Varying-Coefficient Models with Missing Response at Random

    Institute of Scientific and Technical Information of China (English)

    Pei Xin ZHAO; Liu Gen XUE

    2011-01-01

    In this paper, we present a variable selection procedure by combining basis function approximations with penalized estimating equations for varying-coefficient models with missing response at random. With appropriate selection of the tuning parameters, we establish the consistency of the variable selection procedure and the optimal convergence rate of the regularized estimators. A simulation study is undertaken to assess the finite sample performance of the proposed variable selection procedure.

  18. THE EFFECTS OF DIFFERENT MODELS OF SWIMMING TRAINING (DEFINED IN RELATION TO ANAEROBIC THRESHOLD ON THE INCREASE OF SWIM SPEED

    Directory of Open Access Journals (Sweden)

    Dragan Krivokapić

    2007-05-01

    Full Text Available On a sample of 32 fourth-grade students of Belgrade high schools, whose physical education classes were held at the city's swimming pools, an attempt was made to evaluate the effects of two different programmes of swimming training in different intensity zones, defined relative to the anaerobic threshold. The examinees were divided into two groups of 15 and 17 participants, which did not differ significantly in terms of average time and heart frequency during the 400 m swimming test, or heart frequency and time measured after 50 m at the moment of reaching the anaerobic threshold. The first training model consisted of swimming at an intensity within the zone below the anaerobic threshold, while the second model involved occasional swimming at a higher intensity, sometimes surpassing the anaerobic threshold. The experimental programme for both sub-groups lasted 8 weeks with 3 training sessions per week, 2 of which were identical for both experimental groups; the third differed in swimming intensity, which in the first group remained in the zone below, and in the second group was occasionally in the zone above, the anaerobic threshold. The amount and duration of training were the same in both programmes. The aim of the research was to evaluate and compare the effects of the two training models, using as the basic criteria possible changes of average time and heart frequency during the 400 m swimming test, and heart frequency and time measured after 50 m at the moment of reaching the anaerobic threshold. On the basis of the statistical analysis of the obtained data, it is possible to conclude that in both experimental groups there were statistically significant changes in the average values of all the physiological variables. Although the difference in efficiency of the applied experimental programmes was not established, we can claim that both of experimental

  19. A latent class distance association model for cross-classified data with a categorical response variable.

    Science.gov (United States)

    Vera, José Fernando; de Rooij, Mark; Heiser, Willem J

    2014-11-01

    In this paper we propose a latent class distance association model for clustering in the predictor space of large contingency tables with a categorical response variable. The rows of such a table are characterized as profiles of a set of explanatory variables, while the columns represent a single outcome variable. In many cases such tables are sparse, with many zero entries, which makes traditional models problematic. By clustering the row profiles into a few specific classes and representing these together with the categories of the response variable in a low-dimensional Euclidean space using a distance association model, a parsimonious prediction model can be obtained. A generalized EM algorithm is proposed to estimate the model parameters and the adjusted Bayesian information criterion statistic is employed to test the number of mixture components and the dimensionality of the representation. An empirical example highlighting the advantages of the new approach and comparing it with traditional approaches is presented.

  20. Partitioning the impacts of spatial and climatological rainfall variability in urban drainage modeling

    Science.gov (United States)

    Peleg, Nadav; Blumensaat, Frank; Molnar, Peter; Fatichi, Simone; Burlando, Paolo

    2017-03-01

    The performance of urban drainage systems is typically examined using hydrological and hydrodynamic models where rainfall input is uniformly distributed, i.e., derived from a single or very few rain gauges. When models are fed with a single uniformly distributed rainfall realization, the response of the urban drainage system to the rainfall variability remains unexplored. The goal of this study was to understand how climate variability and spatial rainfall variability, jointly or individually considered, affect the response of a calibrated hydrodynamic urban drainage model. A stochastic spatially distributed rainfall generator (STREAP - Space-Time Realizations of Areal Precipitation) was used to simulate many realizations of rainfall for a 30-year period, accounting for both climate variability and spatial rainfall variability. The generated rainfall ensemble was used as input into a calibrated hydrodynamic model (EPA SWMM - the US EPA's Storm Water Management Model) to simulate surface runoff and channel flow in a small urban catchment in the city of Lucerne, Switzerland. The variability of peak flows in response to rainfall of different return periods was evaluated at three different locations in the urban drainage network and partitioned among its sources. The main contribution to the total flow variability was found to originate from the natural climate variability (on average over 74 %). In addition, the relative contribution of the spatial rainfall variability to the total flow variability was found to increase with longer return periods. This suggests that while the use of spatially distributed rainfall data can supply valuable information for sewer network design (typically based on rainfall with return periods from 5 to 15 years), there is a more pronounced relevance when conducting flood risk assessments for larger return periods. The results show the importance of using multiple distributed rainfall realizations in urban hydrology studies to capture the
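
    The partitioning described above follows the law of total variance. A minimal numpy sketch, with purely illustrative numbers rather than STREAP/SWMM output, splits the variance of simulated peak flows into a between-climate-realization share and a within-climate (spatial rainfall) share:

    ```python
    import numpy as np

    # Toy nested ensemble: peak flows for 30 climate realizations, each with
    # 20 spatial-rainfall realizations (all magnitudes are illustrative)
    rng = np.random.default_rng(9)
    clim_effect = rng.normal(0.0, 3.0, size=(30, 1))                 # climate variability
    peak = 10.0 + clim_effect + rng.normal(0.0, 1.5, size=(30, 20))  # + spatial variability

    # Law of total variance: Var = Var(per-climate means) + mean(within-climate variance)
    between = np.var(peak.mean(axis=1))   # share attributable to climate variability
    within = np.mean(peak.var(axis=1))    # share attributable to spatial rainfall
    total = between + within
    print(f"climate share: {between/total:.1%}, spatial share: {within/total:.1%}")
    ```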

  1. Representing general theoretical concepts in structural equation models: The role of composite variables

    Science.gov (United States)

    Grace, J.B.; Bollen, K.A.

    2008-01-01

    Structural equation modeling (SEM) holds the promise of providing natural scientists the capacity to evaluate complex multivariate hypotheses about ecological systems. Building on its predecessors, path analysis and factor analysis, SEM allows for the incorporation of both observed and unobserved (latent) variables into theoretically-based probabilistic models. In this paper we discuss the interface between theory and data in SEM and the use of an additional variable type, the composite. In simple terms, composite variables specify the influences of collections of other variables and can be helpful in modeling heterogeneous concepts of the sort commonly of interest to ecologists. While long recognized as a potentially important element of SEM, composite variables have received very limited use, in part because of a lack of theoretical consideration, but also because of difficulties that arise in parameter estimation when using conventional solution procedures. In this paper we present a framework for discussing composites and demonstrate how the use of partially-reduced-form models can help to overcome some of the parameter estimation and evaluation problems associated with models containing composites. Diagnostic procedures for evaluating the most appropriate and effective use of composites are illustrated with an example from the ecological literature. It is argued that an ability to incorporate composite variables into structural equation models may be particularly valuable in the study of natural systems, where concepts are frequently multifaceted and the influence of suites of variables are often of interest. © Springer Science+Business Media, LLC 2007.

  2. Solar Spectral Irradiance Variability in Cycle 24: Observations and Models

    CERN Document Server

    Marchenko, S V; Lean, J L

    2016-01-01

    Utilizing the excellent stability of the Ozone Monitoring Instrument (OMI), we characterize both short-term (solar rotation) and long-term (solar cycle) changes of the solar spectral irradiance (SSI) between 265 and 500 nm during the on-going Cycle 24. We supplement the OMI data with concurrent observations from the GOME-2 and SORCE instruments and find fair-to-excellent agreement, depending on wavelength, among the observations and the predictions of the NRLSSI2 and SATIRE-S models.

  3. Modelling the $\\gamma$-ray variability of 3C 273

    CERN Document Server

    Zheng, Y G; Huang, B R; Kang, S J

    2016-01-01

    We investigate MeV-GeV $\gamma$-ray outbursts in 3C 273 in the frame of a time-dependent one-zone synchrotron self-Compton (SSC) model. In this model, electrons are accelerated to extra-relativistic energies through stochastic particle acceleration and evolve with time; nonthermal photons are produced by both synchrotron emission and inverse Compton scattering of the synchrotron photons. Nonthermal photons during a quiescent state are produced by the relativistic electrons in the steady state, and those during an outburst are produced by electrons whose injection rate is changed over some time interval. We apply the model to two exceptionally luminous $\gamma$-ray outbursts observed by the Fermi-LAT from 3C 273 in September 2009 and obtain the multi-wavelength spectra during the quiescent and outburst states, respectively. Our results show that the time-dependent properties of the outbursts can be reproduced by adopting an appropriate injection rate function for the electron population.

  4. Identifying Spatially Variable Sensitivity of Model Predictions and Calibrations

    Science.gov (United States)

    McKenna, S. A.; Hart, D. B.

    2005-12-01

    Stochastic inverse modeling provides an ensemble of stochastic property fields, each calibrated to measured steady-state and transient head data. These calibrated fields are used as input for predictions of other processes (e.g., contaminant transport, advective travel time). Use of the entire ensemble of fields transfers spatial uncertainty in hydraulic properties to uncertainty in the predicted performance measures. A sampling-based sensitivity coefficient is proposed to determine the sensitivity of the performance measures to the uncertain values of hydraulic properties at every cell in the model domain. The basis of this sensitivity coefficient is the Spearman rank correlation coefficient. Sampling-based sensitivity coefficients are demonstrated using a recent set of transmissivity (T) fields created through a stochastic inverse calibration process for the Culebra dolomite in the vicinity of the WIPP site in southeastern New Mexico. The stochastic inverse models were created using a unique approach to condition a geologically-based conceptual model of T to measured T values via a multiGaussian residual field. This field is calibrated to both steady-state and transient head data collected over an 11 year period. Maps of these sensitivity coefficients provide a means of identifying the locations in the study area to which both the value of the model calibration objective function and the predicted travel times to a regulatory boundary are most sensitive to the T and head values. These locations can be targeted for deployment of additional long-term monitoring resources. Comparison of areas where the calibration objective function and the travel time have high sensitivity shows that these are not necessarily coincident with regions of high uncertainty. The sampling-based sensitivity coefficients are compared to analytically derived sensitivity coefficients at the 99 pilot point locations. Results of the sensitivity mapping exercise are being used in combination
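
    A minimal sketch of such a sampling-based sensitivity coefficient: for each model cell, compute the Spearman rank correlation between the ensemble of property values at that cell and the corresponding ensemble of performance measures. The data below are synthetic stand-ins, not the WIPP transmissivity fields.

    ```python
    import numpy as np
    from scipy.stats import spearmanr

    # Hypothetical ensemble: 200 calibrated fields on a 50x50 grid, one
    # performance measure (e.g. travel time) per realization
    rng = np.random.default_rng(42)
    n_real, nx = 200, 50
    logT = rng.normal(-5.0, 1.0, size=(n_real, nx * nx))   # log-transmissivity per cell
    travel_time = 100.0 - 2.0 * logT[:, 1275] + rng.normal(0.0, 1.0, n_real)

    # Sampling-based sensitivity coefficient: Spearman rank correlation per cell
    sens = np.array([spearmanr(logT[:, j], travel_time)[0] for j in range(nx * nx)])
    sens = sens.reshape(nx, nx)
    print("most sensitive cell:", np.unravel_index(np.abs(sens).argmax(), sens.shape))
    ```

    Mapping sens over the grid identifies the locations where additional monitoring would most constrain the prediction, which is the use the abstract describes.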

  5. A latent class distance association model for cross-classified data with a categorical response variable

    NARCIS (Netherlands)

    Vera, J.F.; Rooij, M. de; Heiser, W.J.

    2014-01-01

    In this paper we propose a latent class distance association model for clustering in the predictor space of large contingency tables with a categorical response variable. The rows of such a table are characterized as profiles of a set of explanatory variables, while the columns represent a single outcome variable.

  6. Bayesian Methods for Analyzing Structural Equation Models with Covariates, Interaction, and Quadratic Latent Variables

    Science.gov (United States)

    Lee, Sik-Yum; Song, Xin-Yuan; Tang, Nian-Sheng

    2007-01-01

    The analysis of interaction among latent variables has received much attention. This article introduces a Bayesian approach to analyze a general structural equation model that accommodates the general nonlinear terms of latent variables and covariates. This approach produces a Bayesian estimate that has the same statistical optimal properties as a…

  7. Spatial aggregation for crop modelling at regional scales: the effects of soil variability

    Science.gov (United States)

    Coucheney, Elsa; Villa, Ana; Eckersten, Henrik; Hoffmann, Holger; Jansson, Per-Erik; Gaiser, Thomas; Ewert, Franck; Lewan, Elisabet

    2017-04-01

    Modelling agricultural production and adaptation to the environment at regional or global scales receives much interest in the context of climate change. Process-based soil-crop models describe the flows of mass (i.e. water, carbon and nitrogen) and energy in the soil-plant-atmosphere system. As such, they represent valuable tools for predicting agricultural production in diverse agro-environmental contexts as well as for assessing impacts on the environment, e.g. leaching of nitrates, changes in soil carbon content and GHG emissions. However, their application at regional and global scales for climate change impact studies raises new challenges related to model input data, calibration and evaluation. One major concern is to take into account the spatial variability of the environmental conditions (e.g. climate, soils, management practices) used as model input, because the impacts of climate change on cropping systems depend strongly on site conditions and properties (1). For example, climate change effects on yield can be either negative or positive depending on the soil type (2). Additionally, the use of different methods of upscaling and downscaling adds new sources of modelling uncertainty (3). In the present study, the effect of aggregating soil input data by area majority of soil mapping units was explored for spatially gridded simulations with the soil-vegetation model CoupModel for a region in Germany (North Rhine-Westphalia, NRW). The data aggregation effect (DAE) was analysed for wheat yield, water drainage, soil carbon mineralisation and nitrogen leaching below the root zone. DAE was higher for the soil C and N variables than for yield and drainage and was strongly related to the spatial coverage of specific soils within the study region. These 'key soils' were identified by a model sensitivity analysis of the soils present in the NRW region. The spatial aggregation of the key soils additionally influenced the DAE. Our results suggest that a spatial

  8. Statistical modeling methods to analyze the impacts of multiunit process variability on critical quality attributes of Chinese herbal medicine tablets.

    Science.gov (United States)

    Sun, Fei; Xu, Bing; Zhang, Yi; Dai, Shengyun; Yang, Chan; Cui, Xianglong; Shi, Xinyuan; Qiao, Yanjiang

    2016-01-01

    The quality of Chinese herbal medicine tablets suffers from batch-to-batch variability due to a lack of manufacturing process understanding. In this paper, the Panax notoginseng saponins (PNS) immediate release tablet was taken as the research subject. By defining the dissolution of five active pharmaceutical ingredients and the tablet tensile strength as critical quality attributes (CQAs), influences of both the manipulated process parameters introduced by an orthogonal experiment design and the intermediate granules' properties on the CQAs were fully investigated by different chemometric methods, such as the partial least squares, the orthogonal projection to latent structures, and the multiblock partial least squares (MBPLS). By analyzing the loadings plots and variable importance in the projection indexes, the granule particle sizes and the minimal punch tip separation distance in tableting were identified as critical process parameters. Additionally, the MBPLS model suggested that the lubrication time in the final blending was also important in predicting tablet quality attributes. From the calculated block importance in the projection indexes, the tableting unit was confirmed to be the critical process unit of the manufacturing line. The results demonstrated that the combinatorial use of different multivariate modeling methods could help in understanding the complex process relationships as a whole. The output of this study can then be used to define a control strategy to improve the quality of the PNS immediate release tablet.
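
    As an illustration of the variable importance in the projection (VIP) screening mentioned above, the following Python sketch fits a PLS regression on synthetic data and computes VIP scores from the fitted weights. This is a generic single-block recipe, not the authors' MBPLS workflow, and all data are synthetic.

    ```python
    import numpy as np
    from sklearn.cross_decomposition import PLSRegression

    # Toy data: 100 batches, 50 process/granule variables, 1 quality attribute
    rng = np.random.default_rng(1)
    X = rng.normal(size=(100, 50))
    y = X[:, 3] - 0.5 * X[:, 17] + rng.normal(scale=0.1, size=100)  # two relevant variables

    pls = PLSRegression(n_components=3).fit(X, y)

    def vip_scores(pls):
        """Variable Importance in Projection for a fitted PLSRegression."""
        t = pls.x_scores_       # (n, A) latent scores
        w = pls.x_weights_      # (p, A) weights
        q = pls.y_loadings_     # (1, A) y-loadings
        p, _ = w.shape
        ssy = np.sum(t ** 2, axis=0) * q.ravel() ** 2   # y-variance explained per component
        wnorm = (w / np.linalg.norm(w, axis=0)) ** 2
        return np.sqrt(p * (wnorm @ ssy) / ssy.sum())

    vip = vip_scores(pls)
    print("variables with VIP > 1:", np.where(vip > 1.0)[0])  # common importance cut-off
    ```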

  9. ROC Analysis and a Realistic Model of Heart Rate Variability

    CERN Document Server

    Thurner, S; Teich, M C; Thurner, Stefan; Feurstein, Markus C.; Teich, Malvin C.

    1998-01-01

    We have carried out a pilot study on a standard collection of electrocardiograms from patients who suffer from congestive heart failure and subjects without cardiac pathology, using receiver-operating-characteristic (ROC) analysis. The scale-dependent wavelet-coefficient standard deviation proves superior to two commonly used measures of cardiac dysfunction when the two classes of patients cannot be completely separated. A jittered integrate-and-fire model with a fractal Gaussian-noise kernel provides a realistic simulation of heartbeat sequences for both heart-failure patients and normal subjects.

  10. Temporal analysis of text data using latent variable models

    DEFF Research Database (Denmark)

    Mølgaard, Lasse Lohilahti; Larsen, Jan; Goutte, Cyril

    2009-01-01

    Detecting and tracking temporal data is an important task in multiple applications. In this paper we study temporal text mining methods for Music Information Retrieval. We compare two ways of detecting the temporal latent semantics of a corpus extracted from Wikipedia, using a stepwise...... Probabilistic Latent Semantic Analysis (PLSA) approach and a global multiway PLSA method. The analysis indicates that the global analysis method is able to identify relevant trends which are difficult to obtain using a step-by-step approach. Furthermore, we show that inspection of PLSA models with different number

  11. Solar spectral irradiance variability in cycle 24: observations and models

    Directory of Open Access Journals (Sweden)

    Marchenko Sergey V.

    2016-01-01

    Full Text Available Utilizing the excellent stability of the Ozone Monitoring Instrument (OMI), we characterize both short-term (solar rotation) and long-term (solar cycle) changes of the solar spectral irradiance (SSI) between 265 and 500 nm during the ongoing cycle 24. We supplement the OMI data with concurrent observations from the Global Ozone Monitoring Experiment-2 (GOME-2) and Solar Radiation and Climate Experiment (SORCE) instruments and find fair-to-excellent agreement, depending on wavelength, among the observations and the predictions of the Naval Research Laboratory Solar Spectral Irradiance (NRLSSI2) and Spectral And Total Irradiance REconstruction for the Satellite era (SATIRE-S) models.

  12. Surface air temperature variability in global climate models

    CERN Document Server

    Davy, Richard

    2012-01-01

    New results from the Coupled Model Intercomparison Project phase 5 (CMIP5) and multiple global reanalysis datasets are used to investigate the relationship between the mean and standard deviation of the surface air temperature. A combination of a land-sea mask and an orographic filter was used to identify the geographic regions with the strongest correlation, and in all cases these were found to be low-lying, over-land locations. This result is consistent with the expectation that differences in the effective heat capacity of the atmosphere are an important factor in determining the surface air temperature response to forcing.

  13. A semi-analytical variable property droplet combustion model

    Science.gov (United States)

    Sisti, John

    A multizone droplet burn model is developed to account for changes in the thermal and transport properties as a function of droplet radius. The formulation is semi-analytical, allowing for accurate and computationally efficient estimates of flame structure and burn rates. Zonal thermal and transport properties are computed using the Cantera software and pre-tabulated for rapid evaluation at run-time. Model predictions are compared to experimental measurements of burning n-heptane, ethanol and methanol droplets. An adaptive zone refinement algorithm is developed that minimizes the number of zones required to provide accurate estimates of burn time without excess zones. A sensitivity study of burn rate and flame stand-off with far-field oxygen concentration is conducted, with comparisons to experimental data. Overall agreement with the data is encouraging, with errors typically less than 20% for predictions of burn rates, stand-off ratio and flame temperature for the fuels considered. The quiescent quasi-steady solution is extended to a convective transient solution without the need to solve an eigenvalue problem in time. The time history of the burning droplets shows good agreement with experimental data. To further decrease computational cost, the source terms for the transient solution are linearized for an explicit time-marching solution. An error convergence study shows that a time-step-independent solution exists at a reasonable Δt.
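
    For orientation, the classical constant-property, quasi-steady limit that such multizone models generalize is the textbook d²-law; a hedged summary in LaTeX (standard form, not the paper's multizone equations):

    ```latex
    % Classical quasi-steady d^2-law: the constant-property, single-zone limit
    % of droplet burn models. K is the burn-rate constant, d_0 the initial
    % diameter, t_b the burn time, B the Spalding transfer number, k_g and
    % c_{p,g} the gas-phase conductivity and specific heat, \rho_l the liquid density.
    \[
      d^2(t) = d_0^2 - K\,t, \qquad
      t_b = \frac{d_0^2}{K}, \qquad
      K = \frac{8\,k_g}{\rho_l\,c_{p,g}} \ln\!\left(1 + B\right).
    \]
    ```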

  14. Validation of Generic Models for Variable Speed Operation Wind Turbines Following the Recent Guidelines Issued by IEC 61400-27

    Directory of Open Access Journals (Sweden)

    Andrés Honrubia-Escribano

    2016-12-01

    Full Text Available Considerable efforts are currently being made by several international working groups focused on the development of generic, also known as simplified or standard, wind turbine models for power system stability studies. In this sense, the first edition of International Electrotechnical Commission (IEC 61400-27-1, which defines generic dynamic simulation models for wind turbines, was published in February 2015. Nevertheless, the correlations of the IEC generic models with respect to specific wind turbine manufacturer models are required by the wind power industry to validate the accuracy and corresponding usability of these standard models. The present work conducts the validation of the two topologies of variable speed wind turbines that present not only the largest market share, but also the most technological advances. Specifically, the doubly-fed induction machine and the full-scale converter (FSC topology are modeled based on the IEC 61400-27-1 guidelines. The models are simulated for a wide range of voltage dips with different characteristics and wind turbine operating conditions. The simulated response of the IEC generic model is compared to the corresponding simplified model of a wind turbine manufacturer, showing a good correlation in most cases. Validation error sources are analyzed in detail, as well. In addition, this paper reviews in detail the previous work done in this field. Results suggest that wind turbine manufacturers are able to adjust the IEC generic models to represent the behavior of their specific wind turbines for power system stability analysis.

  15. Higher-Order Process Modeling: Product-Lining, Variability Modeling and Beyond

    Directory of Open Access Journals (Sweden)

    Johannes Neubauer

    2013-09-01

    Full Text Available We present a graphical and dynamic framework for binding and execution of business process models. It is tailored to integrate (1) ad hoc processes modeled graphically, (2) third-party services discovered in the Internet, and (3) (dynamically) synthesized process chains that solve situation-specific tasks, with the synthesis taking place not only at design time, but also at runtime. Key to our approach is the introduction of type-safe stacked second-order execution contexts that allow for higher-order process modeling. Tamed by our underlying strict service-oriented notion of abstraction, this approach is tailored also to be used by application experts with little technical knowledge: users can select, modify, construct and then pass (component) processes during process execution as if they were data. We illustrate the impact and essence of our framework along a concrete, realistic (business) process modeling scenario: the development of Springer's browser-based Online Conference Service (OCS). The most advanced feature of our new framework allows one to combine online synthesis with the integration of the synthesized process into the running application. This ability leads to a particularly flexible way of implementing self-adaptation, and to a particularly concise and powerful way of achieving variability not only at design time, but also at runtime.

  16. Effect of climate variables on cocoa black pod incidence in Sabah using ARIMAX model

    Science.gov (United States)

    Ling Sheng Chang, Albert; Ramba, Haya; Mohd. Jaaffar, Ahmad Kamil; Kim Phin, Chong; Chong Mun, Ho

    2016-06-01

    Cocoa black pod disease is one of the major diseases affecting cocoa production in Malaysia and around the world. Studies have shown that climate variables influence cocoa black pod disease incidence, and it is important to quantify the variation in black pod disease due to the effect of climate variables. Time series analysis, especially the auto-regressive integrated moving average (ARIMA) model, has been widely used in economic studies and can be used to quantify the effect of climate variables on black pod incidence and to forecast the right time to control the incidence. However, the ARIMA model does not capture some turning points in cocoa black pod incidence. In order to improve forecasting performance, explanatory variables such as climate variables should be included in the ARIMA model, giving an ARIMAX model. Therefore, this paper studies the effect of climate variables on cocoa black pod disease incidence using an ARIMAX model. The findings showed that an ARIMAX model using MA(1) and relative humidity at a lag of 7 days, RH(t-7), gave a better R-squared value than an ARIMA model using MA(1) alone, and could be used to forecast black pod incidence to help farmers determine the timely application of fungicide spraying and cultural practices to control the incidence.
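
    A minimal ARIMAX sketch in the spirit of the model described above, using statsmodels' SARIMAX with an MA(1) error term and a lagged-humidity exogenous regressor. The data are synthetic, and the 0.3 coefficient is an arbitrary stand-in, not the paper's estimate.

    ```python
    import numpy as np
    import pandas as pd
    from statsmodels.tsa.statespace.sarimax import SARIMAX

    # Synthetic stand-in: daily black pod incidence driven by humidity 7 days earlier
    rng = np.random.default_rng(7)
    n = 200
    rh = 80.0 + 10.0 * np.sin(np.arange(n) / 10.0) + rng.normal(0.0, 2.0, n)
    df = pd.DataFrame({"rh_lag7": pd.Series(rh).shift(7)})
    df["incidence"] = 5.0 + 0.3 * df["rh_lag7"] + rng.normal(0.0, 1.0, n)
    df = df.dropna()

    # ARIMAX: MA(1) error structure plus the lagged-humidity regressor,
    # mirroring the model class compared in the abstract
    res = SARIMAX(df["incidence"], exog=df[["rh_lag7"]],
                  order=(0, 0, 1), trend="c").fit(disp=False)
    print(res.params)  # the rh_lag7 coefficient quantifies the humidity effect
    ```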

  17. Variable Selection for Semiparametric Varying-Coefficient Partially Linear Models with Missing Response at Random

    Institute of Scientific and Technical Information of China (English)

    Pei Xin ZHAO; Liu Gen XUE

    2011-01-01

    In this paper,we present a variable selection procedure by combining basis function approximations with penalized estimating equations for semiparametric varying-coefficient partially linear models with missing response at random.The proposed procedure simultaneously selects significant variables in parametric components and nonparametric components.With appropriate selection of the tuning parameters,we establish the consistency of the variable selection procedure and the convergence rate of the regularized estimators.A simulation study is undertaken to assess the finite sample performance of the proposed variable selection procedure.

  18. Quantifying variability in earthquake rupture models using multidimensional scaling: application to the 2011 Tohoku earthquake

    KAUST Repository

    Razafindrakoto, Hoby

    2015-04-22

    Finite-fault earthquake source inversion is an ill-posed inverse problem leading to non-unique solutions. In addition, various fault parametrizations and input data may have been used by different researchers for the same earthquake. Such variability leads to large intra-event variability in the inferred rupture models. One way to understand this problem is to develop robust metrics to quantify model variability. We propose a Multi Dimensional Scaling (MDS) approach to compare rupture models quantitatively. We consider normalized squared and grey-scale metrics that reflect the variability in the location, intensity and geometry of the source parameters. We test the approach on two-dimensional random fields generated using a von Kármán autocorrelation function and varying its spectral parameters. The spread of points in the MDS solution indicates different levels of model variability. We observe that the normalized squared metric is insensitive to variability of spectral parameters, whereas the grey-scale metric is sensitive to small-scale changes in geometry. From this benchmark, we formulate a similarity scale to rank the rupture models. As case studies, we examine inverted models from the Source Inversion Validation (SIV) exercise and published models of the 2011 Mw 9.0 Tohoku earthquake, allowing us to test our approach for a case with a known reference model and one with an unknown true solution. The normalized squared and grey-scale metrics are respectively sensitive to the overall intensity and the extension of the three classes of slip (very large, large, and low). Additionally, we observe that a three-dimensional MDS configuration is preferable for models with large variability. We also find that the models for the Tohoku earthquake derived from tsunami data and their corresponding predictions cluster with a systematic deviation from other models. We demonstrate the stability of the MDS point-cloud using a number of realizations and jackknife tests, for
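
    A minimal sketch of the MDS comparison described above: compute a normalized squared dissimilarity between every pair of slip models and embed the resulting matrix with scikit-learn's MDS. The slip models below are random stand-ins, and the metric is a plain normalized squared difference rather than the authors' exact definition.

    ```python
    import numpy as np
    from sklearn.manifold import MDS

    # Toy ensemble: 10 inverted slip models on a 20x40 fault grid (random stand-ins)
    rng = np.random.default_rng(3)
    models = [rng.random((20, 40)) for _ in range(10)]

    def normalized_squared(a, b):
        # squared difference between amplitude-normalized slip distributions
        a = a / np.linalg.norm(a)
        b = b / np.linalg.norm(b)
        return np.sum((a - b) ** 2)

    n = len(models)
    D = np.zeros((n, n))
    for i in range(n):
        for j in range(i + 1, n):
            D[i, j] = D[j, i] = normalized_squared(models[i], models[j])

    # Embed the precomputed dissimilarities; the spread of the resulting
    # point cloud reflects intra-event model variability
    coords = MDS(n_components=2, dissimilarity="precomputed",
                 random_state=0).fit_transform(D)
    print(coords)
    ```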

  19. Researches on the Model of Telecommunication Service with Variable Input Tariff Rates

    Institute of Scientific and Technical Information of China (English)

    2001-01-01

    The paper develops and studies a model of a telecommunication queueing service system with variable input tariff rates, which can relieve congested traffic flows during the busy hour and thereby improve the utilization of the telecom's resources.

  20. A REVIEW OF THE ANALYSIS OF MOISTURE VARIABLES AND THE APPLICATION IN NUMERICAL MODELS

    Institute of Scientific and Technical Information of China (English)

    WANG Zai-zhi; YAN Jing-hua

    2007-01-01

    The application of explicit microphysical processes in high-resolution mesoscale numerical models makes it necessary to analyze moisture variables such as cloud water, cloud ice and rain water in order to initialize the explicitly predicted fields. The inclusion of moisture variables in the initial fields can significantly influence the overall performance of the model; it can also reduce the spin-up time and improve the model's short-term forecasting ability, since the dynamical fields become more consistent with the thermodynamic fields. Growing observational capability and the increasing abundance of data are now promoting the development of methods to analyze the moisture variables. This paper reviews several such methods and discusses the state and problems of their application in numerical models.

  1. Variable Tension, Large Deflection Ideal String Model For Transverse Motions

    CERN Document Server

    Ciblak, Namik

    2013-01-01

    In this study a new approach to the problem of transverse vibrations of an ideal string is presented. Unlike previous studies, assumptions such as constant tension, inextensibility, constant cross-sectional area, and small deformations and slopes are all removed. The main result is that, despite such relaxations in the model, not only does the final equation remain linear, but it is exactly the same equation obtained in classical treatments. First, an "infinitesimals"-based analysis, similar to historical methods, is given. However, an alternative and much stronger approach, based solely on finite quantities, is also presented. Furthermore, it is shown that the same result can also be obtained by Lagrangian mechanics, which indicates the compatibility of the original method with those based on energy and variational principles. Another interesting result is the relation between the force distribution and string displacement in static cases, which states that the force distribution per length is proportional to th...
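
    For reference, the classical equation that the abstract reports is recovered is the linear transverse wave equation, in its standard form:

    ```latex
    % Classical linear transverse wave equation, the result the relaxed model
    % is reported to reproduce (T: tension, \mu: linear mass density,
    % u(x,t): transverse displacement).
    \[
      \frac{\partial^2 u}{\partial t^2}
        = \frac{T}{\mu}\,\frac{\partial^2 u}{\partial x^2}
        = c^2\,\frac{\partial^2 u}{\partial x^2},
      \qquad c = \sqrt{T/\mu}.
    \]
    ```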

  2. Asymptomatic Alzheimer disease: Defining resilience.

    Science.gov (United States)

    Hohman, Timothy J; McLaren, Donald G; Mormino, Elizabeth C; Gifford, Katherine A; Libon, David J; Jefferson, Angela L

    2016-12-06

    To define robust resilience metrics by leveraging CSF biomarkers of Alzheimer disease (AD) pathology within a latent variable framework and to demonstrate the ability of such metrics to predict slower rates of cognitive decline and protection against diagnostic conversion. Participants with normal cognition (n = 297) and mild cognitive impairment (n = 432) were drawn from the Alzheimer's Disease Neuroimaging Initiative. Resilience metrics were defined at baseline by examining the residuals when regressing brain aging outcomes (hippocampal volume and cognition) on CSF biomarkers. A positive residual reflected better outcomes than expected for a given level of pathology (high resilience). Residuals were integrated into a latent variable model of resilience and validated by testing their ability to independently predict diagnostic conversion, cognitive decline, and the rate of ventricular dilation. Latent variables of resilience predicted a decreased risk of conversion (hazard ratio 0.02, p < 0.001), and slower rates of ventricular dilation (β < -4.7, p < 2 × 10^-15). These results were significant even when analyses were restricted to clinically normal individuals. Furthermore, resilience metrics interacted with biomarker status such that biomarker-positive individuals with low resilience showed the greatest risk of subsequent decline. Robust phenotypes of resilience calculated by leveraging AD biomarkers and baseline brain aging outcomes provide insight into which individuals are at greatest risk of short-term decline. Such comprehensive definitions of resilience are needed to further our understanding of the mechanisms that protect individuals from the clinical manifestation of AD dementia, especially among biomarker-positive individuals. © 2016 American Academy of Neurology.
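
    The residual-based definition of resilience described above is easy to sketch: regress a brain-aging outcome on a pathology biomarker and keep the residuals, with positive residuals marking better-than-expected outcomes. The snippet below uses synthetic data with hypothetical units; it covers the residual step only, not the full latent variable model.

    ```python
    import numpy as np
    from sklearn.linear_model import LinearRegression

    # Synthetic stand-in data with hypothetical units (not ADNI values)
    rng = np.random.default_rng(11)
    n = 500
    csf_marker = rng.normal(180.0, 50.0, n)                            # CSF biomarker level
    hippo_vol = 7000.0 + 8.0 * csf_marker + rng.normal(0.0, 400.0, n)  # hippocampal volume

    # Resilience as the residual of the outcome regressed on pathology:
    # a positive residual = better outcome than expected for that biomarker level
    X = csf_marker.reshape(-1, 1)
    resilience = hippo_vol - LinearRegression().fit(X, hippo_vol).predict(X)

    top_quartile = resilience > np.percentile(resilience, 75)
    print(f"{top_quartile.sum()} participants in the top resilience quartile")
    ```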

  3. Boosting model performance and interpretation by entangling preprocessing selection and variable selection.

    Science.gov (United States)

    Gerretzen, Jan; Szymańska, Ewa; Bart, Jacob; Davies, Antony N; van Manen, Henk-Jan; van den Heuvel, Edwin R; Jansen, Jeroen J; Buydens, Lutgarde M C

    2016-09-28

    The aim of data preprocessing is to remove data artifacts-such as a baseline, scatter effects or noise-and to enhance the contextually relevant information. Many preprocessing methods exist to deliver one or more of these benefits, but which method or combination of methods should be used for the specific data being analyzed is difficult to select. Recently, we have shown that a preprocessing selection approach based on Design of Experiments (DoE) enables correct selection of highly appropriate preprocessing strategies within reasonable time frames. In that approach, the focus was solely on improving the predictive performance of the chemometric model. This is, however, only one of the two relevant criteria in modeling: interpretation of the model results can be just as important. Variable selection is often used to achieve such interpretation. Data artifacts, however, may hamper proper variable selection by masking the true relevant variables. The choice of preprocessing therefore has a huge impact on the outcome of variable selection methods and may thus hamper an objective interpretation of the final model. To enhance such objective interpretation, we here integrate variable selection into the preprocessing selection approach that is based on DoE. We show that the entanglement of preprocessing selection and variable selection not only improves the interpretation, but also the predictive performance of the model. This is achieved by analyzing several experimental data sets of which the true relevant variables are available as prior knowledge. We show that a selection of variables is provided that complies more with the true informative variables compared to individual optimization of both model aspects. Importantly, the approach presented in this work is generic. Different types of models (e.g. PCR, PLS, …) can be incorporated into it, as well as different variable selection methods and different preprocessing methods, according to the taste and experience of

  4. Molecular profiling of breast cancer cell lines defines relevant tumor models and provides a resource for cancer gene discovery.

    Directory of Open Access Journals (Sweden)

    Jessica Kao

    Full Text Available BACKGROUND: Breast cancer cell lines have been used widely to investigate breast cancer pathobiology and new therapies. Breast cancer is a molecularly heterogeneous disease, and it is important to understand how well, and which, cell lines best model that diversity. In particular, microarray studies have identified molecular subtypes (luminal A, luminal B, ERBB2-associated, basal-like and normal-like) with characteristic gene-expression patterns and underlying DNA copy number alterations (CNAs). Here, we studied a collection of breast cancer cell lines to catalog molecular profiles and to assess their relation to breast cancer subtypes. METHODS: Whole-genome DNA microarrays were used to profile gene expression and CNAs in a collection of 52 widely used breast cancer cell lines, and comparisons were made to existing profiles of primary breast tumors. Hierarchical clustering was used to identify gene-expression subtypes, and Gene Set Enrichment Analysis (GSEA) to discover biological features of those subtypes. Genomic and transcriptional profiles were integrated to discover, within high-amplitude CNAs, candidate cancer genes with coordinately altered gene copy number and expression. FINDINGS: Transcriptional profiling of breast cancer cell lines identified one luminal and two basal-like (A and B) subtypes. Luminal lines displayed an estrogen receptor (ER) signature and resembled luminal-A/B tumors, basal-A lines were associated with ETS-pathway and BRCA1 signatures and resembled basal-like tumors, and basal-B lines displayed mesenchymal and stem/progenitor-cell characteristics. Compared to tumors, cell lines exhibited similar patterns of CNA, but an overall higher complexity of CNA (genetically simple luminal-A tumors were not represented), and only partial conservation of subtype-specific CNAs. We identified 80 high-level DNA amplifications and 13 multi-copy deletions, and the resident genes with concomitantly altered gene-expression, highlighting known and

  5. Modelling Inter-relationships among water, governance, human development variables in developing countries with Bayesian networks.

    Science.gov (United States)

    Dondeynaz, C.; Lopez-Puga, J.; Carmona-Moreno, C.

    2012-04-01

    Improving Water and Sanitation Services (WSS), being a complex and interdisciplinary issue, requires collaboration and coordination across different sectors (environment, health, economic activities, governance, and international cooperation). This interdependency has been recognised with the adoption of the "Integrated Water Resources Management" principles, which push for the integration of the various dimensions involved in WSS delivery to ensure efficient and sustainable management. Understanding these interrelations appears crucial for decision makers in the water sector, in particular in developing countries where WSS still represent an important lever for livelihood improvement. In this framework, the Joint Research Centre of the European Commission has developed a coherent database (the WatSan4Dev database) containing 29 indicators from environmental, socio-economic, governance and financial aid flows data, focusing on developing countries (Celine et al., 2011, under publication). The aim of this work is to model the WatSan4Dev dataset using probabilistic models to identify the key variables influencing, or influenced by, water supply and sanitation access levels. Bayesian network models are suitable for mapping the conditional dependencies between variables and also allow variables to be ordered by their level of influence on the dependent variable. Separate models have been built for water supply and for sanitation because of their different behaviour. The models are validated against statistical criteria as well as scientific knowledge and the literature. A two-step approach has been adopted to build the structure of the model: a Bayesian network is first built for each thematic cluster of variables (e.g. governance, agricultural pressure, or human development), keeping a detailed level for later interpretation. A global model is then built from the significant indicators of each previously modelled cluster. The structure of the
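
    A toy version of the cluster-then-combine Bayesian network idea, sketched with the pgmpy library on synthetic discretized indicators; the node names and edge structure are illustrative assumptions, not the WatSan4Dev model.

    ```python
    import pandas as pd
    from pgmpy.estimators import MaximumLikelihoodEstimator
    from pgmpy.models import BayesianNetwork

    # Synthetic discretized indicators (low/mid/high); names are illustrative only
    data = pd.DataFrame({
        "governance":   ["low", "high", "mid", "high", "low", "mid"] * 20,
        "human_dev":    ["low", "high", "mid", "high", "mid", "low"] * 20,
        "water_access": ["low", "high", "mid", "high", "low", "low"] * 20,
    })

    # Edges encode assumed conditional dependencies (not the paper's structure)
    bn = BayesianNetwork([("governance", "water_access"),
                          ("human_dev", "water_access")])
    bn.fit(data, estimator=MaximumLikelihoodEstimator)
    print(bn.get_cpds("water_access"))
    ```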

  6. Higher-dimensional cosmological model with variable gravitational constant and bulk viscosity in Lyra geometry

    Indian Academy of Sciences (India)

    G P Singh; R V Deshpande; T Singh

    2004-11-01

    We have studied five-dimensional homogeneous cosmological models with a variable gravitational constant and bulk viscosity in Lyra geometry. Exact solutions of the field equations have been obtained and the physical properties of the models are discussed. It is observed that the results of the new models are well within observational limits.

  7. Bi-Modal Authentication in Mobile Environments Using Session Variability Modelling

    OpenAIRE

    Motlicek, Petr; El Shafey, Laurent; Wallace, Roy; McCool, Chris; Marcel, Sébastien

    2012-01-01

    We present a state-of-the-art bi-modal authentication system for mobile environments, using session variability modelling. We examine inter-session variability modelling (ISV) and joint factor analysis (JFA) for both face and speaker authentication and evaluate our system on the largest bi-modal mobile authentication database available, the MOBIO database, with over 61 hours of audio-visual data captured by 150 people in uncontrolled environments on a mobile phone. Our system achieves 2.6% an...

  8. Sparse Modeling of Landmark and Texture Variability using the Orthomax Criterion

    DEFF Research Database (Denmark)

    Stegmann, Mikkel Bille; Sjöstrand, Karl; Larsen, Rasmus

    2006-01-01

    In the past decade, statistical shape modeling has been widely popularized in the medical image analysis community. Predominantly, principal component analysis (PCA) has been employed to model biological shape variability. Here, a reparameterization with orthogonal basis vectors is obtained...... and disease characterization. This paper explores the orthomax class of statistical methods for transforming variable loadings into a 'simple structure' which is more easily interpreted by favoring sparsity. Further, we introduce these transformations into a particular framework

  9. Potent, selective inhibitors of fibroblast growth factor receptor define fibroblast growth factor dependence in preclinical cancer models.

    Science.gov (United States)

    Squires, Matthew; Ward, George; Saxty, Gordan; Berdini, Valerio; Cleasby, Anne; King, Peter; Angibaud, Patrick; Perera, Tim; Fazal, Lynsey; Ross, Douglas; Jones, Charlotte Griffiths; Madin, Andrew; Benning, Rajdeep K; Vickerstaffe, Emma; O'Brien, Alistair; Frederickson, Martyn; Reader, Michael; Hamlett, Christopher; Batey, Michael A; Rich, Sharna; Carr, Maria; Miller, Darcey; Feltell, Ruth; Thiru, Abarna; Bethell, Susanne; Devine, Lindsay A; Graham, Brent L; Pike, Andrew; Cosme, Jose; Lewis, Edward J; Freyne, Eddy; Lyons, John; Irving, Julie; Murray, Christopher; Newell, David R; Thompson, Neil T

    2011-09-01

    We describe here the identification and characterization of 2 novel inhibitors of the fibroblast growth factor receptor (FGFR) family of receptor tyrosine kinases. The compounds exhibit selective inhibition of FGFR over the closely related VEGFR2 receptor in cell lines and in vivo. The pharmacologic profile of these inhibitors was defined using a panel of human tumor cell lines characterized for specific mutations, amplifications, or translocations known to activate one of the four FGFR receptor isoforms. This pharmacology defines a profile for inhibitors that are likely to be of use in clinical settings in disease types where FGFR is shown to play an important role.

  10. Comparing rainfall variability, model complexity and hydrological response at the intra-event scale

    Science.gov (United States)

    Cristiano, Elena; ten Veldhuis, Marie-claire; Ochoa-Rodriguez, Susana; van de Giesen, Nick

    2017-04-01

    The high variability of rainfall in space and time is one of the main factors influencing hydrological response and the generation of pluvial flooding. This phenomenon has a greater impact in urban areas, where response is usually faster and flow peaks are typically higher due to the high degree of imperviousness. Previous researchers have investigated the sensitivity of urban hydrodynamic models to rainfall space-time resolution, as well as its interactions with model structure and resolution. They showed that finding a proper match between rainfall resolution and model complexity is important and that sensitivity increases for smaller urban catchment scales. Results also showed high variability in hydrological response sensitivity, the origins of which remain poorly understood. In this work, we investigate the interaction between rainfall input variability and model structure and scale at high resolution, i.e. 1-15 minutes in time and 100 m to 3 km in space. Apart from studying summary statistics such as relative peak flow errors and the coefficient of determination, we look into characteristics of response hydrographs to find explanations for response variability in relation to catchment properties as well as storm event characteristics (e.g. storm scale and movement, single-peak versus multi-peak events). The aim is to identify general relations between storm temporal and spatial scale and catchment scale in explaining variability of hydrological response. Analyses are conducted for the Cranbrook catchment (London, UK), using 3 hydrodynamic models set up in InfoWorks ICM: a low-resolution semi-distributed (SD1) model, a high-resolution semi-distributed (SD2) model and a fully distributed (FD) model. These models represent the spatial variability of the land in different ways: the semi-distributed models divide the surface into subcatchments, each of them modelled in a lumped way (51 subcatchments for the SD1 model and 4409 subcatchments for the SD2 model), while the fully distributed

  11. Hyperspectral unmixing with spectral variability using a perturbed linear mixing model

    CERN Document Server

    Thouvenin, Pierre-Antoine; Tourneret, Jean-Yves

    2015-01-01

    Given a mixed hyperspectral data set, linear unmixing aims at estimating the reference spectral signatures composing the data - referred to as endmembers - their abundance fractions and their number. In practice, the identified endmembers can vary spectrally within a given image and can thus be construed as variable instances of reference endmembers. Ignoring this variability induces estimation errors that are propagated into the unmixing procedure. To address this issue, endmember variability estimation consists of estimating the reference spectral signatures from which the estimated endmembers have been derived as well as their variability with respect to these references. This paper introduces a new linear mixing model that explicitly accounts for spatial and spectral endmember variabilities. The parameters of this model can be estimated using an optimization algorithm based on the alternating direction method of multipliers. The performance of the proposed unmixing method is evaluated on synthetic and real data.
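
    A minimal sketch of the data equation behind this record may help fix ideas: each pixel mixes perturbed endmembers, y_n = (M + dM_n)a_n + noise. The dimensions, noise levels and the naive least-squares abundance estimate below are illustrative assumptions; the paper's actual estimator is an ADMM-based algorithm.

    ```python
    import numpy as np

    # Perturbed linear mixing model (PLMM) data equation: y_n = (M + dM_n) a_n + noise,
    # where M holds reference endmembers and dM_n is a per-pixel spectral perturbation.
    rng = np.random.default_rng(0)
    L, R, N = 50, 3, 100                         # bands, endmembers, pixels

    M = rng.uniform(0.1, 0.9, (L, R))            # reference endmember signatures
    A = rng.dirichlet(np.ones(R), size=N).T      # abundances on the unit simplex

    Y = np.empty((L, N))
    for n in range(N):
        dM = 0.02 * rng.standard_normal((L, R))  # pixel-wise endmember variability
        Y[:, n] = (M + dM) @ A[:, n] + 0.001 * rng.standard_normal(L)

    # Naive per-pixel abundance estimate that ignores the perturbation term
    # (a baseline, not the paper's ADMM estimator):
    A_hat, *_ = np.linalg.lstsq(M, Y, rcond=None)
    A_hat = np.clip(A_hat, 0.0, None)
    A_hat /= A_hat.sum(axis=0)                   # re-impose the sum-to-one constraint
    print("mean abundance error:", np.abs(A_hat - A).mean())
    ```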

  12. The spread amongst ENSEMBLES regional scenarios: regional climate models, driving general circulation models and interannual variability

    Energy Technology Data Exchange (ETDEWEB)

    Deque, M.; Somot, S. [Meteo-France, Centre National de Recherches Meteorologiques, CNRS/GAME, Toulouse Cedex 01 (France); Sanchez-Gomez, E. [Cerfacs/CNRS, SUC URA1875, Toulouse Cedex 01 (France); Goodess, C.M. [University of East Anglia, Climatic Research Unit, Norwich (United Kingdom); Jacob, D. [Max Planck Institute for Meteorology, Hamburg (Germany); Lenderink, G. [KNMI, Postbus 201, De Bilt (Netherlands); Christensen, O.B. [Danish Meteorological Institute, Copenhagen Oe (Denmark)

    2012-03-15

    Various combinations of thirteen regional climate models (RCM) and six general circulation models (GCM) were used in FP6-ENSEMBLES. The response to the SRES-A1B greenhouse gas concentration scenario over Europe, calculated as the difference between the 2021-2050 and 1961-1990 means, can be viewed as an expected value about which various uncertainties exist. Uncertainties are measured here by variance explained for temperature and precipitation changes over eight European sub-areas. Three sources of uncertainty can be evaluated from the ENSEMBLES database. Sampling uncertainty is due to the fact that the model climate is estimated as an average over a finite number of years (30) despite a non-negligible interannual variability. Regional model uncertainty is due to the fact that the RCMs use different techniques to discretize the equations and to represent sub-grid effects. Global model uncertainty is due to the fact that the RCMs have been driven by different GCMs. Two methods are presented to fill the many empty cells of the ENSEMBLES RCM x GCM matrix. The first one is based on the same approach as in FP5-PRUDENCE. The second one uses the concept of weather regimes to attempt to separate the contribution of the GCM and the RCM. The variance of the climate response is analyzed with respect to the contribution of the GCM and the RCM. The two filling methods agree that the main contributor to the spread is the choice of the GCM, except for summer precipitation where the choice of the RCM dominates the uncertainty. Of course, the contribution of the GCM to the spread varies with the region, being largest in the south-western part of Europe, whereas the continental parts are more sensitive to the choice of the RCM. The third cause of spread is systematically the interannual variability. The total uncertainty about temperature is not large enough to mask the 2021-2050 response which shows a similar pattern to the one obtained for 2071-2100 in PRUDENCE. The uncertainty
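
    The variance bookkeeping described above can be illustrated with a toy, fully filled RCM x GCM response array: average over samples to isolate model effects, then compare the variance of GCM means, RCM means, and within-cell (sampling) variance. The synthetic effect sizes below are assumptions chosen so that the GCM choice dominates, as the record reports; this is a simple ANOVA-style decomposition, not the ENSEMBLES filling methods.

    ```python
    import numpy as np

    # Toy two-way decomposition of spread in a filled RCM x GCM matrix, with
    # several 30-year samples per cell to mimic interannual sampling noise.
    rng = np.random.default_rng(1)
    n_rcm, n_gcm, n_samp = 13, 6, 5
    gcm_eff = rng.normal(0.0, 0.8, n_gcm)      # GCM choice dominates...
    rcm_eff = rng.normal(0.0, 0.3, n_rcm)      # ...RCM choice contributes less
    resp = (rcm_eff[:, None, None] + gcm_eff[None, :, None]
            + rng.normal(0.0, 0.2, (n_rcm, n_gcm, n_samp)))

    cell_mean = resp.mean(axis=2)
    var_gcm = cell_mean.mean(axis=0).var()     # variance across GCM means
    var_rcm = cell_mean.mean(axis=1).var()     # variance across RCM means
    var_samp = resp.var(axis=2).mean()         # mean within-cell (sampling) variance
    total = var_gcm + var_rcm + var_samp
    for name, v in (("GCM", var_gcm), ("RCM", var_rcm), ("sampling", var_samp)):
        print(f"{name}: {100 * v / total:.1f}% of explained variance")
    ```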

  13. Combined model of intrinsic and extrinsic variability for computational network design with application to synthetic biology.

    Directory of Open Access Journals (Sweden)

    Tina Toni

    Full Text Available Biological systems are inherently variable, with their dynamics influenced by intrinsic and extrinsic sources. These systems are often only partially characterized, with large uncertainties about specific sources of extrinsic variability and biochemical properties. Moreover, it is not yet well understood how different sources of variability combine and affect biological systems in concert. To successfully design biomedical therapies or synthetic circuits with robust performance, it is crucial to account for uncertainty and effects of variability. Here we introduce an efficient modeling and simulation framework to study systems that are simultaneously subject to multiple sources of variability, and apply it to make design decisions on small genetic networks that play the role of basic design elements of synthetic circuits. Specifically, the framework was used to explore the effect of transcriptional and post-transcriptional autoregulation on fluctuations in protein expression in simple genetic networks. We found that autoregulation could either suppress or increase the output variability, depending on specific noise sources and network parameters. We showed that transcriptional autoregulation was more successful than post-transcriptional in suppressing variability across a wide range of intrinsic and extrinsic magnitudes and sources. We derived the following design principles to guide the design of circuits that best suppress variability: (i) high protein cooperativity and low miRNA cooperativity, (ii) imperfect complementarity between miRNA and mRNA was preferred to perfect complementarity, and (iii) correlated expression of mRNA and miRNA--for example, on the same transcript--was best for suppression of protein variability. Results further showed that correlations in kinetic parameters between cells affected the ability to suppress variability, and that variability in transient states did not necessarily follow the same principles as variability in

  14. Hidden-variable models for the spin singlet: I. Non-local theories reproducing quantum mechanics

    CERN Document Server

    Di Lorenzo, Antonio

    2011-01-01

    A non-local hidden variable model reproducing the quantum mechanical probabilities for a spin singlet is presented. The non-locality is concentrated in the distribution of the hidden variables. The model otherwise satisfies both the hypothesis of outcome independence, made in the derivation of Bell inequality, and of compliance with Malus's law, made in the derivation of Leggett inequality. It is shown through the prescription of a protocol that the non-locality can be exploited to send information instantaneously provided that the hidden variables can be measured, even though they cannot be controlled.

  15. Model Predictive Control of a Nonlinear System with Known Scheduling Variable

    DEFF Research Database (Denmark)

    Mirzaei, Mahmood; Poulsen, Niels Kjølstad; Niemann, Hans Henrik

    2012-01-01

    Model predictive control (MPC) of a class of nonlinear systems is considered in this paper. We will use a Linear Parameter Varying (LPV) model of the nonlinear system. By taking advantage of having future values of the scheduling variable, we will simplify state prediction. Consequently, the control problem of the nonlinear system is simplified into a quadratic programming problem. A wind turbine is chosen as the case study and we choose wind speed as the scheduling variable. Wind speed is measurable ahead of the turbine; therefore, the scheduling variable is known for the entire prediction horizon.
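
    The reason a known scheduling sequence simplifies the problem can be sketched as follows: with A(ρ_k) and B(ρ_k) fixed in advance for the whole horizon, the stacked state prediction X = F x0 + G U is affine in the input sequence U, so a quadratic cost leads to a standard QP. The toy second-order system below is a hypothetical stand-in for the wind turbine model of the paper.

    ```python
    import numpy as np

    def A(rho):                     # hypothetical LPV system matrices,
        return np.array([[0.9 + 0.05 * rho, 0.10],   # affine in the scheduling
                         [0.0,              0.80]])  # variable rho

    def B(rho):
        return np.array([[0.0],
                         [0.5 + 0.1 * rho]])

    n, m, N = 2, 1, 10
    rho_seq = np.sin(np.linspace(0.0, 1.0, N))  # known future scheduling (wind)
    x0 = np.array([1.0, 0.0])

    F = np.zeros((N * n, n))                    # stacked prediction X = F x0 + G U
    G = np.zeros((N * n, N * m))
    for k in range(N):
        Ak, Bk = A(rho_seq[k]), B(rho_seq[k])
        prev_F = np.eye(n) if k == 0 else F[(k - 1) * n:k * n]
        F[k * n:(k + 1) * n] = Ak @ prev_F
        if k > 0:
            G[k * n:(k + 1) * n] = Ak @ G[(k - 1) * n:k * n]
        G[k * n:(k + 1) * n, k * m:(k + 1) * m] = Bk

    Q, R = np.eye(N * n), 0.1 * np.eye(N * m)   # stacked stage-cost weights
    # Unconstrained QP solution; with input/state constraints this becomes a QP.
    U_opt = -np.linalg.solve(G.T @ Q @ G + R, G.T @ Q @ F @ x0)
    print("first optimal input:", U_opt[0])
    ```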

  16. Comparison of climate envelope models developed using expert-selected variables versus statistical selection

    Science.gov (United States)

    Brandt, Laura A.; Benscoter, Allison; Harvey, Rebecca G.; Speroterra, Carolina; Bucklin, David N.; Romanach, Stephanie; Watling, James I.; Mazzotti, Frank J.

    2017-01-01

    Climate envelope models are widely used to describe potential future distribution of species under different climate change scenarios. It is broadly recognized that there are both strengths and limitations to using climate envelope models and that outcomes are sensitive to initial assumptions, inputs, and modeling methods. Selection of predictor variables, a central step in modeling, is one of the areas where different techniques can yield varying results. Selection of climate variables to use as predictors is often done using statistical approaches that develop correlations between occurrences and climate data. These approaches have received criticism in that they rely on the statistical properties of the data rather than directly incorporating biological information about species responses to temperature and precipitation. We evaluated and compared models and prediction maps for 15 threatened or endangered species in Florida based on two variable selection techniques: expert opinion and a statistical method. We compared model performance between these two approaches for contemporary predictions, and the spatial correlation, spatial overlap and area predicted for contemporary and future climate predictions. In general, experts identified more variables as being important than the statistical method, and there was low overlap in the variable sets. Models from both approaches had high performance metrics (>0.9 for area under the curve (AUC) and >0.7 for the true skill statistic (TSS)). Spatial overlap, which compares the spatial configuration between maps constructed using the different variable selection techniques, was only moderate overall (about 60%), with a great deal of variability across species. Differences in spatial overlap were even greater under future climate projections, indicating additional divergence of model outputs from different variable selection techniques. Our work is in agreement with other studies which have found that for broad-scale species distribution modeling, using

  17. The interannual variability of Africa's ecosystem productivity: a multi-model analysis

    Directory of Open Access Journals (Sweden)

    U. Weber

    2009-02-01

    Full Text Available We compare spatially explicit process-model based estimates of the terrestrial carbon balance and its components over Africa and confront them with remote sensing based proxies of vegetation productivity and atmospheric inversions of land-atmosphere net carbon exchange. Particular emphasis is on characterizing the patterns of interannual variability of carbon fluxes and analyzing the factors and processes responsible for it. For this purpose simulations with the terrestrial biosphere models ORCHIDEE, LPJ-DGVM, LPJ-Guess and JULES have been performed using a standardized modeling protocol and a uniform set of corrected climate forcing data.

    While the models differ concerning the absolute magnitude of carbon fluxes, we find several robust patterns of interannual variability among the models. Models exhibit the largest interannual variability in southern and eastern Africa, regions which are primarily covered by herbaceous vegetation. Interannual variability of the net carbon balance appears to be more strongly influenced by gross primary production than by ecosystem respiration. A principal component analysis indicates that moisture is the main driving factor of interannual gross primary production variability in those regions. On the contrary, in a large part of the inner tropics radiation appears to be limiting in two models. These patterns are partly corroborated by remotely sensed vegetation properties from the SeaWiFS satellite sensor. Inverse atmospheric modeling estimates of surface carbon fluxes are less conclusive at this point, implying the need for a denser network of observation stations over Africa.

  18. A Model of Twice-Exceptionality: Explaining and Defining the Apparent Paradoxical Combination of Disability and Giftedness in Childhood

    Science.gov (United States)

    Ronksley-Pavia, Michelle

    2015-01-01

    The literature on twice-exceptionality suggests one of the main problems facing twice-exceptional children is that there is no consensus on the definition of the terms "disability" or "giftedness" and, consequently, the term "twice-exceptional". Endeavoring to define these specific terms loops back on itself to…

  19. Evaluation of Stochastic Rainfall Models in Capturing Climate Variability for Future Drought and Flood Risk Assessment

    Science.gov (United States)

    Chowdhury, A. F. M. K.; Lockart, N.; Willgoose, G. R.; Kuczera, G. A.; Kiem, A.; Nadeeka, P. M.

    2016-12-01

    One of the key objectives of stochastic rainfall modelling is to capture the full variability of the climate system for future drought and flood risk assessment. However, it is not clear how well these models can capture future climate variability when they are calibrated to Global/Regional Climate Model (GCM/RCM) data, as these datasets are usually available only for short future periods (e.g. 20 years). This study has assessed the ability of two stochastic daily rainfall models to capture climate variability by calibrating them to a dynamically downscaled RCM dataset in an east Australian catchment for the 1990-2010, 2020-2040, and 2060-2080 epochs. The two stochastic models are: (1) a hierarchical Markov Chain (MC) model, which we developed in a previous study, and (2) a semi-parametric MC model developed by Mehrotra and Sharma (2007). Our hierarchical model uses stochastic parameters for the MC and a Gamma distribution, while the semi-parametric model uses a modified MC process with memory of past periods and kernel density estimation. This study has generated multiple realizations of rainfall series using the parameters of each model calibrated to the RCM dataset for each epoch. The generated rainfall series are used to generate synthetic streamflow with a SimHyd hydrology model. Assessing the synthetic rainfall and streamflow series, this study has found that both stochastic models can incorporate a range of variability in rainfall as well as streamflow generation for both current and future periods. However, the hierarchical model tends to overestimate the multiyear variability of wet spell lengths (and is therefore less likely to simulate long periods of drought and flood), while the semi-parametric model tends to overestimate the mean annual rainfall depths and streamflow volumes (hence simulated droughts are likely to be less severe). The sensitivity of future drought and flood risk assessment to these limitations of both stochastic models will be discussed.
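
    A minimal sketch of the simpler of the two model families may be useful: a first-order two-state (wet/dry) Markov chain for occurrence with Gamma-distributed wet-day depths. The transition probabilities and Gamma parameters below are illustrative placeholders, not values calibrated to the RCM dataset used in the study.

    ```python
    import numpy as np

    # Two-state Markov chain for daily wet/dry occurrence, Gamma depths on wet days.
    rng = np.random.default_rng(42)
    p_wd, p_ww = 0.25, 0.65          # P(wet | dry), P(wet | wet)
    shape, scale = 0.7, 12.0         # Gamma parameters for wet-day depth (mm)

    def simulate(n_days):
        wet, series = False, np.zeros(n_days)
        for t in range(n_days):
            wet = rng.random() < (p_ww if wet else p_wd)
            if wet:
                series[t] = rng.gamma(shape, scale)
        return series

    rain = simulate(365 * 30)        # one 30-year realization
    print(f"mean annual rainfall: {rain.sum() / 30:.0f} mm, "
          f"wet-day fraction: {(rain > 0).mean():.2f}")
    ```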

  1. Variable-mass Thermodynamics Calculation Model for Gas-operated Automatic Weapon

    Institute of Scientific and Technical Information of China (English)

    陈建彬; 吕小强

    2011-01-01

    Energy and mass exchange phenomena exist between the barrel and the gas-operated device of an automatic weapon. To describe its interior ballistics and the dynamic characteristics of the gas-operated device accurately, a new variable-mass thermodynamics model is built. It is used to calculate the automatic mechanism velocity of a particular automatic weapon; the calculated results agree well with the experimental results, which validates the model. The influences of structure parameters on the gas-operated device's dynamic characteristics are discussed. This shows that the model is valuable for the design and accurate performance prediction of gas-operated automatic weapons.

  2. Impact of 18-fluorodeoxyglucose positron emission tomography on computed tomography defined target volumes in radiation treatment planning of esophageal cancer: reduction in geographic misses with equal inter-observer variability: PET/CT improves esophageal target definition.

    Science.gov (United States)

    Schreurs, L M A; Busz, D M; Paardekooper, G M R M; Beukema, J C; Jager, P L; Van der Jagt, E J; van Dam, G M; Groen, H; Plukker, J Th M; Langendijk, J A

    2010-08-01

    Target volume definition in modern radiotherapy is based on planning computed tomography (CT). So far, 18-fluorodeoxyglucose positron emission tomography (FDG-PET) has not been included as a planning modality in volume definition of esophageal cancer. This study evaluates fusion of FDG-PET and CT in patients with esophageal cancer in terms of geographic misses and inter-observer variability in volume definition. In 28 esophageal cancer patients, gross tumor, clinical target and planning target volumes (GTV, CTV, PTV) were defined on planning CT by three radiation oncologists. After software-based PET/CT fusion, tumor delineations were redefined by the same radiation oncologists. Concordance indexes (CCIs) for CT- and PET/CT-based GTV, CTV and PTV were calculated for each pair of observers. Incorporation of PET/CT modified tumor delineation in 17/28 subjects (61%) in cranial and/or caudal direction. Mean concordance indexes for CT-based CTV and PTV were 72 (55-86)% and 77 (61-88)%, respectively, vs. 72 (47-99)% and 76 (54-87)% for PET/CT-based CTV and PTV. Paired analyses showed no significant difference in CCI between CT and PET/CT. Combining FDG-PET and CT may improve target volume definition with fewer geographic misses, but without significant effects on inter-observer variability in esophageal cancer.
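
    A concordance index between two observers' delineations is commonly computed as the overlap-to-union ratio of the two volumes; the sketch below applies that definition to boolean voxel masks (the exact formula used in the study may differ, and the masks here are random placeholders).

    ```python
    import numpy as np

    def concordance_index(mask_a: np.ndarray, mask_b: np.ndarray) -> float:
        # Intersection over union of two boolean delineation masks.
        inter = np.logical_and(mask_a, mask_b).sum()
        union = np.logical_or(mask_a, mask_b).sum()
        return float(inter) / float(union) if union else 1.0

    rng = np.random.default_rng(0)
    gtv_obs1 = rng.random((64, 64, 40)) > 0.6   # placeholder observer-1 GTV mask
    gtv_obs2 = rng.random((64, 64, 40)) > 0.6   # placeholder observer-2 GTV mask
    print(f"CCI: {100 * concordance_index(gtv_obs1, gtv_obs2):.0f}%")
    ```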

  3. Dimension reduction of decision variables for multireservoir operation: A spectral optimization model

    Science.gov (United States)

    Chen, Duan; Leon, Arturo S.; Gibson, Nathan L.; Hosseini, Parnian

    2016-01-01

    Optimizing the operation of a multireservoir system is challenging due to the high dimension of the decision variables that lead to a large and complex search space. A spectral optimization model (SOM), which transforms the decision variables from the time domain to the frequency domain, is proposed to reduce the dimensionality. The SOM couples a spectral dimensionality-reduction method called Karhunen-Loeve (KL) expansion within the routine of the Nondominated Sorting Genetic Algorithm (NSGA-II). The KL expansion is used to represent the decision variables as a series of terms that are deterministic orthogonal functions with undetermined coefficients. The KL expansion can be truncated to a predetermined number of significant terms and, consequently, fewer coefficients. During optimization, operators of the NSGA-II (e.g., crossover) are conducted only on the coefficients of the KL expansion rather than the large number of decision variables, significantly reducing the search space. The SOM is applied to the short-term operation of a 10-reservoir system in the Columbia River of the United States. Two scenarios are considered herein, the first with 140 decision variables and the second with 3360 decision variables. The hypervolume index is used to evaluate the optimization performance in terms of convergence and diversity. The evaluation of optimization performance is conducted for both the conventional optimization model (i.e., NSGA-II without KL) and the SOM with different numbers of KL terms. The results show that the number of decision variables can be greatly reduced in the SOM to achieve a similar or better performance compared to the conventional optimization model. For the scenario with 140 decision variables, the optimal performance of the SOM model is found with six KL terms. For the scenario with 3360 decision variables, the optimal performance of the SOM model is obtained with 11 KL terms.
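
    The core trick can be sketched in a few lines: eigenvectors of the trajectory covariance act as the deterministic orthogonal functions, and each release trajectory collapses to a handful of KL coefficients on which a GA such as NSGA-II would operate. The synthetic trajectory ensemble below is an illustrative assumption, not data from the Columbia River case study.

    ```python
    import numpy as np

    rng = np.random.default_rng(3)
    T, n_samples, n_kl = 140, 200, 6          # horizon, samples, retained KL terms

    t = np.linspace(0.0, 1.0, T)              # synthetic release trajectories
    samples = (np.sin(2 * np.pi * np.outer(rng.uniform(0.5, 2.0, n_samples), t))
               + 0.3 * rng.standard_normal((n_samples, T)))

    mean = samples.mean(axis=0)
    cov = np.cov(samples, rowvar=False)       # covariance over the time dimension
    eigval, eigvec = np.linalg.eigh(cov)      # ascending eigenvalues
    basis = eigvec[:, ::-1][:, :n_kl]         # leading KL basis functions

    coeff = (samples - mean) @ basis          # 140 decisions -> 6 coefficients
    recon = mean + coeff @ basis.T            # trajectory rebuilt from coefficients
    print("relative reconstruction error:",
          np.linalg.norm(recon - samples) / np.linalg.norm(samples))
    ```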

  4. Variable Precision Rough Set Model

    Institute of Scientific and Technical Information of China (English)

    丁国栋; 江娟

    2001-01-01

    The Pawlak rough set model is briefly described first, and a variable precision rough set model is then proposed. Based on the notion of rough membership functions, the variable precision rough set model and a rough set model based on decision theory are described in detail.
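
    A minimal sketch of the beta-approximations, assuming the usual VPRS definitions: for each equivalence class E, the rough membership of a target set X is |E ∩ X|/|E|; classes with membership >= beta form the beta-lower approximation and those with membership > 1 - beta the beta-upper one (beta = 1 recovers the classical Pawlak model). The toy universe and equivalence relation are hypothetical.

    ```python
    from itertools import groupby

    def vprs(universe, equiv_key, target, beta=0.8):
        # Variable precision rough set approximations of `target`.
        lower, upper = set(), set()
        for _, grp in groupby(sorted(universe, key=equiv_key), key=equiv_key):
            cls = set(grp)
            mu = len(cls & target) / len(cls)   # rough membership of this class
            if mu >= beta:
                lower |= cls                    # confidently inside X
            if mu > 1 - beta:
                upper |= cls                    # possibly inside X
        return lower, upper

    U = set(range(12))
    X = {0, 1, 2, 3, 4, 9}
    # Equivalence classes {0,1,2}, {3,4,5}, {6,7,8}, {9,10,11} via x // 3.
    lower, upper = vprs(U, equiv_key=lambda x: x // 3, target=X, beta=0.75)
    print("beta-lower:", sorted(lower), "beta-upper:", sorted(upper))
    ```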

  5. A comparison between latent variable models for evaluating the quality perceived from the hospital service users

    Directory of Open Access Journals (Sweden)

    Silvia Cagnone

    2007-10-01

    Full Text Available In recent years, customer satisfaction analysis has become more and more important in evaluating the service quality of the health care system. Typically the construct ‘satisfaction’ is assumed to be an unobservable, that is latent, variable. In this paper we illustrate and compare two different methods for analyzing latent variable models. The first is structural equation modeling with LISREL; the second is generalized linear latent variable modeling. The comparison is performed through an application of satisfaction analysis to a real data set referring to the patients of a hospital in Bologna. The results highlight the methodological and applicative similarities and dissimilarities between the two methods.

  6. The Impacts of the Interannual Variability of Vegetation on the Interannual Variability of Global Evapotranspiration: A Modeling Study

    Institute of Scientific and Technical Information of China (English)

    CHEN Hao; ZENG Xiao-Dong

    2012-01-01

    The impact of the interannual variability (IAV) of vegetation on the IAV of evapotranspiration is investigated with the Community Land Model (CLM3.0) and modified Dynamic Global Vegetation Model (DGVM). Two sets of 50-year off-line simulations are used in this study. The simulations begin with the same initial surface-water and heat states and are driven by the same atmospheric forcing data. The vegetation exhibits interannual variability in one simulation but not in the other simulation. However, the climatological means for the vegetation are the same. The IAV of the 50-year annual total evapotranspiration and its three partitions (ground evaporation, canopy evaporation, and transpiration) are analyzed. The global distribution of the evapotranspiration IAV and the statistics of evapotranspiration and its components in different ecosystems show that the IAV of ground evaporation is generally large in areas dominated by grass and deciduous trees, whereas the IAV of canopy evaporation and transpiration is large in areas dominated by bare soil and shrubs. For ground evaporation, canopy evaporation, and transpiration, the changes in IAV are larger than the mean state over most grasslands and shrublands. The study of two sites with the same IAV in the leaf area index (LAI) shows that the component with the smaller contribution to the total evapotranspiration is more sensitive to the IAV of vegetation. The IAV of the three components of evapotranspiration increases with the IAV of the fractional coverage (FC) and the LAI. The ground evaporation IAV shows the greatest increase, whereas the canopy evaporation shows the smallest increase.

  7. Boundary-layer turbulent processes and mesoscale variability represented by numerical weather prediction models during the BLLAST campaign

    Science.gov (United States)

    Couvreux, Fleur; Bazile, Eric; Canut, Guylaine; Seity, Yann; Lothon, Marie; Lohou, Fabienne; Guichard, Françoise; Nilsson, Erik

    2016-07-01

    This study evaluates the ability of three operational models, with resolution varying from 2.5 to 16 km, to predict the boundary-layer turbulent processes and mesoscale variability observed during the Boundary Layer Late-Afternoon and Sunset Turbulence (BLLAST) field campaign. We analyse the representation of the vertical profiles of temperature and humidity and the time evolution of near-surface atmospheric variables and the radiative and turbulent fluxes over a total of 12 intensive observing periods (IOPs), each lasting 24 h. Special attention is paid to the evolution of the turbulent kinetic energy (TKE), which was sampled by a combination of independent instruments. For the first time, this variable, a central one in the turbulence scheme used in AROME and ARPEGE, is evaluated with observations. In general, the 24 h forecasts succeed in reproducing the variability from one day to another in terms of cloud cover, temperature and boundary-layer depth. However, they exhibit some systematic biases, in particular a cold bias within the daytime boundary layer for all models. An overestimation of the sensible heat flux is noted for two points in ARPEGE and is found to be partly related to an inaccurate simplification of surface characteristics. AROME shows a moist bias within the daytime boundary layer, which is consistent with overestimated latent heat fluxes. ECMWF presents a dry bias at 2 m above the surface and also overestimates the sensible heat flux. The high-resolution model AROME resolves the vertical structures better, in particular the strong daytime inversion and the thin evening stable boundary layer. This model is also able to capture some specific observed features, such as the orographically driven subsidence and a well-defined maximum that arises during the evening of the water vapour mixing ratio in the upper part of the residual layer due to fine-scale advection. The model reproduces the order of magnitude of spatial variability observed at

  8. Study of seasonal climatology and interannual variability over India and its subregions using a regional climate model (RegCM3)

    Science.gov (United States)

    Maharana, P.; Dimri, A. P.

    2014-06-01

    The temporal and spatial variability of the various meteorological parameters over India and its different subregions is high. The Indian subcontinent is surrounded by the complex Himalayan topography in the north and vast oceans in the east, west and south. Such distributions have a dominant influence over its climate and thus make the study more complex and challenging. In the present study, the climatology and interannual variability of basic meteorological fields over India and its six homogeneous monsoon subregions (as defined by the Indian Institute of Tropical Meteorology (IITM)) for all four meteorological seasons are analysed using the Regional Climate Model Version 3 (RegCM3). A 22-year (1980-2001) simulation with RegCM3 is carried out to develop such understanding. The National Centre for Environmental Prediction/National Centre for Atmospheric Research, US (NCEP-NCAR) reanalysis 2 (NNRP2) is used as the initial and lateral boundary conditions. The main seasonal features and their variability are represented in the model simulation. The temporal variation of precipitation, i.e., the mean annual cycle, is captured over complete India and its homogeneous monsoon subregions. The model captured the contribution of seasonal precipitation to the total annual precipitation over India. The model showed variation in the precipitation contribution of some subregions to the total and seasonal precipitation over India. The correlation coefficient (CC) and the percentage difference in the coefficient of variation between model fields and the corresponding observations (COV) are calculated and compared. In most of the cases, the model could represent the magnitude but not the variability. Model processes are found to be more important in defining the variability than those in the corresponding observations. The model performs quite well over India in capturing the climatology and the meteorological processes. The model shows good skills over the relevant subregions during a

  9. Study of seasonal climatology and interannual variability over India and its subregions using a regional climate model (RegCM3)

    Indian Academy of Sciences (India)

    P Maharana; A P Dimri

    2014-07-01

    The temporal and spatial variability of the various meteorological parameters over India and its different subregions is high. The Indian subcontinent is surrounded by the complex Himalayan topography in the north and vast oceans in the east, west and south. Such distributions have a dominant influence over its climate and thus make the study more complex and challenging. In the present study, the climatology and interannual variability of basic meteorological fields over India and its six homogeneous monsoon subregions (as defined by the Indian Institute of Tropical Meteorology (IITM)) for all four meteorological seasons are analysed using the Regional Climate Model Version 3 (RegCM3). A 22-year (1980–2001) simulation with RegCM3 is carried out to develop such understanding. The National Centre for Environmental Prediction/National Centre for Atmospheric Research, US (NCEP-NCAR) reanalysis 2 (NNRP2) is used as the initial and lateral boundary conditions. The main seasonal features and their variability are represented in the model simulation. The temporal variation of precipitation, i.e., the mean annual cycle, is captured over complete India and its homogeneous monsoon subregions. The model captured the contribution of seasonal precipitation to the total annual precipitation over India. The model showed variation in the precipitation contribution of some subregions to the total and seasonal precipitation over India. The correlation coefficient (CC) and the percentage difference in the coefficient of variation between model fields and the corresponding observations (COV) are calculated and compared. In most of the cases, the model could represent the magnitude but not the variability. Model processes are found to be more important in defining the variability than those in the corresponding observations. The model performs quite well over India in capturing the climatology and the meteorological processes. The model shows good skills over the relevant subregions during a

  10. A simple model for the spatially-variable coastal response to hurricanes

    Science.gov (United States)

    Stockdon, H.F.; Sallenger, A.H.; Holman, R.A.; Howd, P.A.

    2007-01-01

    The vulnerability of a beach to extreme coastal change during a hurricane can be estimated by comparing the relative elevations of storm-induced water levels to those of the dune or berm. A simple model that defines the coastal response based on these elevations was used to hindcast the potential impact regime along a 50-km stretch of the North Carolina coast to the landfalls of Hurricane Bonnie on August 27, 1998, and Hurricane Floyd on September 16, 1999. Maximum total water levels at the shoreline were calculated as the sum of modeled storm surge, astronomical tide, and wave runup, estimated from offshore wave conditions and the local beach slope using an empirical parameterization. Storm surge and wave runup each accounted for ∼ 48% of the signal (the remaining 4% is attributed to astronomical tides), indicating that wave-driven processes are a significant contributor to hurricane-induced water levels. Expected water levels and lidar-derived measures of pre-storm dune and berm elevation were used to predict the spatially-varying storm-impact regime: swash, collision, or overwash. Predictions were compared to the observed response quantified using a lidar topography survey collected following hurricane landfall. The storm-averaged mean accuracy of the model in predicting the observed impact regime was 55.4%, a significant improvement over the 33.3% accuracy associated with random chance. Model sensitivity varied between regimes and was highest within the overwash regime where the accuracies were 84.2% and 89.7% for Hurricanes Bonnie and Floyd, respectively. The model not only allows for prediction of the general coastal response to storms, but also provides a framework for examining the longshore-variable magnitudes of observed coastal change. For Hurricane Bonnie, shoreline and beach volume changes within locations that experienced overwash or dune erosion were two times greater than locations where wave runup was confined to the foreshore (swash regime
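
    The regime logic reduces to a comparison of elevations, sketched below with placeholder numbers: the total water level is the sum of surge, tide and runup, and its position relative to the dune toe and crest selects swash, collision or overwash.

    ```python
    def impact_regime(surge_m, tide_m, runup_m, dune_toe_m, dune_crest_m):
        # Maximum total water level = storm surge + astronomical tide + wave runup.
        total_water_level = surge_m + tide_m + runup_m
        if total_water_level <= dune_toe_m:
            return "swash"       # runup confined to the foreshore
        if total_water_level <= dune_crest_m:
            return "collision"   # waves attack the dune face -> dune erosion
        return "overwash"        # water overtops the dune crest

    # Placeholder elevations (m above datum), not values from the study:
    print(impact_regime(surge_m=1.2, tide_m=0.5, runup_m=1.0,
                        dune_toe_m=2.0, dune_crest_m=3.5))   # -> "collision"
    ```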

  11. Research on Enterprise Annuity Fund in Defined Benefit and Defined Contribution Payment Models

    Institute of Scientific and Technical Information of China (English)

    洪娟

    2012-01-01

    Since the 1970s, there has been a great wave of reform around the world in endowment insurance and retirement systems. Countries around the world have put forward and implemented policies for establishing enterprise pension systems, and China is no exception. In the enterprise annuity system employed in China, where firms set up annuities voluntarily, there are mainly two models: the defined benefit and the defined contribution payment model. In this paper, we introduce these two types of occupational pension systems from the point of view of risk management: the defined benefit (DB) plan and the defined contribution (DC) plan. The differences between the two plans are examined, comparing the uncertainty of annuity payments under each and pointing out their relative advantages and disadvantages in different circumstances. Using quantitative analysis, we construct two econometric models, derive a reasonable replacement rate of pension assets and a contribution rate for pension security, and establish pension financial management and life-long financial analysis models. Through empirical analysis, we examine the mutual influence among government, enterprises, beneficiaries and annuity managers, seeking to maximize and balance overall and individual efficiency.
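
    The contrast between the two payment models can be sketched with toy replacement-rate arithmetic: a DB plan promises a salary-linked benefit, while a DC plan pays whatever the accumulated fund buys as an annuity. All rates and the annuity factor below are hypothetical inputs, not estimates from the paper.

    ```python
    def db_replacement_rate(accrual_rate, service_years):
        # DB promise: benefit = accrual_rate * service_years * final salary,
        # so the replacement rate is independent of investment returns.
        return accrual_rate * service_years

    def dc_replacement_rate(contrib_rate, service_years, invest_ret,
                            salary_growth, annuity_factor):
        # DC accumulation: contributions on a growing salary compound at the
        # investment return; the fund is converted to a pension via an
        # annuity factor (price of 1 per year of lifetime income).
        salary, fund = 1.0, 0.0
        for _ in range(service_years):
            fund = fund * (1 + invest_ret) + contrib_rate * salary
            salary *= 1 + salary_growth
        final_salary = salary / (1 + salary_growth)
        return fund / annuity_factor / final_salary

    print(f"DB replacement rate: {db_replacement_rate(0.015, 35):.1%}")
    print(f"DC replacement rate: "
          f"{dc_replacement_rate(0.10, 35, 0.05, 0.03, 15.0):.1%}")
    ```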

  12. Incorporating Latent Variables into Discrete Choice Models - A Simultaneous Estimation Approach Using SEM Software

    Directory of Open Access Journals (Sweden)

    Dirk Temme

    2008-12-01

    Full Text Available Integrated choice and latent variable (ICLV) models represent a promising new class of models which merge classic choice models with the structural equation approach (SEM) for latent variables. Despite their conceptual appeal, applications of ICLV models in marketing remain rare. We extend previous ICLV applications by first estimating a multinomial choice model and, second, by estimating hierarchical relations between latent variables. An empirical study on travel mode choice clearly demonstrates the value of ICLV models to enhance the understanding of choice processes. In addition to the usually studied directly observable variables such as travel time, we show how abstract motivations such as power and hedonism as well as attitudes such as a desire for flexibility impact on travel mode choice. Furthermore, we show that it is possible to estimate such a complex ICLV model with the widely available structural equation modeling package Mplus. This finding is likely to encourage more widespread application of this appealing model class in the marketing field.

  13. Physical Ability-Task Performance Models: Assessing the Risk of Omitted Variable Bias

    Science.gov (United States)

    2008-09-15


  14. Beyond a Climate-Centric View of Plant Distribution: Edaphic Variables Add Value to Distribution Models

    Science.gov (United States)

    Beauregard, Frieda; de Blois, Sylvie

    2014-01-01

    Both climatic and edaphic conditions determine plant distribution; however, many species distribution models do not include edaphic variables, especially over large geographical extents. Using an exceptional database of vegetation plots (n = 4839) covering an extent of ∼55,000 km2, we tested whether the inclusion of fine scale edaphic variables would improve model predictions of plant distribution compared to models using only climate predictors. We also tested how well these edaphic variables could predict distribution on their own, to evaluate the assumption that at large extents, distribution is governed largely by climate. We also hypothesized that the relative contribution of edaphic and climatic data would vary among species depending on their growth forms and biogeographical attributes within the study area. We modelled 128 native plant species from diverse taxa using four statistical model types and three sets of abiotic predictors: climate, edaphic, and edaphic-climate. Model predictive accuracy and variable importance were compared among these models and for species' characteristics describing growth form, range boundaries within the study area, and prevalence. For many species both the climate-only and edaphic-only models performed well; however, the edaphic-climate models generally performed best. The three sets of predictors differed in the spatial information provided about habitat suitability, with climate models able to distinguish range edges, but edaphic models able to better distinguish within-range variation. Model predictive accuracy was generally lower for species without a range boundary within the study area and for common species, but these effects were buffered by including both edaphic and climatic predictors. The relative importance of edaphic and climatic variables varied with growth forms, with trees being more related to climate whereas lower growth forms were more related to edaphic conditions. Our study identifies the potential for

  15. Beyond a climate-centric view of plant distribution: edaphic variables add value to distribution models.

    Directory of Open Access Journals (Sweden)

    Frieda Beauregard

    Full Text Available Both climatic and edaphic conditions determine plant distribution; however, many species distribution models do not include edaphic variables, especially over large geographical extents. Using an exceptional database of vegetation plots (n = 4839) covering an extent of ∼55,000 km2, we tested whether the inclusion of fine scale edaphic variables would improve model predictions of plant distribution compared to models using only climate predictors. We also tested how well these edaphic variables could predict distribution on their own, to evaluate the assumption that at large extents, distribution is governed largely by climate. We also hypothesized that the relative contribution of edaphic and climatic data would vary among species depending on their growth forms and biogeographical attributes within the study area. We modelled 128 native plant species from diverse taxa using four statistical model types and three sets of abiotic predictors: climate, edaphic, and edaphic-climate. Model predictive accuracy and variable importance were compared among these models and for species' characteristics describing growth form, range boundaries within the study area, and prevalence. For many species both the climate-only and edaphic-only models performed well; however, the edaphic-climate models generally performed best. The three sets of predictors differed in the spatial information provided about habitat suitability, with climate models able to distinguish range edges, but edaphic models able to better distinguish within-range variation. Model predictive accuracy was generally lower for species without a range boundary within the study area and for common species, but these effects were buffered by including both edaphic and climatic predictors. The relative importance of edaphic and climatic variables varied with growth forms, with trees being more related to climate whereas lower growth forms were more related to edaphic conditions. Our study

  16. Assessing geotechnical centrifuge modelling in addressing variably saturated flow in soil and fractured rock.

    Science.gov (United States)

    Jones, Brendon R; Brouwers, Luke B; Van Tonder, Warren D; Dippenaar, Matthys A

    2017-01-05

    The vadose zone typically comprises soil underlain by fractured rock. Often, surface water and groundwater parameters are readily available, but variably saturated flow through soil and rock is oversimplified or estimated as input for hydrological models. In this paper, a series of geotechnical centrifuge experiments are conducted to contribute to the knowledge gaps in: (i) variably saturated flow and dispersion in soil and (ii) variably saturated flow in discrete vertical and horizontal fractures. Findings from the research show that the hydraulic gradient, and not the hydraulic conductivity, is scaled for seepage flow in the geotechnical centrifuge. Furthermore, geotechnical centrifuge modelling has been proven a viable experimental tool for the modelling of hydrodynamic dispersion as well as the replication of similar flow mechanisms for unsaturated fracture flow, as previously observed in the literature. Despite the inherent challenges of modelling variable saturation in the vadose zone, the geotechnical centrifuge offers a powerful experimental tool to physically model and observe variably saturated flow. This can be used to give valuable insight into mechanisms associated with solid-fluid interaction problems under these conditions. Findings from future research can be used to validate current numerical modelling techniques and address the subsequent influence on aquifer recharge and vulnerability, contaminant transport, waste disposal, dam construction, slope stability and seepage into subsurface excavations.
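
    For orientation, commonly cited centrifuge scaling relations consistent with the stated finding (the gradient, not the conductivity, is scaled) are sketched below; the factors are the standard ones for seepage at N g and are assumptions for illustration, not values taken from the paper itself.

    ```python
    def prototype_from_model(N, length_m, conductivity_m_s, gradient, time_s):
        # Convert quantities measured in an N-g centrifuge model to prototype
        # scale, using standard seepage scaling laws (a sketch; consult the
        # paper for the exact factors adopted there).
        return {
            "length (m)": length_m * N,                # model lengths are 1/N of prototype
            "hydraulic conductivity (m/s)": conductivity_m_s,  # unchanged (same soil/fluid)
            "hydraulic gradient": gradient / N,        # model gradient is N x prototype
            "seepage velocity (m/s)": conductivity_m_s * gradient / N,  # v = k * i
            "seepage time (s)": time_s * N ** 2,       # diffusion/seepage time scales 1/N^2
        }

    print(prototype_from_model(N=50, length_m=0.3, conductivity_m_s=1e-5,
                               gradient=10.0, time_s=600.0))
    ```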

  17. A radiobiological model of radiotherapy response and its correlation with prognostic imaging variables

    Science.gov (United States)

    Crispin-Ortuzar, Mireia; Jeong, Jeho; Fontanella, Andrew N.; Deasy, Joseph O.

    2017-04-01

    Radiobiological models of tumour control probability (TCP) can be personalized using imaging data. We propose an extension to a voxel-level radiobiological TCP model in order to describe patient-specific differences and intra-tumour heterogeneity. In the proposed model, tumour shrinkage is described by means of a novel kinetic Monte Carlo method for inter-voxel cell migration and tumour deformation. The model captures the spatiotemporal evolution of the tumour at the voxel level, and is designed to take imaging data as input. To test the performance of the model, three image-derived variables found to be predictive of outcome in the literature have been identified and calculated using the model’s own parameters. Simulating multiple tumours with different initial conditions makes it possible to perform an in silico study of the correlation of these variables with the dose for 50% tumour control (TCD50) calculated by the model. We find that the three simulated variables correlate with the calculated TCD50. In addition, we find that different variables have different levels of sensitivity to the spatial distribution of hypoxia within the tumour, as well as to the dynamics of the migration mechanism. Finally, based on our results, we observe that an adequate combination of the variables may potentially result in higher predictive power.
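
    A textbook stand-in for the dose-response side of such a model is the Poisson TCP with linear-quadratic cell kill; a bisection on dose then yields TCD50. The parameters below are assumptions, and the sketch deliberately omits the voxel-level migration and shrinkage machinery of the paper.

    ```python
    import numpy as np

    # Poisson TCP: TCP(D) = exp(-N0 * SF(D)), with linear-quadratic survival
    # SF(D) = exp(-n * (alpha*d + beta*d^2)) for n fractions of size d.
    ALPHA, BETA, N0, D_FRACTION = 0.3, 0.03, 1e7, 2.0   # Gy^-1, Gy^-2, cells, Gy

    def tcp(total_dose):
        n_fractions = total_dose / D_FRACTION
        surviving = N0 * np.exp(-n_fractions * (ALPHA * D_FRACTION
                                                + BETA * D_FRACTION ** 2))
        return np.exp(-surviving)

    lo, hi = 1.0, 150.0
    while hi - lo > 1e-6:            # bisection for the dose with TCP = 0.5
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if tcp(mid) < 0.5 else (lo, mid)
    print(f"TCD50 ~ {lo:.1f} Gy")
    ```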

  18. Latent variable indirect response modeling of categorical endpoints representing change from baseline.

    Science.gov (United States)

    Hu, Chuanpu; Xu, Zhenhua; Mendelsohn, Alan M; Zhou, Honghui

    2013-02-01

    Accurate exposure-response modeling is important in drug development. Methods are still evolving in the use of mechanistic models, e.g., indirect response (IDR) models, to relate discrete endpoints, mostly of the ordered categorical form, to placebo/co-medication effect and drug exposure. When the discrete endpoint is derived using change-from-baseline measurements, a mechanistic exposure-response modeling approach requires adjustment to maintain appropriate interpretation. This manuscript describes a new modeling method that integrates a latent-variable representation of IDR models with standard logistic regression. The new method also extends to general link functions that cover probit regression or continuous clinical endpoint modeling. Compared to an earlier latent variable approach that constrained the baseline probability of response to be 0, placebo effect parameters in the new model formulation are more readily interpretable and can be separately estimated from placebo data, thus allowing convenient and robust model estimation. A general inherent connection of some latent variable representations with baseline-normalized standard IDR models is derived. For describing clinical response endpoints, Type I and Type III IDR models are shown to be equivalent; therefore, there are only three identifiable IDR models. This approach was applied to data from two phase III clinical trials of intravenously administered golimumab for the treatment of rheumatoid arthritis, where 20, 50, and 70% improvement in the American College of Rheumatology disease severity criteria were used as efficacy endpoints. Likelihood profiling and visual predictive checks showed reasonable parameter estimation precision and model performance.
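
    The latent-variable idea can be sketched as a Type I IDR equation driving a logistic link: the latent variable is integrated forward under drug inhibition, and its normalized change from baseline enters a logistic probability of achieving the endpoint. The parameter values and link coefficients below are illustrative assumptions, not estimates from the golimumab trials.

    ```python
    import numpy as np

    # Type I IDR for a latent variable L(t):
    #   dL/dt = kin * (1 - Imax * C / (IC50 + C)) - kout * L,
    # followed by a logistic link on the normalized change from baseline.
    KIN, KOUT, IMAX, IC50 = 1.0, 0.1, 0.9, 2.0
    B0, B1 = -2.0, 8.0                  # link: placebo intercept + latent effect

    def response_probability(conc, t_end=28.0, dt=0.01):
        L0 = KIN / KOUT                 # baseline steady state
        L = L0
        for _ in range(int(t_end / dt)):        # forward-Euler integration
            inhibition = IMAX * conc / (IC50 + conc)
            L += dt * (KIN * (1.0 - inhibition) - KOUT * L)
        latent_change = 1.0 - L / L0            # change from baseline in [0, 1)
        return 1.0 / (1.0 + np.exp(-(B0 + B1 * latent_change)))

    for c in (0.0, 1.0, 5.0, 20.0):             # conc = 0 gives the placebo rate
        print(f"conc {c:5.1f} -> P(response) = {response_probability(c):.2f}")
    ```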

  19. Analysis of Design Variables of Annular Linear Induction Electromagnetic Pump using an MHD Model

    Energy Technology Data Exchange (ETDEWEB)

    Kwak, Jae Sik; Kim, Hee Reyoung [Ulsan National Institute of Science and Technology, Ulsan (Korea, Republic of)

    2015-05-15

    The generated force is affected by many factors, including the electrical input, hydrodynamic flow, geometrical shape, and so on. These factors, which are the design variables of an ALIP, should be suitably analyzed to optimally design an ALIP. Analysis of the developed pressure and efficiency of the ALIP according to changes in the design variables is required for an ALIP satisfying the requirements. In this study, the design variables of the ALIP are analyzed using an ideal MHD analysis model. The developed pressure and efficiency are derived and analyzed with respect to changes in the main design variables, such as pump core length, inner core diameter, flow gap, and turns of coils.

  20. Internal and external North Atlantic Sector variability in the Kiel climate model

    Energy Technology Data Exchange (ETDEWEB)

    Latif, Mojib; Park, Wonsun; Ding, Hui; Keenlyside, Noel S. [Leibniz-Inst. fuer Meereswissenschaften, Kiel (Germany)

    2009-08-15

    The internal and external North Atlantic Sector variability is investigated by means of a multimillennial control run and forced experiments with the Kiel Climate Model (KCM). The internal variability is studied by analyzing the control run. The externally forced variability is investigated in a run with periodic millennial solar forcing and in greenhouse warming experiments with enhanced carbon dioxide concentrations. The surface air temperature (SAT) averaged over the Northern Hemisphere simulated in the control run displays enhanced variability relative to the red background at decadal, centennial, and millennial timescales. Special emphasis is given to the variability of the Meridional Overturning Circulation (MOC). The MOC plays an important role in the generation of internal climate modes. Furthermore, the MOC provides a strong negative feedback on the Northern Hemisphere SAT in both the solar and greenhouse warming experiments, thereby moderating the direct effects of the external forcing in the North Atlantic. The implications of the results for decadal predictability are discussed. (orig.)