#### Sample records for model variables defined

1. Defining a Family of Cognitive Diagnosis Models Using Log-Linear Models with Latent Variables

Science.gov (United States)

Henson, Robert A.; Templin, Jonathan L.; Willse, John T.

2009-01-01

This paper uses log-linear models with latent variables (Hagenaars, in "Loglinear Models with Latent Variables," 1993) to define a family of cognitive diagnosis models. In doing so, the relationship between many common models is explicitly defined and discussed. In addition, because the log-linear model with latent variables is a general model for…
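The record describes cognitive diagnosis models parameterized through log-linear models with latent attributes. A minimal sketch of an item response probability under such a parameterization, with illustrative λ values and a DINA-like special case (none of the numbers come from the paper):

```python
import math

def item_response_prob(alpha, lam0, main, interaction=0.0):
    """P(X = 1 | attribute pattern alpha) under a log-linear CDM:
    logit = lam0 + sum_k main[k]*alpha[k] + interaction*prod_k alpha[k].
    Parameter values are illustrative, not estimates from the paper."""
    logit = lam0 + sum(m * a for m, a in zip(main, alpha))
    prod = 1
    for a in alpha:
        prod *= a
    logit += interaction * prod
    return 1.0 / (1.0 + math.exp(-logit))

# DINA-like special case: main effects zero, only intercept + interaction,
# so only mastery of BOTH attributes raises the success probability.
p_master = item_response_prob((1, 1), lam0=-2.0, main=(0.0, 0.0), interaction=4.0)
p_nonmaster = item_response_prob((1, 0), lam0=-2.0, main=(0.0, 0.0), interaction=4.0)
```

Setting the main effects free instead recovers more general members of the family, which is the sense in which the log-linear form nests many common models.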

2. Mathematical Model Defining Volumetric Losses of Hydraulic Oil Compression in a Variable Capacity Displacement Pump

Directory of Open Access Journals (Sweden)

Paszota Zygmunt

2015-01-01

The objective of the work is to develop the capability of evaluating the volumetric losses of hydraulic oil compression in the working chambers of a high pressure variable capacity displacement pump. Volumetric losses of oil compression must be determined as functions of the same parameters on which the volumetric losses due to leakage, resulting from the quality of the design solution of the pump, are evaluated as dependent, and also as a function of the oil aeration coefficient Ɛ. A mathematical model has been developed describing the hydraulic oil compressibility coefficient klc|ΔpPi;Ɛ;ν as a relation to the ratio ΔpPi/pn of the indicated increase ΔpPi of pressure in the working chambers to the nominal pressure pn, to the pump capacity coefficient bP, to the oil aeration coefficient Ɛ, and to the ratio ν/νn of the oil viscosity ν to the reference viscosity νn. A mathematical model is presented of the volumetric losses qPvc|ΔpPi;bP;Ɛ;ν of hydraulic oil compression in the pump working chambers, in a form allowing its use in the model of power of losses and energy efficiency

3. Defining fitness in evolutionary models

2008-12-23

Dec 23, 2008 ... The analysis of evolutionary models requires an appropriate definition for fitness. ... of dimorphism for dormancy in plants (Cohen 1966). ... analyses have assumed nonoverlapping generations (i.e. no age-structure). The solution to defining fitness when the environment is spatially variable and there is a ...

4. Variable Bandwidth Analog Channel Filters for Software Defined Radio

NARCIS (Netherlands)

Arkesteijn, V.J.; Klumperink, Eric A.M.; Nauta, Bram

2001-01-01

An important aspect of Software Defined Radio is the ability to define the bandwidth of the filter that selects the desired channel. This paper first explains the importance of channel filtering. Then the advantage of analog channel filtering with a variable bandwidth in a Software Defined Radio is

5. Generalized instrumental variable models

OpenAIRE

2014-01-01

This paper develops characterizations of identified sets of structures and structural features for complete and incomplete models involving continuous or discrete variables. Multiple values of unobserved variables can be associated with particular combinations of observed variables. This can arise when there are multiple sources of heterogeneity, censored or discrete endogenous variables, or inequality restrictions on functions of observed and unobserved variables. The models g...

6. Classifying variability modeling techniques

NARCIS (Netherlands)

Sinnema, Marco; Deelstra, Sybren

Variability modeling is important for managing variability in software product families, especially during product derivation. In the past few years, several variability modeling techniques have been developed, each using its own concepts to model the variability provided by a product family. The

7. Defining fitness in evolutionary models

jgen/087/04/0339-0348. Keywords: fitness; invasion exponent; adaptive dynamics; game theory; Lyapunov exponent; invasibility; Malthusian parameter. Abstract: The analysis of evolutionary models requires an appropriate definition for fitness.
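The invasion-exponent notion of fitness in the keywords can be sketched for a structured population: the long-run growth rate of a rare mutant in a constant resident environment is the log of the dominant eigenvalue of its projection matrix. The two-stage matrix below is a hypothetical example, not taken from the paper:

```python
import numpy as np

def invasion_exponent(L):
    """Long-run growth rate (log of the dominant eigenvalue modulus) of a
    rare mutant with projection matrix L; invasion succeeds if positive."""
    eigvals = np.linalg.eigvals(L)
    return float(np.log(np.max(np.abs(eigvals))))

# Hypothetical juvenile/adult projection matrix for a mutant strategy:
# adults produce 2 juveniles, juveniles mature with prob. 0.5,
# adults survive with prob. 0.8.
L = np.array([[0.0, 2.0],
              [0.5, 0.8]])
r = invasion_exponent(L)  # r > 0: the mutant can invade the resident
```

For age-structured or stochastic environments the abstract's point applies: this eigenvalue definition must be replaced by a Lyapunov-exponent analogue.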

8. Variable importance in latent variable regression models

NARCIS (Netherlands)

Kvalheim, O.M.; Arneberg, R.; Bleie, O.; Rajalahti, T.; Smilde, A.K.; Westerhuis, J.A.

2014-01-01

The quality and practical usefulness of a regression model are a function of both interpretability and prediction performance. This work presents some new graphical tools for improved interpretation of latent variable regression models that can also assist in improved algorithms for variable

9. A Core Language for Separate Variability Modeling

DEFF Research Database (Denmark)

Iosif-Lazăr, Alexandru Florin; Wasowski, Andrzej; Schaefer, Ina

2014-01-01

Separate variability modeling adds variability to a modeling language without requiring modifications of the language or the supporting tools. We define a core language for separate variability modeling using a single kind of variation point to define transformations of software artifacts in object… models. Our language, Featherweight VML, has several distinctive features. Its architecture and operations are inspired by the recently proposed Common Variability Language (CVL). Its semantics is considerably simpler than that of CVL, while remaining confluent (unlike CVL). We simplify complex…, which makes it suitable to serve as a specification for implementations of trustworthy variant derivation. Featherweight VML offers insights into the execution of other variability modeling languages such as the Orthogonal Variability Model and Delta Modeling. To the best of our knowledge…

10. Modeling Shared Variables in VHDL

DEFF Research Database (Denmark)

1994-01-01

A set of concurrent processes communicating through shared variables is an often used model for hardware systems. This paper presents three modeling techniques for representing such shared variables in VHDL, depending on the acceptable constraints on accesses to the variables. Also a set of guide…

11. High-Q Variable Bandwidth Passive Filters for Software Defined Radio

NARCIS (Netherlands)

Arkesteijn, V.J.; Klumperink, Eric A.M.; Nauta, Bram

2001-01-01

An important aspect of Software Defined Radio is the ability to define the bandwidth of the filter that selects the desired channel. This paper describes a technique for channel filtering, in which two passive filters are combined to obtain a variable bandwidth. Passive filters have the advantage of

12. High-Q variable bandwidth passive filters for Software Defined Radio

NARCIS (Netherlands)

Arkesteijn, V.J.; Klumperink, Eric A.M.; Nauta, Bram

An important aspect of Software Defined Radio is the ability to define the bandwidth of the filter that selects the desired channel. This paper describes a technique for channel filtering, in which two passive filters are combined to obtain a variable bandwidth. Passive filters have the advantage of

13. Business Model Exploration for Software Defined Networks

NARCIS (Netherlands)

Xu, Yudi; Jansen, Slinger; España, Sergio; Zhang, Dong; Gao, Xuesong

2017-01-01

Business modeling is becoming a foundational process in the information technology industry. Many ICT companies are constructing their business models to stay competitive on the cutting edge of the technology world. However, when it comes to new technologies or emerging markets, it remains difficult

14. Stability of phenotypes defined by physiological variables and biomarkers in adults with asthma

NARCIS (Netherlands)

Kupczyk, M.; Dahlén, B.; Sterk, P. J.; Nizankowska-Mogilnicka, E.; Papi, A.; Bel, E. H.; Chanez, P.; Howarth, P. H.; Holgate, S. T.; Brusselle, G.; Siafakas, N. M.; Gjomarkaj, M.; Dahlén, S.-E.; Weersink, Els; Gaga, Mina; Papadopoulos, Nikos; Oikonomidou, Erasmia; Zervas, Eleftherios; Contoli, Marco; Pauwels, Romain A.; Joos, Guy F.; de Rudder, Isabelle; Schelfhout, Vanessa; Richter, Kai; Gerding, Daisy; Magnussen, Helgo; Samara, Katerina; Plataki, Maria; Papadopouli, Eva; Szczeklik, Andrzej; Ziolkowska-Graca, Bozena; Kania, Aleksander; Gawlewicz-Mroczka, Agnieszka; Duplaga, Mariusz; Figiel, Ewa; Rabe, Klaus F.; Gauw, Stefanie; van Veen, Ilonka; Kips, Johan C.; Johnston, Sebastian L.; Mallia, Patrick; Campbell, Deborah A.; Robinson, Douglas S.; Kanniess, Frank; Fabbri, Leo M.; Romagnoli, Micaela; Vachier, Isabelle; Devautour, Catherine; Meziane, Lahouari; Vignola, A. Maurizio; Pace, Elisabetta; Profita, Mirella; Wilson, Susan J.; Hewitt, Lorraine; Holoway, John; JM Middelveld, Roelinde; Damm, Katarina; Delin, Ingrid; Eduards, Marianne; Ek, Alexandra; Ekström, Tommy; Gaber, Flora; Gülich, Agneta; James, Anna; Johansson, Lovisa E.; Karlsson, Östen; Kumlin, Maria; Martling, Ingrid; Olsson, Marianne; Skedinger, Maria; Haque, Shushila; Hiemstra, Pieter S.

2014-01-01

Although asthma is characterized by variable airways obstruction, most studies of asthma phenotypes are cross-sectional. The stability of phenotypes defined either by biomarkers or by physiological variables was assessed by repeated measures over 1 year in the Pan-European BIOAIR cohort of adult

15. Defining generic architecture for Cloud IaaS provisioning model

NARCIS (Netherlands)

Demchenko, Y.; de Laat, C.; Mavrin, A.; Leymann, F.; Ivanov, I.; van Sinderen, M.; Shishkov, B.

2011-01-01

Infrastructure as a Service (IaaS) is one of the provisioning models for Clouds as defined in the NIST Clouds definition. Although widely used, current IaaS implementations and solutions don't have a common and well-defined architecture model. The paper attempts to define a generic architecture for

16. Defining Generic Architecture for Cloud Infrastructure as a Service model

NARCIS (Netherlands)

Demchenko, Y.; de Laat, C.

2011-01-01

Infrastructure as a Service (IaaS) is one of the provisioning models for Clouds as defined in the NIST Clouds definition. Although widely used, current IaaS implementations and solutions don't have a common and well-defined architecture model. The paper attempts to define a generic architecture for

17. MODELING SUPPLY CHAIN PERFORMANCE VARIABLES

Directory of Open Access Journals (Sweden)

Ashish Agarwal

2005-01-01

In order to understand the dynamic behavior of the variables that can play a major role in performance improvement in a supply chain, a System Dynamics-based model is proposed. The model provides an effective framework for analyzing different variables affecting supply chain performance. A causal relationship among the different variables has been identified. Variables emanating from performance measures such as gaps in customer satisfaction, cost minimization, lead-time reduction, service level improvement and quality improvement have been identified as goal-seeking loops. The proposed System Dynamics-based model analyzes the effect of the dynamic behavior of variables over a period of 10 years on the performance of a case supply chain in the auto business.
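The goal-seeking loops mentioned in the abstract are a standard system-dynamics construct; a minimal sketch of one such loop, with a performance gap driving corrective action (the target, rates and horizon are illustrative, not values from the paper's model):

```python
def simulate_gap(target, initial, adjust_rate, years, dt=0.25):
    """First-order goal-seeking loop: the gap (target - level) drives
    corrective action at adjust_rate per year; Euler steps of size dt."""
    level, history = initial, []
    steps = int(years / dt)
    for _ in range(steps):
        gap = target - level
        level += adjust_rate * gap * dt  # balancing (negative) feedback
        history.append(level)
    return level, history

# Hypothetical customer-satisfaction level approaching a target of 1.0
final, hist = simulate_gap(target=1.0, initial=0.4, adjust_rate=0.5, years=10)
```

The level rises monotonically toward the target, the signature behavior of a balancing loop; reinforcing loops would use the level itself, rather than the gap, as the driver.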

18. Bayesian modeling of measurement error in predictor variables

NARCIS (Netherlands)

Fox, Gerardus J.A.; Glas, Cornelis A.W.

2003-01-01

It is shown that measurement error in predictor variables can be modeled using item response theory (IRT). The predictor variables, which may be defined at any level of a hierarchical regression model, are treated as latent variables. The normal ogive model is used to describe the relation between
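The normal ogive model named in the abstract relates a latent predictor θ to a binary item response through the standard normal CDF. A minimal sketch with illustrative item parameters (the paper's full hierarchical Bayesian treatment is not reproduced here):

```python
from math import erf, sqrt

def normal_ogive(theta, a, b):
    """P(correct response) under the normal ogive IRT model: Phi(a*theta - b).
    a is the item discrimination, b the difficulty; values are illustrative."""
    z = a * theta - b
    return 0.5 * (1.0 + erf(z / sqrt(2.0)))  # Phi(z) via the error function

# At theta equal to the (scaled) difficulty, the success probability is 0.5;
# several such items jointly measure the latent predictor with error.
p = normal_ogive(theta=0.0, a=1.0, b=0.0)
```

Treating the predictor as latent and measured by such items is what lets the regression coefficients be estimated free of attenuation from measurement error.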

19. Forward and backward dynamics in implicitly defined overlapping generations models

NARCIS (Netherlands)

Gardini, L.; Hommes, C.; Tramontana, F.; de Vilder, R.

2009-01-01

In dynamic economic models derived from optimization principles, the forward equilibrium dynamics may not be uniquely defined, while the backward dynamics is well defined. We derive properties of the global forward equilibrium paths based on properties of the backward dynamics. We propose the

20. DEFINE: A Service-Oriented Dynamically Enabling Function Model

Directory of Open Access Journals (Sweden)

Tan Wei-Yi

2017-01-01

In this paper, we introduce an innovative Dynamically Enabling Function In Network Equipment (DEFINE) model to allow tenants to get network services quickly. First, DEFINE decouples an application into different functional components, and connects these function components in a reconfigurable method. Second, DEFINE provides a programmable interface to third parties, who can develop their own processing modules according to their own needs. To verify the effectiveness of this model, we set up an evaluation network with an FPGA-based OpenFlow switch prototype, and deployed several applications on it. Our results show that DEFINE has excellent flexibility and performance.

1. Concomitant variables in finite mixture models

NARCIS (Netherlands)

Wedel, M

The standard mixture model, the concomitant variable mixture model, the mixture regression model and the concomitant variable mixture regression model all enable simultaneous identification and description of groups of observations. This study reviews the different ways in which dependencies among

2. IN THE MAZE OF E-COMMERCE. ONLINE TRADE DEFINING VARIABLES IN ROMANIA

Directory of Open Access Journals (Sweden)

Erika KULCSÁR

2017-05-01

3. High Variability Is a Defining Component of Mediterranean-Climate Rivers and Their Biota

Directory of Open Access Journals (Sweden)

Núria Cid

2017-01-01

Variability in flow as a result of seasonal precipitation patterns is a defining element of streams and rivers in Mediterranean-climate regions of the world and strongly influences the biota of these unique systems. Mediterranean-climate areas include the Mediterranean Basin and parts of Australia, California, Chile, and South Africa. Mediterranean streams and rivers can experience conditions ranging from wet winters with consequent floods to severe droughts, when intermittency in otherwise perennial systems can occur. Inter-annual variation in precipitation can include multi-year droughts or consecutive wet years. Spatial variation in patterns of precipitation (rain vs. snow) combined with topographic variability leads to spatial variability in hydrologic patterns that influence populations and communities. Mediterranean streams and rivers are global biodiversity hotspots and are particularly vulnerable to human impacts. Biomonitoring, conservation efforts, and management responses to climate change require approaches that account for spatial and temporal variability (both intra- and inter-annual). The importance of long-term data sets for understanding and managing these systems highlights the need for sustained and coordinated research efforts in Mediterranean-climate streams and rivers.

4. Synchronization Model for Pulsating Variables

Science.gov (United States)

Takahashi, S.; Morikawa, M.

2013-12-01

A simple model is proposed, which describes the variety of stellar pulsations. In this model, a star is described as an integration of independent elements which interact with each other. This interaction, which may be gravitational or hydrodynamic, promotes the synchronization of elements to yield a coherent mean field pulsation provided some conditions are satisfied. In the case of opacity driven pulsations, the whole star is described as a coupling of many heat engines. In the case of stochastic oscillation, the whole star is described as a coupling of convection cells, interacting through their flow patterns. Convection cells are described by the Lorenz model. In both models, interactions of elements lead to various pulsations, from irregular to regular. The coupled Lorenz model also describes a light curve which shows a semi-regular variability and also shows a low-frequency enhancement proportional to 1/f in its power spectrum. This is in agreement with observations (Kiss et al. 2006). This new modeling method of ‘coupled elements’ may provide a powerful description for a variety of stellar pulsations.
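The 'coupled elements' idea can be illustrated with two diffusively coupled Lorenz systems, which synchronize when the coupling is strong enough. This is a generic demonstration, not the authors' specific model; the coupling strength, step size, and initial states are arbitrary:

```python
def lorenz(s, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    # Right-hand side of the Lorenz system
    x, y, z = s
    return (sigma * (y - x), x * (rho - z) - y, x * y - beta * z)

def step_coupled(s1, s2, k, dt):
    # One Euler step of two diffusively coupled Lorenz "elements"
    d1, d2 = lorenz(s1), lorenz(s2)
    c = [k * (b - a) for a, b in zip(s1, s2)]  # coupling pulls the states together
    s1n = tuple(x + dt * (d + ci) for x, d, ci in zip(s1, d1, c))
    s2n = tuple(x + dt * (d - ci) for x, d, ci in zip(s2, d2, c))
    return s1n, s2n

s1, s2 = (1.0, 1.0, 1.0), (-5.0, 0.0, 20.0)
for _ in range(20000):  # integrate 20 time units with dt = 0.001
    s1, s2 = step_coupled(s1, s2, k=5.0, dt=0.001)
dist = sum((a - b) ** 2 for a, b in zip(s1, s2)) ** 0.5  # near zero when synchronized
```

With the coupling removed (k = 0), the chaotic trajectories diverge instead, which is the irregular-to-regular transition the abstract alludes to.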

5. Optimization of the Actuarial Model of Defined Contribution Pension Plan

Directory of Open Access Journals (Sweden)

Yan Li

2014-01-01

The paper focuses on the actuarial models of defined contribution pension plans. Through assumptions and calculations, the expected replacement ratios of three different defined contribution pension plans are compared. In particular, the more significant factors are put forward in the further cost and risk analyses. In order to assess the current status, the paper finds a relationship between the replacement ratio and the pension investment rate using an econometric method. Based on an appropriate investment rate of 6%, an expected replacement ratio of 20% is reached.
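A minimal sketch of how a defined-contribution replacement ratio can be computed; the contribution rate, wage growth, horizon and annuity factor below are illustrative assumptions, and only the 6% investment rate echoes the abstract:

```python
def dc_replacement_ratio(contrib_rate, years, invest_rate, wage_growth, annuity_factor):
    """Expected replacement ratio of a defined-contribution plan:
    accumulated fund at retirement / (final-year wage * annuity factor).
    All parameter values are illustrative, not the paper's assumptions."""
    fund, wage = 0.0, 1.0
    for _ in range(years):
        fund = fund * (1.0 + invest_rate) + contrib_rate * wage  # grow, then contribute
        wage *= 1.0 + wage_growth
    final_wage = wage / (1.0 + wage_growth)  # wage earned in the last working year
    pension = fund / annuity_factor          # fund converted to an annual pension
    return pension / final_wage

rr = dc_replacement_ratio(contrib_rate=0.08, years=35, invest_rate=0.06,
                          wage_growth=0.03, annuity_factor=15.0)
```

The ratio is sensitive to the spread between the investment rate and wage growth, which is the relationship the paper examines econometrically.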

6. Modeling Domain Variability in Requirements Engineering with Contexts

Science.gov (United States)

Lapouchnian, Alexei; Mylopoulos, John

Various characteristics of the problem domain define the context in which the system is to operate and thus impact heavily on its requirements. However, most requirements specifications do not consider contextual properties and few modeling notations explicitly specify how domain variability affects the requirements. In this paper, we propose an approach for using contexts to model domain variability in goal models. We discuss the modeling of contexts, the specification of their effects on system goals, and the analysis of goal models with contextual variability. The approach is illustrated with a case study.

7. User-Defined Material Model for Progressive Failure Analysis

Science.gov (United States)

Knight, Norman F. Jr.; Reeder, James R. (Technical Monitor)

2006-01-01

An overview of different types of composite material system architectures and a brief review of progressive failure material modeling methods used for structural analysis including failure initiation and material degradation are presented. Different failure initiation criteria and material degradation models are described that define progressive failure formulations. These progressive failure formulations are implemented in a user-defined material model (or UMAT) for use with the ABAQUS/Standard nonlinear finite element analysis tool. The failure initiation criteria include the maximum stress criterion, the maximum strain criterion, the Tsai-Wu failure polynomial, and the Hashin criteria. The material degradation model is based on the ply-discounting approach where the local material constitutive coefficients are degraded. Applications and extensions of the progressive failure analysis material model address two-dimensional plate and shell finite elements and three-dimensional solid finite elements. Implementation details and use of the UMAT subroutine are described in the present paper. Parametric studies for composite structures are discussed to illustrate the features of the progressive failure modeling methods that have been implemented.
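Of the criteria listed, the maximum stress criterion combined with ply discounting is the simplest to sketch. The ply stresses and allowables below are illustrative, and this is not the UMAT implementation described in the paper:

```python
def max_stress_failed(stress, strength):
    """Maximum stress failure initiation for a ply in plane stress:
    any component exceeding its allowable (sign-dependent for tension vs.
    compression) triggers failure."""
    s11, s22, s12 = stress                # fiber, transverse, shear stress
    Xt, Xc, Yt, Yc, S = strength          # tension/compression/shear allowables
    return (s11 > Xt or s11 < -Xc or
            s22 > Yt or s22 < -Yc or
            abs(s12) > S)

def degrade_ply(moduli, factor=1e-3):
    """Ply-discounting degradation: knock down the stiffness of a failed ply."""
    return tuple(m * factor for m in moduli)

# Hypothetical carbon/epoxy-like ply, stresses and allowables in MPa
stress = (1600.0, 20.0, 40.0)
strength = (1500.0, 1200.0, 40.0, 200.0, 70.0)
failed = max_stress_failed(stress, strength)      # fiber stress exceeds Xt
moduli = (130e3, 10e3, 5e3)
if failed:
    moduli = degrade_ply(moduli)                   # discount the failed ply
```

In a progressive failure analysis this check-and-degrade step runs at every material point and load increment, with equilibrium re-established after each degradation.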

8. Variable impact on mortality of AIDS-defining events diagnosed during combination antiretroviral therapy

DEFF Research Database (Denmark)

Mocroft, Amanda; Sterne, Jonathan A C; Egger, Matthias

2009-01-01

BACKGROUND: The extent to which mortality differs following individual acquired immunodeficiency syndrome (AIDS)-defining events (ADEs) has not been assessed among patients initiating combination antiretroviral therapy. METHODS: We analyzed data from 31,620 patients with no prior ADEs who started… combination antiretroviral therapy. Cox proportional hazards models were used to estimate mortality hazard ratios for each ADE that occurred in >50 patients, after stratification by cohort and adjustment for sex, HIV transmission group, number of antiretroviral drugs initiated, regimen, age, date of starting… combination antiretroviral therapy, and CD4+ cell count and HIV RNA load at initiation of combination antiretroviral therapy. ADEs that occurred in

9. Variable Selection in Model-based Clustering: A General Variable Role Modeling

OpenAIRE

Maugis, Cathy; Celeux, Gilles; Martin-Magniette, Marie-Laure

2008-01-01

The currently available variable selection procedures in model-based clustering assume that the irrelevant clustering variables are all independent or are all linked with the relevant clustering variables. We propose a more versatile variable selection model which describes three possible roles for each variable: The relevant clustering variables, the irrelevant clustering variables dependent on a part of the relevant clustering variables and the irrelevant clustering variables totally indepe...

10. Inter-operator Variability in Defining Uterine Position Using Three-dimensional Ultrasound Imaging

DEFF Research Database (Denmark)

Baker, Mariwan; Jensen, Jørgen Arendt; Behrens, Claus F.

2013-01-01

to displacement by applied operator-pressure that mimics an actual GYN patient. The transabdominal scanning was performed using a 3D-US system (Clarity® Model 310C00, Elekta, Montreal, Canada). It consists of a US acquisition-station, workstation, and a 128-element 1D array curved probe. The iterated US-scans were performed in four subsequent sessions (totally 21 US-scans) in a period of four weeks to investigate the randomness of the inter-operator variability. An additional US-scan was performed as a reference target volume for the consecutive scans. At first, the phantom was marked with ball bearings…

11. An empirical model of decadal ENSO variability

Energy Technology Data Exchange (ETDEWEB)

Kravtsov, S. [University of Wisconsin-Milwaukee, Department of Mathematical Sciences, Atmospheric Sciences Group, P. O. Box 413, Milwaukee, WI (United States)

2012-11-15

12. Modeling the Variable Heliopause Location

Science.gov (United States)

Hensley, Kerry

2018-03-01

In 2012, Voyager 1 zipped across the heliopause. Five and a half years later, Voyager 2 still hasn't followed its twin into interstellar space. Can models of the heliopause location help determine why? How Far to the Heliopause? Artist's conception of the heliosphere with the important structures and boundaries labeled. [NASA/Goddard/Walt Feimer] As our solar system travels through the galaxy, the solar outflow pushes against the surrounding interstellar medium, forming a bubble called the heliosphere. The edge of this bubble, the heliopause, is the outermost boundary of our solar system, where the solar wind and the interstellar medium meet. Since the solar outflow is highly variable, the heliopause is constantly moving, with the motion driven by changes in the Sun. NASA's twin Voyager spacecraft were poised to cross the heliopause after completing their tour of the outer planets in the 1980s. In 2012, Voyager 1 registered a sharp increase in the density of interstellar particles, indicating that the spacecraft had passed out of the heliosphere and into the interstellar medium. The slower-moving Voyager 2 was set to pierce the heliopause along a different trajectory, but so far no measurements have shown that the spacecraft has bid farewell to our solar system. In a recent study, a team of scientists led by Haruichi Washimi (Kyushu University, Japan and CSPAR, University of Alabama-Huntsville) argues that models of the heliosphere can help explain this behavior. Because the heliopause location is controlled by factors that vary on many spatial and temporal scales, Washimi and collaborators turn to three-dimensional, time-dependent magnetohydrodynamics simulations of the heliosphere. In particular, they investigate how the position of the heliopause along the trajectories of Voyager 1 and Voyager 2 changes over time. Modeled location of the heliopause along the paths of Voyagers 1 (blue) and 2 (orange). The red star indicates the location at which Voyager

13. A Variable Resolution Global Spectral Model

Science.gov (United States)

Hardiker, Vivek Manohar

A conformal transformation suggested by F. Schmidt is followed to implement a global spectral model with variable horizontal resolution. A conformal mapping is defined between the real physical sphere (Earth) and a transformed (computational) sphere. The model equations are discretized on the computational sphere and the conventional spectral technique is applied to solve them. Two types of transformations are used in the present study, namely the stretching transformation and the rotation of the horizontal grid points. Application of the stretching transformation results in finer resolution along the meridional direction. The stretching is controlled by a parameter C. The rotation transformation can be used to relocate the North Pole of the model to any point on the geographic sphere. The idea is then to rotate the pole to the area of interest and refine the resolution around the new pole by applying the stretching transformation. The stretching transformation can also be applied alone, without the rotation. A T-42 spectral shallow-water model is transformed by applying the stretching transformation alone as well as the two transformations together. A conventional T-42 spectral shallow-water model is run as the control experiment and a conventional T-85 spectral shallow-water model run is treated as the benchmark (truth) solution. RMS error analysis for the geopotential field as well as the wind field is performed to evaluate the forecast made by the transformed model. It is observed that the RMS error of the transformed model is lower than that of the control run in a latitude band for the case of the stretching transformation alone, while for the total transformation (rotation followed by stretching), similar results are obtained for a rectangular domain. A multi-level global spectral model is designed from the current FSU global spectral model in order to implement the conformal transformation. The transformed T-85 model is used to study Hurricane
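One commonly cited algebraic form of the Schmidt stretching transformation acts on μ = sin(latitude) with a stretching parameter c. This particular form and its pole convention are an assumption for illustration, not quoted from the dissertation:

```python
def schmidt_stretch(mu, c):
    """Schmidt-type conformal stretching on mu = sin(latitude).

    Assumed, commonly cited form (which pole gains resolution depends on
    the convention): c = 1 is the identity, and the poles mu = +/-1 are
    fixed points of the map."""
    c2 = c * c
    return ((1.0 - c2) + (1.0 + c2) * mu) / ((1.0 + c2) + (1.0 - c2) * mu)

identity = schmidt_stretch(0.3, 1.0)  # c = 1 leaves the grid unchanged
shifted = schmidt_stretch(0.0, 2.0)   # the equator is displaced, so a uniform
                                      # computational grid clusters physical
                                      # points toward one pole
```

Because the map is conformal, the spectral transform machinery on the computational sphere is unchanged; only the mapping factors enter the model equations.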

14. Cardinality-dependent Variability in Orthogonal Variability Models

DEFF Research Database (Denmark)

Mærsk-Møller, Hans Martin; Jørgensen, Bo Nørregaard

2012-01-01

During our work on developing and running a software product line for eco-sustainable greenhouse-production software tools, which currently has three product members, we have identified a need for extending the notation of the Orthogonal Variability Model (OVM) to support what we refer...

15. Handbook of latent variable and related models

CERN Document Server

Lee, Sik-Yum

2011-01-01

This Handbook covers latent variable models, which are a flexible class of models for modeling multivariate data to explore relationships among observed and latent variables. It covers a wide class of important models; the models and statistical methods described provide tools for analyzing a wide spectrum of complicated data; it includes illustrative examples with real data sets from business, education, medicine, public health and sociology; and it demonstrates the use of a wide variety of statistical, computational, and mathematical techniques.

16. Generalized latent variable modeling multilevel, longitudinal, and structural equation models

CERN Document Server

Skrondal, Anders; Rabe-Hesketh, Sophia

2004-01-01

This book unifies and extends latent variable models, including multilevel or generalized linear mixed models, longitudinal or panel models, item response or factor models, latent class or finite mixture models, and structural equation models.

17. Spatial scale, means and gradients of hydrographic variables define pelagic seascapes of bluefin and bullet tuna spawning distribution.

Directory of Open Access Journals (Sweden)

Diego Alvarez-Berastegui

Seascape ecology is an emerging discipline focused on understanding how features of the marine habitat influence the spatial distribution of marine species. However, there is still a gap in the development of concepts and techniques for its application in the marine pelagic realm, where there are no clear boundaries delimiting habitats. Here we demonstrate that pelagic seascape metrics, defined as a combination of hydrographic variables and their spatial gradients calculated at an appropriate spatial scale, improve our ability to model pelagic fish distribution. We apply the analysis to study the spawning locations of two tuna species: Atlantic bluefin and bullet tuna. These two species represent a gradient in life history strategies. Bluefin tuna has a large body size and is a long-distance migrant, while bullet tuna has a small body size and lives year-round in coastal waters within the Mediterranean Sea. The results show that the models' performance increases significantly when the proposed seascape metrics are incorporated, compared with models that do not consider these metrics. This improvement is more important for Atlantic bluefin, whose spawning ecology is dependent on the local oceanographic scenario, than for bullet tuna, which is less influenced by the hydrographic conditions. Our study advances our understanding of how species perceive their habitat and confirms that the spatial scale at which the seascape metrics provide information is related to the spawning ecology and life history strategy of each species.
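A gradient-based seascape metric of the kind described can be sketched as the magnitude of the horizontal gradient of a hydrographic field on a regular grid; the field and grid below are hypothetical, not the paper's data:

```python
import numpy as np

def gradient_magnitude(field, dx=1.0, dy=1.0):
    """Spatial-gradient seascape metric: magnitude of the horizontal gradient
    of a hydrographic field (e.g., SST) on a regular grid; high values mark
    fronts, which such metrics are designed to capture."""
    gy, gx = np.gradient(field, dy, dx)   # derivatives along rows and columns
    return np.hypot(gx, gy)

# Hypothetical SST field rising linearly eastward on a 4 x 6 grid
sst = np.tile(np.linspace(20.0, 25.0, 6), (4, 1))
grad = gradient_magnitude(sst)
```

In practice, both the smoothed field and its gradient, computed at the spatial scale appropriate to each species, would enter the habitat model as covariates.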

18. Two-Part Models for Fractional Responses Defined as Ratios of Integers

Directory of Open Access Journals (Sweden)

Harald Oberhofer

2014-09-01

This paper discusses two alternative two-part models for fractional response variables that are defined as ratios of integers. The first two-part model assumes a Binomial distribution and known group size. It nests the one-part fractional response model proposed by Papke and Wooldridge (1996) and, thus, allows one to apply Wald, LM and/or LR tests in order to discriminate between the two models. The second model extends the first one by allowing for overdispersion in the data. We demonstrate the usefulness of the proposed two-part models for data on the 401(k) pension plan participation rates used in Papke and Wooldridge (1996).
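A two-part structure of the kind described, a zero part plus a zero-truncated Binomial part, can be sketched as a log-likelihood. Covariate indices are collapsed to scalar parameters here for brevity, which is a simplification of the paper's model:

```python
from math import comb, exp, log

def loglik_two_part(data, gamma, beta):
    """Two-part log-likelihood for s successes out of n (fraction s/n):
    Part 1: P(s = 0) = 1 - sigmoid(gamma)               (no participation)
    Part 2: s | s > 0 ~ Binomial(n, sigmoid(beta)), truncated at zero.
    gamma and beta stand in for covariate indices x'gamma and x'beta."""
    p_pos = 1.0 / (1.0 + exp(-gamma))   # probability of any participation
    q = 1.0 / (1.0 + exp(-beta))        # within-participation success prob.
    ll = 0.0
    for s, n in data:
        if s == 0:
            ll += log(1.0 - p_pos)
        else:
            binom = comb(n, s) * q**s * (1.0 - q)**(n - s)
            trunc = 1.0 - (1.0 - q)**n  # renormalize to the event s > 0
            ll += log(p_pos) + log(binom / trunc)
    return ll

# Three hypothetical groups of size 10 with 0, 3, and 10 participants
ll = loglik_two_part([(0, 10), (3, 10), (10, 10)], gamma=0.5, beta=0.0)
```

Setting the two parts to share one index recovers a one-part Binomial model, which is the nesting that makes the Wald/LM/LR comparisons possible.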

19. Defining constant versus variable phenotypic features of women with polycystic ovary syndrome using different ethnic groups and populations.

Science.gov (United States)

Welt, C K; Arason, G; Gudmundsson, J A; Adams, J; Palsdóttir, H; Gudlaugsdóttir, G; Ingadóttir, G; Crowley, W F

2006-11-01

The phenotype of women with polycystic ovary syndrome (PCOS) is variable, depending on the ethnic background. The phenotypes of women with PCOS in Iceland and Boston were compared. The study was observational with a parallel design. Subjects were studied in an outpatient setting. Women, aged 18-45 yr, with PCOS defined by hyperandrogenism and fewer than nine menses per year, were examined in Iceland (n = 105) and Boston (n = 262). PCOS subjects underwent a physical exam, fasting blood samples for androgens, gonadotropins, and metabolic parameters, and a transvaginal ultrasound. The phenotype of women with PCOS was compared between Caucasian women in Iceland and Boston and among Caucasian, African-American, Hispanic, and Asian women in Boston. Androstenedione (4.0 +/- 1.3 vs. 3.5 +/- 1.2 ng/ml) and testosterone (54.0 +/- 25.7 vs. 66.2 +/- 35.6 ng/dl) differed in Caucasian Icelandic compared with Boston women with PCOS. There were no differences in fasting blood glucose, insulin, or homeostasis model assessment in body mass index-matched Caucasian subjects from Iceland or Boston or in different ethnic groups in Boston. Polycystic ovary morphology was demonstrated in 93-100% of women with PCOS in all ethnic groups. The data demonstrate differences in the reproductive features of PCOS without differences in glucose and insulin in body mass index-matched populations. These studies also suggest that measuring androstenedione is important for the documentation of hyperandrogenism in Icelandic women. Finally, polycystic ovary morphology by ultrasound is an almost universal finding in women with PCOS as defined by hyperandrogenism and irregular menses.

20. Spatial variability and parametric uncertainty in performance assessment models

International Nuclear Information System (INIS)

Pensado, Osvaldo; Mancillas, James; Painter, Scott; Tomishima, Yasuo

2011-01-01

The problem of defining an appropriate treatment of distribution functions (which could represent spatial variability or parametric uncertainty) is examined based on a generic performance assessment model for a high-level waste repository. The generic model incorporated source term models available in GoldSim®, the TDRW code for contaminant transport in sparse fracture networks with a complex fracture-matrix interaction process, and a biosphere dose model known as BDOSE™. Using the GoldSim framework, several Monte Carlo sampling approaches and transport conceptualizations were evaluated to explore the effect of various treatments of spatial variability and parametric uncertainty on dose estimates. Results from a model employing a representative source and ensemble-averaged pathway properties were compared to results from a model allowing for stochastic variation of transport properties along streamline segments (i.e., explicit representation of spatial variability within a Monte Carlo realization). We concluded that the sampling approach and the definition of an ensemble representative do influence consequence estimates. In the examples analyzed in this paper, approaches considering limited variability of a transport resistance parameter along a streamline increased the frequency of fast pathways, resulting in relatively high dose estimates, while those allowing for broad variability along streamlines increased the frequency of 'bottlenecks', reducing dose estimates. On this basis, simplified approaches with limited consideration of variability may suffice for intended uses of the performance assessment model, such as evaluation of site safety. (author)
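
The contrast between ensemble-averaged pathway properties and explicit per-segment variability can be sketched with a toy Monte Carlo experiment. All distributions and parameters below are assumptions for illustration, not values from the GoldSim/TDRW model described above.

```python
import random
import statistics

random.seed(42)

N_REALIZATIONS = 5000
N_SEGMENTS = 10

def sample_resistance():
    # Hypothetical lognormal transport-resistance distribution (arbitrary units).
    return random.lognormvariate(0.0, 1.0)

# Approach A: ensemble-averaged pathway -- one draw applies to every segment,
# so a single "fast" draw makes the entire pathway fast.
ensemble_totals = [sample_resistance() * N_SEGMENTS for _ in range(N_REALIZATIONS)]

# Approach B: explicit spatial variability -- independent draws per segment,
# so extremes average out and high-resistance "bottleneck" segments dominate.
segment_totals = [sum(sample_resistance() for _ in range(N_SEGMENTS))
                  for _ in range(N_REALIZATIONS)]

mean_a = statistics.mean(ensemble_totals)
mean_b = statistics.mean(segment_totals)
var_a = statistics.pvariance(ensemble_totals)
var_b = statistics.pvariance(segment_totals)

# Fraction of realizations that are "fast pathways" (total resistance
# below half the mean) under each treatment of variability.
frac_fast_a = sum(t < 0.5 * mean_a for t in ensemble_totals) / N_REALIZATIONS
frac_fast_b = sum(t < 0.5 * mean_b for t in segment_totals) / N_REALIZATIONS
```

Both approaches estimate the same mean total resistance, but the ensemble-averaged treatment produces far more fast pathways, mirroring the paper's observation that limited along-streamline variability inflates high-dose outcomes.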

1. Gas permeation measurement under defined humidity via constant volume/variable pressure method

KAUST Repository

Jan Roman, Pauls

2012-02-01

Many industrial gas separations in which membrane processes are feasible entail high water vapour contents, as in CO2 separation from flue gas in carbon capture and storage (CCS), or in biogas/natural gas processing. Studying the effect of water vapour on gas permeability through polymeric membranes is essential for materials design and optimization of these membrane applications. In particular, for amine-based CO2-selective facilitated transport membranes, water vapour is necessary for carrier-complex formation (Matsuyama et al., 1996; Deng and Hägg, 2010; Liu et al., 2008; Shishatskiy et al., 2010) [1-4]. But conventional polymeric membrane materials can also vary in their permeation behaviour due to water-induced swelling (Potreck, 2009) [5]. Here we describe a simple approach to gas permeability measurement in the presence of water vapour, in the form of a modified constant volume/variable pressure method (pressure increase method). © 2011 Elsevier B.V.
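
The constant volume/variable pressure method computes permeability from the steady-state rate of pressure rise in a fixed downstream volume. The sketch below uses the standard form of the working equation; all apparatus numbers are assumed for illustration and may differ from the modified setup described in the record.

```python
# Illustrative apparatus values (assumed, not from the paper):
V_d = 30.0      # downstream volume [cm^3]
l = 100e-4      # membrane thickness [cm] (100 micrometres)
A = 10.0        # membrane area [cm^2]
p_up = 76.0     # upstream pressure [cmHg] (~1 atm)
T = 308.15      # temperature [K]
R = 0.278       # gas constant [cm^3 cmHg / (cm^3(STP) K)]
dp_dt = 1.0e-4  # steady-state downstream pressure rise [cmHg/s]

# Constant volume / variable pressure working equation:
#   P = (V_d * l) / (p_up * A * R * T) * (dp/dt)
P = (V_d * l) / (p_up * A * R * T) * dp_dt  # [cm^3(STP) cm / (cm^2 s cmHg)]

# Express in Barrer (1 Barrer = 1e-10 cm^3(STP) cm / (cm^2 s cmHg)).
P_barrer = P / 1e-10
```

With these assumed numbers the sketch yields a permeability of roughly 4.6 Barrer; under defined humidity, the same measurement is simply repeated with a controlled water-vapour activity in the feed.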

2. Integrating models that depend on variable data

Science.gov (United States)

Banks, A. T.; Hill, M. C.

2016-12-01

Models of human-Earth systems are often developed with the goal of predicting the behavior of one or more dependent variables from multiple independent variables, processes, and parameters. Often dependent variable values range over many orders of magnitude, which complicates evaluation of the fit of the dependent variable values to observations. Many metrics and optimization methods have been proposed to address dependent variable variability, with little consensus being achieved. In this work, we evaluate two such methods: log transformation (based on the dependent variable being log-normally distributed with a constant variance) and error-based weighting (based on a multi-normal distribution with variances that tend to increase as the dependent variable value increases). Error-based weighting has the advantage of encouraging model users to carefully consider data errors, such as measurement and epistemic errors, while log transformations can be a black box for typical users. Placing the log transformation into the statistical perspective of error-based weighting has not formerly been considered, to the best of our knowledge. To make the evaluation as clear and reproducible as possible, we use multiple linear regression (MLR). Simulations are conducted with MATLAB. The example represents stream transport of nitrogen with up to eight independent variables. The single dependent variable in our example has values that range over 4 orders of magnitude. Results are applicable to any problem for which individual or multiple data types produce a large range of dependent variable values. For this problem, the log transformation produced good model fit, while some formulations of error-based weighting worked poorly. Results support previous suggestions that error-based weighting derived from a constant coefficient of variation overemphasizes low values and degrades model fit to high values. Applying larger weights to the high values is inconsistent with the log transformation.
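
The claim that constant-coefficient-of-variation weighting overemphasizes low values can be illustrated with a short sketch on synthetic data spanning four orders of magnitude, as in the nitrogen example above (the data and thresholds are assumptions, not the study's values).

```python
import math
import random

random.seed(7)

# Synthetic dependent variable spanning ~4 orders of magnitude.
y = [10 ** random.uniform(0, 4) for _ in range(1000)]

# Error-based weighting with a constant coefficient of variation:
# sigma_i = cv * y_i  =>  weight_i = 1 / sigma_i^2, proportional to 1 / y_i^2.
weights = [1.0 / (v * v) for v in y]
total_w = sum(weights)

# Fraction of all regression weight carried by the smallest 10% of observations.
order = sorted(range(len(y)), key=lambda i: y[i])
low10 = order[: len(y) // 10]
low_share = sum(weights[i] for i in low10) / total_w

# A log transformation instead treats every order of magnitude evenly:
log_y = [math.log10(v) for v in y]  # spread uniformly over [0, 4) by construction
```

Under constant-CV weighting, the bottom decile of observations dominates the objective function almost entirely, which is exactly the behavior that degrades fit to the high values.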

3. Modelling the relationship between rainfall variability and yields

African Journals Online (AJOL)

yield models should be used for planning and forecasting the yield of millet and sorghum in the study area. Key words: modelling, rainfall, yields, millet, sorghum. INTRODUCTION. Meteorological variables, such as rainfall parameters, temperature, sunshine hours, relative humidity, and wind velocity and soil moisture are.

4. Variability in shell models of GRBs

Science.gov (United States)

Sumner, M. C.; Fenimore, E. E.

1997-01-01

Many cosmological models of gamma-ray bursts (GRBs) assume that a single relativistic shell carries kinetic energy away from the source and later converts it into gamma rays, perhaps by interactions with the interstellar medium or by internal shocks within the shell. Although such models are able to reproduce general trends in GRB time histories, it is difficult to reproduce the high degree of variability often seen in GRBs. The authors investigate methods of achieving this variability using a simplified external shock model. Since the model emphasizes geometric and statistical considerations, rather than the detailed physics of the shell, it is applicable to any theory that relies on relativistic shells. They find that the variability in GRBs gives strong clues to the efficiency with which the shell converts its kinetic energy into gamma rays.

5. Mapping and defining sources of variability in bioavailable strontium isotope ratios in the Eastern Mediterranean

Science.gov (United States)

Hartman, Gideon; Richards, Mike

2014-02-01

The relative contributions of bedrock and atmospheric sources to bioavailable strontium (Sr) pools in local soils were studied in Northern Israel and the Golan regions through intensive systematic sampling of modern plants and invertebrates, to produce a map of modern bioavailable strontium isotope ratios (87Sr/86Sr) for regional reconstructions of human and animal mobility patterns. The study investigates sources of variability in bioavailable 87Sr/86Sr ratios, in particular the intra- and inter-site range of variation in plant 87Sr/86Sr ratios, the range of 87Sr/86Sr ratios of plants growing on marine sedimentary versus volcanic geologies, the differences between ligneous and non-ligneous plants with varying growth and water utilization strategies, and the relative contribution of atmospheric Sr sources from different soil and vegetation types and climatic zones. Results indicate predictable variation in 87Sr/86Sr ratios. Inter- and intra-site differences in bioavailable 87Sr/86Sr ratios average 0.00025, while 87Sr/86Sr ratios measured regionally in plants and invertebrates range from 0.7090 in Pleistocene calcareous sandstone to 0.7074 in mid-Pleistocene volcanic pyroclast. The 87Sr/86Sr ratios measured in plants growing on volcanic bedrock show time-dependent increases in atmospheric deposition relative to bedrock weathering. The 87Sr/86Sr ratios measured in plants growing on rendzina soils depend on precipitation. The spacing between bedrock 87Sr/86Sr ratios and plants is highest in wet conditions and decreases in dry conditions. The 87Sr/86Sr ratios measured in plants growing on terra rossa soils are relatively constant (0.7085) regardless of precipitation. Ligneous plants are typically closer to bedrock 87Sr/86Sr ratios than non-ligneous plants. Since the bioavailable 87Sr/86Sr ratios currently measured in the region reflect a mix of both exogenous and endogenous sources, changes in the relative contribution of exogenous sources can cause variation.
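
The exogenous/endogenous mixing described above follows the standard concentration-weighted two-endmember mixing equation for Sr isotopes. The sketch below uses assumed endmember values for illustration; they are not measurements from this study.

```python
def mixed_sr_ratio(f_atm, c_atm, r_atm, c_rock, r_rock):
    """87Sr/86Sr of a bioavailable pool mixing atmospheric and bedrock Sr.

    f_atm: fraction of the soil Sr budget derived (by material mass) from
           the atmospheric endmember;
    c_*:   Sr concentrations of the two endmembers (same units);
    r_*:   87Sr/86Sr ratios of the two endmembers.
    Mixing is weighted by Sr concentration, as is standard for isotope ratios.
    """
    num = f_atm * c_atm * r_atm + (1.0 - f_atm) * c_rock * r_rock
    den = f_atm * c_atm + (1.0 - f_atm) * c_rock
    return num / den

# Illustrative endmembers (assumed values):
R_ATM = 0.7092   # sea-spray/dust-dominated atmospheric input
R_ROCK = 0.7040  # young volcanic bedrock
C = 100.0        # equal Sr concentrations, for simplicity

no_atm = mixed_sr_ratio(0.0, C, R_ATM, C, R_ROCK)
half_atm = mixed_sr_ratio(0.5, C, R_ATM, C, R_ROCK)
all_atm = mixed_sr_ratio(1.0, C, R_ATM, C, R_ROCK)
```

As the atmospheric fraction grows, the bioavailable ratio shifts monotonically from the bedrock value toward the atmospheric value, which is why plants on volcanic bedrock can record ratios well above the rock itself.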

6. Conceptual model for assessment of inhalation exposure: Defining modifying factors

NARCIS (Netherlands)

Tielemans, E.; Schneider, T.; Goede, H.; Tischer, M.; Warren, N.; Kromhout, H.; Tongeren, M. van; Hemmen, J. van; Cherrie, J.W.

2008-01-01

The present paper proposes a source-receptor model to schematically describe inhalation exposure to help understand the complex processes leading to inhalation of hazardous substances. The model considers a stepwise transfer of a contaminant from the source to the receptor. The conceptual model is

7. Defining Autism: Variability in State Education Agency Definitions of and Evaluations for Autism Spectrum Disorders

Directory of Open Access Journals (Sweden)

Malinda L. Pennington

2014-01-01

Full Text Available In light of the steady rise in the prevalence of students with autism, this study examined the definition of autism published by state education agencies (SEAs, as well as SEA-indicated evaluation procedures for determining student qualification for autism. We compared components of each SEA definition to aspects of autism from two authoritative sources: Diagnostic and Statistical Manual of Mental Disorders (DSM-IV-TR and Individuals with Disabilities Education Improvement Act (IDEA-2004. We also compared SEA-indicated evaluation procedures across SEAs to evaluation procedures noted in IDEA-2004. Results indicated that many more SEA definitions incorporate IDEA-2004 features than DSM-IV-TR features. However, despite similar foundations, SEA definitions of autism displayed considerable variability. Evaluation procedures were found to vary even more across SEAs. Moreover, within any particular SEA there often was little concordance between the definition (what autism is and evaluation procedures (how autism is recognized. Recommendations for state and federal policy changes are discussed.

8. Parameters and variables appearing in repository design models

International Nuclear Information System (INIS)

Curtis, R.H.; Wart, R.J.

1983-12-01

This report defines the parameters and variables appearing in repository design models and presents typical values and ranges of values of each. Areas covered by this report include thermal, geomechanical, and coupled stress and flow analyses in rock. Particular emphasis is given to conductivity, radiation, and convection parameters for thermal analysis and elastic constants, failure criteria, creep laws, and joint properties for geomechanical analysis. The data in this report were compiled to help guide the selection of values of parameters and variables to be used in code benchmarking. 102 references, 33 figures, 51 tables

9. Gait variability: methods, modeling and meaning

Directory of Open Access Journals (Sweden)

Hausdorff Jeffrey M

2005-07-01

Full Text Available Abstract The study of gait variability, the stride-to-stride fluctuations in walking, offers a complementary way of quantifying locomotion and its changes with aging and disease as well as a means of monitoring the effects of therapeutic interventions and rehabilitation. Previous work has suggested that measures of gait variability may be more closely related to falls, a serious consequence of many gait disorders, than are measures based on the mean values of other walking parameters. The current JNER series presents nine reports on the results of recent investigations into gait variability. One novel method for collecting unconstrained, ambulatory data is reviewed, and a primer on analysis methods is presented along with a heuristic approach to summarizing variability measures. In addition, the first studies of gait variability in animal models of neurodegenerative disease are described, as is a mathematical model of human walking that characterizes certain complex (multifractal) features of the motor control's pattern generator. Another investigation demonstrates that, whereas both healthy older controls and patients with a higher-level gait disorder walk more slowly in reduced lighting, only the latter's stride variability increases. Studies of the effects of dual tasks suggest that the regulation of the stride-to-stride fluctuations in stride width and stride time may be influenced by attention loading and may require cognitive input. Finally, a report of gait variability in over 500 subjects, probably the largest study of this kind, suggests how step width variability may relate to fall risk. Together, these studies provide new insights into the factors that regulate the stride-to-stride fluctuations in walking and pave the way for expanded research into the control of gait and the practical application of measures of gait variability in the clinical setting.
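
The basic measure used throughout such studies, the coefficient of variation of stride-to-stride intervals, can be sketched as follows; the two synthetic walkers and their numbers are assumptions for illustration only.

```python
import random
import statistics

random.seed(1)

def stride_time_cv(stride_times):
    """Coefficient of variation (%) of stride-to-stride intervals,
    a common summary measure of gait variability."""
    mean = statistics.mean(stride_times)
    sd = statistics.stdev(stride_times)
    return 100.0 * sd / mean

# Synthetic walkers: stride times in seconds.
healthy = [random.gauss(1.10, 0.02) for _ in range(200)]   # ~2% variability
impaired = [random.gauss(1.25, 0.08) for _ in range(200)]  # slower and noisier

cv_healthy = stride_time_cv(healthy)
cv_impaired = stride_time_cv(impaired)
```

The impaired walker shows both a slower mean stride and a several-fold larger CV, which is the kind of separation that makes variability measures informative beyond mean walking parameters.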

10. Nutritional models for space travel from chemically defined diets

Science.gov (United States)

Dufour, P. A.

1984-01-01

Human nutritional requirements are summarized, including recommended daily intake and maximum safe chronic intake of nutrients. The biomedical literature on various types of chemically defined diets (CDD's), which are liquid, formulated diets for enteral and total parenteral nutrition, is reviewed. The chemical forms of the nutrients in CDD's are detailed, and the compositions and sources of representative commercial CDD's are tabulated. Reported effects of CDD's in medical patients, healthy volunteers, and laboratory animals are discussed. The effects include gastrointestinal side effects, metabolic imbalances, nutrient deficiencies and excesses, and psychological problems. Dietary factors contributing to the side effects are examined. Certain human nutrient requirements have been specified more precisely as a result of long-term use of CDD's, and related studies are included. CDD's are the most restricted yet nutritionally complete diets available.

11. Gaussian mixture model of heart rate variability.

Directory of Open Access Journals (Sweden)

Tommaso Costa

Full Text Available Heart rate variability (HRV) is an important measure of sympathetic and parasympathetic functions of the autonomic nervous system and a key indicator of cardiovascular condition. This paper proposes a novel method to investigate HRV, namely by modelling it as a linear combination of Gaussians. Results show that three Gaussians are enough to describe the stationary statistics of heart rate variability and to provide a straightforward interpretation of the HRV power spectrum. Comparisons were also made with synthetic data generated from different physiologically based models, showing the plausibility of the Gaussian mixture parameters.
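
A minimal sketch of the underlying technique, expectation-maximisation for a one-dimensional three-component Gaussian mixture, is shown below on synthetic RR-interval data. All values are assumptions; the paper's actual fitting procedure may differ.

```python
import math
import random

random.seed(3)

# Synthetic RR-interval series (seconds): three overlapping regimes.
data = ([random.gauss(0.80, 0.03) for _ in range(300)]
        + [random.gauss(0.95, 0.04) for _ in range(300)]
        + [random.gauss(1.10, 0.05) for _ in range(300)])

def normal_pdf(x, mu, sigma):
    return math.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * math.sqrt(2 * math.pi))

def fit_gmm_1d(xs, k=3, iters=200):
    """EM for a 1-D Gaussian mixture; returns (weights, means, sigmas)."""
    xs = sorted(xs)
    n = len(xs)
    # Crude initialisation: component means at the quantile midpoints.
    means = [xs[(2 * j + 1) * n // (2 * k)] for j in range(k)]
    sigmas = [0.1] * k
    weights = [1.0 / k] * k
    for _ in range(iters):
        # E-step: responsibilities of each component for each point.
        resp = []
        for x in xs:
            parts = [w * normal_pdf(x, m, s)
                     for w, m, s in zip(weights, means, sigmas)]
            total = sum(parts)
            resp.append([p / total for p in parts])
        # M-step: re-estimate weights, means, and standard deviations.
        for j in range(k):
            nj = sum(r[j] for r in resp)
            weights[j] = nj / n
            means[j] = sum(r[j] * x for r, x in zip(resp, xs)) / nj
            var = sum(r[j] * (x - means[j]) ** 2 for r, x in zip(resp, xs)) / nj
            sigmas[j] = max(math.sqrt(var), 1e-6)
    return weights, means, sigmas

weights, means, sigmas = fit_gmm_1d(data, k=3)
```

On this synthetic series the three recovered means land close to the generating values (0.80, 0.95, 1.10 s), with mixture weights near one third each.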

12. Defining and implementing a model for pharmacy resident research projects

Directory of Open Access Journals (Sweden)

Dick TB

2015-09-01

Full Text Available Objective: To describe a standard approach to provide a support structure for pharmacy resident research that emphasizes self-identification of a residency research project. Methods: A subcommittee of the residency advisory committee was formed at our institution. The committee was initially comprised of 2 clinical pharmacy specialists, 1 drug information pharmacist, and 2 pharmacy administrators. The committee developed research guidelines that are distributed to residents prior to the residency start that detail the research process, important deadlines, and available resources. Instructions for institutional review board (IRB training and deadlines for various assignments and presentations throughout the residency year are clearly defined. Residents conceive their own research project and emphasis is placed on completing assignments early in the residency year. Results: In the 4 years this research process has been in place, 15 of 16 (94% residents successfully identified their own research question. All 15 residents submitted a complete research protocol to the IRB by the August deadline. Four residents have presented the results of their research at multi-disciplinary national professional meetings and 1 has published a manuscript. Feedback from outgoing residents has been positive overall and their perceptions of their research projects and the process are positive. Conclusion: Pharmacy residents selecting their own research projects for their residency year is a feasible alternative to assigning or providing lists of research projects from which to select a project.

13. Racing to define pharmaceutical R&D external innovation models.

Science.gov (United States)

Wang, Liangsu; Plump, Andrew; Ringel, Michael

2015-03-01

The pharmaceutical industry continues to face fundamental challenges because of issues with research and development (R&D) productivity and rising customer expectations. To lower R&D costs, move beyond me-too therapies, and create more transformative portfolios, pharmaceutical companies are actively capitalizing on external innovation through precompetitive collaboration with academia, cultivation of biotech start-ups, and proactive licensing and acquisitions. Here, we review the varying innovation strategies used by pharmaceutical companies, compare and contrast these models, and identify the trends in external innovation. We also discuss factors that influence these external innovation models and propose a preliminary set of metrics that could be used as leading indicators of success. Copyright © 2014 Elsevier Ltd. All rights reserved.

14. Confounding of three binary-variables counterfactual model

OpenAIRE

Liu, Jingwei; Hu, Shuang

2011-01-01

Confounding in a three-binary-variable counterfactual model is discussed in this paper. According to the relationship between the control variable and the covariate, we investigate three counterfactual models: the control variable is independent of the covariate, the control variable affects the covariate, and the covariate affects the control variable. Using the ancillary information based on conditional independence hypotheses, the sufficient conditions...

15. DISSECTING MAGNETAR VARIABILITY WITH BAYESIAN HIERARCHICAL MODELS

Energy Technology Data Exchange (ETDEWEB)

Huppenkothen, Daniela; Elenbaas, Chris; Watts, Anna L.; Horst, Alexander J. van der [Anton Pannekoek Institute for Astronomy, University of Amsterdam, Postbus 94249, 1090 GE Amsterdam (Netherlands); Brewer, Brendon J. [Department of Statistics, The University of Auckland, Private Bag 92019, Auckland 1142 (New Zealand); Hogg, David W. [Center for Data Science, New York University, 726 Broadway, 7th Floor, New York, NY 10003 (United States); Murray, Iain [School of Informatics, University of Edinburgh, Edinburgh EH8 9AB (United Kingdom); Frean, Marcus [School of Engineering and Computer Science, Victoria University of Wellington (New Zealand); Levin, Yuri [Monash Center for Astrophysics and School of Physics, Monash University, Clayton, Victoria 3800 (Australia); Kouveliotou, Chryssa, E-mail: daniela.huppenkothen@nyu.edu [Astrophysics Office, ZP 12, NASA/Marshall Space Flight Center, Huntsville, AL 35812 (United States)

2015-09-01

Neutron stars are a prime laboratory for testing physical processes under conditions of strong gravity, high density, and extreme magnetic fields. Among the zoo of neutron star phenomena, magnetars stand out for their bursting behavior, ranging from extremely bright, rare giant flares to numerous, less energetic recurrent bursts. The exact trigger and emission mechanisms for these bursts are not known; favored models involve either a crust fracture and subsequent energy release into the magnetosphere, or explosive reconnection of magnetic field lines. In the absence of a predictive model, understanding the physical processes responsible for magnetar burst variability is difficult. Here, we develop an empirical model that decomposes magnetar bursts into a superposition of small spike-like features with a simple functional form, where the number of model components is itself part of the inference problem. The cascades of spikes that we model might be formed by avalanches of reconnection, or crust rupture aftershocks. Using Markov Chain Monte Carlo sampling augmented with reversible jumps between models with different numbers of parameters, we characterize the posterior distributions of the model parameters and the number of components per burst. We relate these model parameters to physical quantities in the system, and show for the first time that the variability within a burst does not conform to predictions from ideas of self-organized criticality. We also examine how well the properties of the spikes fit the predictions of simplified cascade models for the different trigger mechanisms.
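
The idea of a burst as a superposition of spike-like components with a simple functional form can be sketched as below. The exponential-rise/exponential-decay profile and every parameter value are illustrative assumptions, not the authors' fitted model.

```python
import math

def spike(t, t0, amplitude, rise, decay):
    """One spike: exponential rise up to the peak time t0, exponential decay after."""
    if t <= t0:
        return amplitude * math.exp((t - t0) / rise)
    return amplitude * math.exp(-(t - t0) / decay)

def burst(t, spikes, background=0.0):
    """A burst is a background level plus a superposition of spikes."""
    return background + sum(spike(t, *p) for p in spikes)

# Three hypothetical spikes: (t0 [s], amplitude [counts/s], rise [s], decay [s]).
params = [(0.02, 500.0, 0.002, 0.010),
          (0.05, 900.0, 0.003, 0.020),
          (0.11, 300.0, 0.001, 0.008)]

# Evaluate a 0.2 s light curve on a 1 ms grid.
light_curve = [burst(i * 1e-3, params, background=20.0) for i in range(200)]
```

In the actual inference problem the number of spikes is unknown, which is what the reversible-jump MCMC described above addresses; this sketch only shows the forward model.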

16. DEFINING AND CONSTRUCTING THE TEACHING MODEL OF ENTREPRENEUR EDUCATION BASED ON ENTREPRENEURIAL INTENTION MODEL

Directory of Open Access Journals (Sweden)

2005-01-01

Full Text Available The concept of entrepreneurship has been widely debated, in particular whether one needs formal entrepreneurial education to become an entrepreneur. Most formal entrepreneurship education shares the same flaw: a lack of teaching soft skills and building the necessary entrepreneurial characteristics. Intention-based models of entrepreneurship education try to fill this gap by focusing education on the human intention of becoming an entrepreneur, defining four models of entrepreneurship education. An empirical study is conducted as a simple application of defining and understanding the model; its results can be used to give some insight into constructing an appropriate model for entrepreneurship education in the future.

18. Natural climate variability in a coupled model

International Nuclear Information System (INIS)

Zebiak, S.E.; Cane, M.A.

1990-01-01

Multi-century simulations with a simplified coupled ocean-atmosphere model are described. These simulations reveal an impressive range of variability on decadal and longer time scales, in addition to the dominant interannual El Niño/Southern Oscillation signal that the model originally was designed to simulate. Based on a very large sample of century-long simulations, it is nonetheless possible to identify distinct model parameter sensitivities that are described here in terms of selected indices. Preliminary experiments motivated by general circulation model results for increasing greenhouse gases suggest a definite sensitivity to model global warming. While these results are not definitive, they strongly suggest that coupled air-sea dynamics figure prominently in global change and must be included in models for reliable predictions.

19. Pre-quantum mechanics. Introduction to models with hidden variables

International Nuclear Information System (INIS)

Grea, J.

1976-01-01

Within the context of formalisms of the hidden-variable type, the author considers the models used to describe mechanical systems before the introduction of the quantum model. An account is given of the characteristics of the theoretical models and their relationships with experimental methodology. The models of analytical, pre-ergodic, stochastic and thermodynamic mechanics are studied in succession. At each stage the physical hypothesis is enunciated by a postulate corresponding to the type of description of the reality of the model. Starting from this postulate, the physical propositions which are meaningful for the model under consideration are defined and their logical structure is indicated. It is then found that on passing from one level of description to another, one can obtain successively Boolean lattices embedded in lattices of continuous geometric type, which are themselves embedded in Boolean lattices. It is therefore possible to envisage a more detailed description than that given by the quantum lattice and to construct it by analogy. (Auth.)

20. Model for defining the level of implementation of the management functions in small enterprises

Directory of Open Access Journals (Sweden)

Dragan Mišetić

2001-01-01

Full Text Available Small enterprises, based on private ownership and entrepreneurial capability, represent, for the majority of the scientific and professional public, the prime movers of economic growth, both in developed market economies and in the economies of countries in transition. At the same time, various studies show that the main reason for the bankruptcy of many small enterprises (more than 90%) can be found in weak management, i.e. unfamiliarity with the management functions (planning, organization, human resources management, leading and control) and with the need to implement those functions in practice. Although it is not easy to define the ingredients of the recipe for success or to define precisely the importance of different elements, and regardless of the fact that many authors think that the management theory for large enterprises is inapplicable to small ones, we all agree that the owner/manager and his implementation of management theory has a decisive influence on small enterprises in modern economic circumstances. Therefore, the author of this work presents a model that defines the level of implementation of the management functions in small enterprises, as well as three systems/levels (danger, risk, progress) in which small enterprises may find themselves. After the level of implementation of the management functions is identified, it is possible to undertake corrective actions to remove the identified weaknesses. In choosing the variables of the model, the author took into consideration the specific features of a small enterprise, as well as the specific features of its owner/manager.

1. Variable impact on mortality of AIDS-defining events diagnosed during combination antiretroviral therapy: not all AIDS-defining conditions are created equal

NARCIS (Netherlands)

2009-01-01

BACKGROUND: The extent to which mortality differs following individual acquired immunodeficiency syndrome (AIDS)-defining events (ADEs) has not been assessed among patients initiating combination antiretroviral therapy. METHODS: We analyzed data from 31,620 patients with no prior ADEs who started

2. Multimodal Similarity Gaussian Process Latent Variable Model.

Science.gov (United States)

Song, Guoli; Wang, Shuhui; Huang, Qingming; Tian, Qi

2017-09-01

Data from real applications involve multiple modalities representing content with the same semantics from complementary aspects. However, relations among heterogeneous modalities are simply treated as observation-to-fit by existing work, and the parameterized modality-specific mapping functions lack flexibility in directly adapting to the content divergence and semantic complicacy in multimodal data. In this paper, we build our work based on the Gaussian process latent variable model (GPLVM) to learn the non-parametric mapping functions and transform heterogeneous modalities into a shared latent space. We propose the multimodal Similarity Gaussian Process latent variable model (m-SimGP), which learns the mapping functions between the intra-modal similarities and latent representation. We further propose multimodal distance-preserved similarity GPLVM (m-DSimGP) to preserve the intra-modal global similarity structure, and multimodal regularized similarity GPLVM (m-RSimGP) by encouraging similar/dissimilar points to be similar/dissimilar in the latent space. We propose m-DRSimGP, which combines the distance preservation in m-DSimGP and semantic preservation in m-RSimGP to learn the latent representation. The overall objective functions of the four models are solved by simple and scalable gradient descent techniques. They can be applied to various tasks to discover the nonlinear correlations and to obtain the comparable low-dimensional representation for heterogeneous modalities. On five widely used real-world data sets, our approaches outperform existing models on cross-modal content retrieval and multimodal classification.

3. Change in intraindividual variability over time as a key metric for defining performance-based cognitive fatigability.

Science.gov (United States)

Wang, Chao; Ding, Mingzhou; Kluger, Benzi M

2014-03-01

Cognitive fatigability is conventionally quantified as the increase over time in either mean reaction time (RT) or error rate from two or more time periods during sustained performance of a prolonged cognitive task. There is evidence indicating that these mean performance measures may not sufficiently reflect the response characteristics of cognitive fatigue. We hypothesized that changes in intraindividual variability over time would be a more sensitive and ecologically meaningful metric for investigations of fatigability of cognitive performance. To test this hypothesis, fifteen young adults were recruited. Trait fatigue perceptions in various domains were assessed with the Multidimensional Fatigue Index (MFI). Behavioral data were then recorded during performance of a three-hour continuous cued Stroop task. Results showed that intraindividual variability, as quantified by the coefficient of variation of RT, increased linearly over the course of three hours and demonstrated a significantly greater effect size than mean RT or accuracy. Change in intraindividual RT variability over time was significantly correlated with relevant subscores of the MFI including reduced activity, reduced motivation and mental fatigue. While change in mean RT over time was also correlated with reduced motivation and mental fatigue, these correlations were significantly smaller than those associated with intraindividual RT variability. RT distribution analysis using an ex-Gaussian model further revealed that change in intraindividual variability over time reflects an increase in the exponential component of variance and may reflect attentional lapses or other breakdowns in cognitive control. These results suggest that intraindividual variability and its change over time provide important metrics for measuring cognitive fatigability and may prove useful for inferring the underlying neuronal mechanisms of both perceptions of fatigue and objective changes in performance. Copyright © 2014
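
The core metric, change in intraindividual variability over time, can be sketched as a windowed coefficient of variation with a least-squares slope. All numbers below are synthetic assumptions, not the study's data.

```python
import random
import statistics

random.seed(11)

def windowed_cv(rts, window):
    """Coefficient of variation of RT in consecutive windows of trials."""
    cvs = []
    for start in range(0, len(rts) - window + 1, window):
        block = rts[start:start + window]
        cvs.append(statistics.stdev(block) / statistics.mean(block))
    return cvs

def slope(ys):
    """Least-squares slope of ys against the window index 0..n-1."""
    n = len(ys)
    x_mean = (n - 1) / 2
    y_mean = statistics.mean(ys)
    num = sum((x - x_mean) * (y - y_mean) for x, y in enumerate(ys))
    den = sum((x - x_mean) ** 2 for x in range(n))
    return num / den

# Synthetic 3-hour session: mean RT stays ~600 ms while the RT spread
# grows with time on task, as the study reports.
rts = []
n_trials = 3000
for i in range(n_trials):
    sd = 40 + 60 * i / n_trials  # spread grows from 40 to 100 ms
    rts.append(max(200.0, random.gauss(600.0, sd)))

cvs = windowed_cv(rts, window=300)
fatigability = slope(cvs)  # positive slope = variability increases over time
```

A positive slope of the windowed CV flags fatigability even when mean RT is flat, which is the paper's argument for preferring variability-based measures over mean performance.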

4. Modeling of a 3DTV service in the software-defined networking architecture

Science.gov (United States)

Wilczewski, Grzegorz

2014-11-01

In this article a newly developed concept for modeling a multimedia service offering stereoscopic motion imagery is presented. The proposed model is based on the Software-Defined Networking (SDN) architecture. A definition of a 3D television service built on the SDN concept is given, exposing the basic characteristics of a 3DTV service in a modern network organization layout. Furthermore, exemplary functionalities of the proposed 3DTV model are depicted. It is indicated that modeling a 3DTV service in the Software-Defined Networking architecture leads to multiple improvements, especially in the flexibility of a service supporting heterogeneous end-user devices.

5. Defining an Abrasion Index for Lunar Surface Systems as a Function of Dust Interaction Modes and Variable Concentration Zones

Science.gov (United States)

Kobrick, Ryan L.; Klaus, David M.; Street, Kenneth W., Jr.

2010-01-01

Unexpected issues were encountered during the Apollo era of lunar exploration due to detrimental abrasion of materials upon exposure to the fine-grained, irregular shaped dust on the surface of the Moon. For critical design features involving contact with the lunar surface and for astronaut safety concerns, operational concepts and dust tolerance must be considered in the early phases of mission planning. To systematically define material selection criteria, dust interaction can be characterized by two-body or three-body abrasion testing, and subcategorically by physical interactions of compression, rolling, sliding and bending representing specific applications within the system. Two-body abrasion occurs when a single particle or asperity slides across a given surface removing or displacing material. Three-body abrasion occurs when multiple particles interact with a solid surface, or in between two surfaces, allowing the abrasives to freely rotate and interact with the material(s), leading to removal or displacement of mass. Different modes of interaction are described in this paper along with corresponding types of tests that can be utilized to evaluate each configuration. In addition to differential modes of abrasion, variable concentrations of dust in different zones can also be considered for a given system design and operational protocol. These zones include: (1) outside the habitat where extensive dust exposure occurs, (2) in a transitional zone such as an airlock or suitport, and (3) inside the habitat or spacesuit with a low particle count. These zones can be used to help define dust interaction frequencies, and corresponding risks to the systems and/or crew can be addressed by appropriate mitigation strategies. An abrasion index is introduced that includes the level of risk, R, the hardness of the mineralogy, H, the severity of the abrasion mode, S, and the frequency of particle interactions, F.

6. Use of multivariate factor analysis to define new indicator variables for milk composition and coagulation properties in Brown Swiss cows.

Science.gov (United States)

Macciotta, N P P; Cecchinato, A; Mele, M; Bittante, G

2012-12-01

The aim of this study was to elucidate the structure of relationships between milk yield, composition, and coagulation properties of Brown Swiss cattle. Multivariate factor analysis was used to derive new synthetic variables that can be used for selection purposes. For this reason, genetic parameters of these new variables were estimated. Individual records on milk yield, fat and protein percentages, casein content, lactose percentage, somatic cell count, titratable acidity, and pH were taken on 1,200 Italian Brown Swiss cows located in 38 herds. Factor analysis was able to extract 4 latent variables with an associated communality equal to 70% of the total original variance. The 4 latent factors were interpreted as indicators of milk composition, coagulation, acidity, and mammary gland health, respectively. Factor scores calculated for each animal exhibited coherent patterns along the lactation and across different parities. Estimation of genetic parameters of factor scores carried out with a multiple-trait Bayesian hierarchical model showed moderate to low heritabilities (ranging from 0.10 to 0.23) and genetic correlations (from -0.15 to 0.46). Results of the present study support the hypothesis of a simpler structure that controls, at least in part, the covariance of milk composition and coagulation properties. Moreover, extracted variables may be useful for both breeding and management purposes, being able to represent, with a single value for each animal, complex traits such as milk coagulation properties or health status of the mammary gland. Copyright © 2012 American Dairy Science Association. Published by Elsevier Inc. All rights reserved.
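
The factor-extraction step described above can be illustrated with a small sketch. The snippet below uses synthetic data and a plain principal-component extraction from the correlation matrix; the trait names and loadings are invented, and the study's actual multivariate factor analysis and Bayesian hierarchical model are not reproduced here:

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic stand-in for the milk data: 8 traits driven by 2 latent factors
# (a "composition" factor and an "acidity" factor); loadings are invented.
n = 1200
factors = rng.normal(size=(n, 2))
loadings_true = np.array([
    [0.80, 0.00], [0.70, 0.10], [0.75, 0.00], [0.60, 0.20],  # fat, protein, casein, lactose
    [0.10, 0.80], [0.00, 0.70], [0.20, 0.60], [0.00, 0.50],  # pH, acidity, SCC, yield
])
X = factors @ loadings_true.T + rng.normal(scale=0.5, size=(n, 8))  # + unique variance

# Principal-component extraction from the correlation matrix
R = np.corrcoef(X, rowvar=False)
eigvals, eigvecs = np.linalg.eigh(R)
order = np.argsort(eigvals)[::-1]
eigvals, eigvecs = eigvals[order], eigvecs[:, order]

k = int((eigvals > 1.0).sum())             # Kaiser criterion: keep eigenvalues > 1
L = eigvecs[:, :k] * np.sqrt(eigvals[:k])  # factor loadings
communality = (L ** 2).sum(axis=1)         # per-trait variance explained by factors
print(f"{k} factors retained, explaining "
      f"{eigvals[:k].sum() / len(eigvals):.0%} of total variance")
print("communalities:", np.round(communality, 2))
```

The retained factors play the role of the "new indicator variables": each animal's factor scores summarize several correlated traits in a single value per factor.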

7. Modeling variability in porescale multiphase flow experiments

Energy Technology Data Exchange (ETDEWEB)

Ling, Bowen; Bao, Jie; Oostrom, Mart; Battiato, Ilenia; Tartakovsky, Alexandre M.

2017-07-01

Microfluidic devices and porescale numerical models are commonly used to study multiphase flow in biological, geological, and engineered porous materials. In this work, we perform a set of drainage and imbibition experiments in six identical microfluidic cells to study the reproducibility of multiphase flow experiments. We observe significant variations in the experimental results, which are smaller during the drainage stage and larger during the imbibition stage. We demonstrate that these variations are due to sub-porescale geometry differences in microcells (because of manufacturing defects) and variations in the boundary condition (i.e., fluctuations in the injection rate inherent to syringe pumps). Computational simulations are conducted using the commercial software STAR-CCM+, with both constant and randomly varying injection rates. Stochastic simulations are able to capture the variability in the experiments associated with the varying pump injection rate.
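
The randomly varying boundary condition can be sketched as follows. This is a minimal illustration, not the authors' STAR-CCM+ setup: the AR(1) noise model for pump fluctuations and all numeric values are our assumptions.

```python
import numpy as np

rng = np.random.default_rng(2)

def injection_rate(q_nom, n_steps, dt, sigma_rel=0.05, tau=2.0):
    """AR(1) fluctuations around the nominal rate q_nom.

    sigma_rel is the relative fluctuation amplitude and tau the correlation
    time [s]; this is an illustrative stand-in for syringe-pump rate noise,
    not a validated pump model."""
    phi = np.exp(-dt / tau)
    eps = rng.normal(scale=sigma_rel * q_nom * np.sqrt(1 - phi ** 2), size=n_steps)
    noise = np.zeros(n_steps)
    for i in range(1, n_steps):
        noise[i] = phi * noise[i - 1] + eps[i]
    return q_nom + noise

q_nom, dt, n = 1.0, 0.1, 600          # 1 uL/min nominal rate, 60 s of injection
replicates = [injection_rate(q_nom, n, dt) for _ in range(6)]   # six "microcells"
volumes = [q.sum() * dt / 60.0 for q in replicates]             # injected volume (uL)
print("volume per replicate:", np.round(volumes, 3))
print("relative spread:", round(float(np.std(volumes) / np.mean(volumes)), 4))
```

Feeding such a fluctuating rate signal into a porescale solver as the inlet boundary condition is one way to obtain an ensemble of stochastic simulations whose spread can be compared against replicate experiments.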

9. Variable selection for modelling effects of eutrophication on stream and river ecosystems

NARCIS (Netherlands)

Nijboer, R.C.; Verdonschot, P.F.M.

2004-01-01

Models are needed for forecasting the effects of eutrophication on stream and river ecosystems. Most of the current models do not include differences in local stream characteristics and effects on the biota. To define the most important variables that should be used in a stream eutrophication model,

10. A stepwise approach for defining the applicability domain of SAR and QSAR models

DEFF Research Database (Denmark)

Dimitrov, Sabcho; Dimitrova, Gergana; Pavlov, Todor

2005-01-01

Parametric requirements are imposed in the first stage, specifying in the domain only those chemicals that fall in the range of variation of the physicochemical properties of the chemicals in the training set. The second stage defines the structural similarity between chemicals that are correctly predicted by the model. The structural neighborhood of atom-centered fragments is used to determine this similarity. The third stage in defining the domain is based on a mechanistic understanding of the modeled phenomenon. Here, the model domain combines the reliability of specific reactive groups hypothesized to cause …, if metabolic activation of chemicals is a part of the (Q)SAR model. Some of the stages of the proposed approach for defining the model domain can be eliminated depending on the availability and quality of the experimental data used to derive the model, the specificity of (Q)SARs, and the goals …

11. On the Use of Variability Operations in the V-Modell XT Software Process Line

DEFF Research Database (Denmark)

Kuhrmann, Marco; Méndez Fernández, Daniel; Ternité, Thomas

2016-01-01

Software process lines provide a systematic approach to develop and manage software processes. A software process line defines a reference process containing general process assets, whereas a well-defined customization approach allows process engineers to create new process variants, e.g., by extending or modifying process assets. Variability operations are an instrument to realize flexibility by explicitly declaring required modifications, which are applied to create a procedurally generated company-specific process. However, little is known about which variability operations are suitable in practice … as an improvement proposal for other process models. Our findings show that 69 variability operation types are defined across several metamodel versions, of which, however, 25 remain unused. The found variability operations allow for systematically modifying the content of process model elements and the process …

12. Modelling of Uncertainty and Bi-Variable Maps

Science.gov (United States)

Nánásiová, Ol'ga; Pykacz, Jarosław

2016-05-01

The paper gives an overview of and compares various bi-variable maps from orthomodular lattices into the unit interval. It focuses mainly on such bi-variable maps that may be used for constructing joint probability distributions for random variables which are not defined on the same Boolean algebra.

13. A model of cloud application assignments in software-defined storages

Science.gov (United States)

Bolodurina, Irina P.; Parfenov, Denis I.; Polezhaev, Petr N.; Shukhman, Alexander E.

2017-01-01

The aim of this study is to analyze the structure and interaction mechanisms of typical cloud applications and to suggest approaches to optimizing their placement in storage systems. In this paper, we describe a generalized model of cloud applications comprising three basic layers: a model of the application, a model of the service, and a model of the resource. The distinctive feature of the suggested model is that it analyzes cloud resources both from the user's point of view and from the point of view of the software-defined infrastructure of the virtual data center (DC). The innovative character of this model lies in describing the application data placement together with the state of the virtual environment, taking the network topology into account. A model of software-defined storage has been developed as a submodel within the resource model. This model allows implementing an algorithm for controlling cloud application assignments in software-defined storages. Experimental research showed that this algorithm decreases cloud application response time and increases the performance of user request processing. The use of software-defined data storage also allows a decrease in the number of physical storage devices, which demonstrates the efficiency of our algorithm.

14. Conservation priorities for Prunus africana defined with the aid of spatial analysis of genetic data and climatic variables.

Science.gov (United States)

Vinceti, Barbara; Loo, Judy; Gaisberger, Hannes; van Zonneveld, Maarten J; Schueler, Silvio; Konrad, Heino; Kadu, Caroline A C; Geburek, Thomas

2013-01-01

Conservation priorities for Prunus africana, a tree species found across Afromontane regions, which is of great commercial interest internationally and of local value for rural communities, were defined with the aid of spatial analyses applied to a set of georeferenced molecular marker data (chloroplast and nuclear microsatellites) from 32 populations in 9 African countries. Two approaches for the selection of priority populations for conservation were used, differing in the way they optimize representation of intra-specific diversity of P. africana across a minimum number of populations. The first method (S1) was aimed at maximizing genetic diversity of the conservation units and their distinctiveness with regard to climatic conditions, the second method (S2) at optimizing representativeness of the genetic diversity found throughout the species' range. Populations in East African countries (especially Kenya and Tanzania) were found to be of great conservation value, as suggested by previous findings. These populations are complemented by those in Madagascar and Cameroon. The combination of the two methods for prioritization led to the identification of a set of 6 priority populations. The potential distribution of P. africana was then modeled based on a dataset of 1,500 georeferenced observations. This enabled an assessment of whether the priority populations identified are exposed to threats from agricultural expansion and climate change, and whether they are located within the boundaries of protected areas. The range of the species has been affected by past climate change and the modeled distribution of P. africana indicates that the species is likely to be negatively affected in future, with an expected decrease in distribution by 2050. Based on these insights, further research at the regional and national scale is recommended, in order to strengthen P. africana conservation efforts.

16. Structural Modeling of Institutional Variables and Undergraduates ...

African Journals Online (AJOL)

Peer influence and facilities for research were the major exogenous variables while students' perception of their supervisors' commitment to research supervision was a critical variable that influences their attitude towards research projects. We suggest that research supervisors be firm and discreet in the supervision and ...

17. Effect of Flux Adjustments on Temperature Variability in Climate Models

International Nuclear Information System (INIS)

Duffy, P.; Bell, J.; Covey, C.; Sloan, L.

1999-01-01

It has been suggested that "flux adjustments" in climate models suppress simulated temperature variability. If true, this might invalidate the conclusion that at least some of the observed temperature increases since 1860 are anthropogenic, since this conclusion is based in part on estimates of natural temperature variability derived from flux-adjusted models. We assess variability of surface air temperatures in 17 simulations of internal temperature variability submitted to the Coupled Model Intercomparison Project. By comparing variability in flux-adjusted vs. non-flux-adjusted simulations, we find no evidence that flux adjustments suppress temperature variability in climate models; other, largely unknown, factors are much more important in determining simulated temperature variability. Therefore, the conclusion that at least some of the observed temperature increases are anthropogenic cannot be questioned on the grounds that it is based in part on results of flux-adjusted models. Also, reducing or eliminating flux adjustments would probably do little to improve simulations of temperature variability.

18. Incorporating additional tree and environmental variables in a lodgepole pine stem profile model

Science.gov (United States)

John C. Byrne

1993-01-01

A new variable-form segmented stem profile model is developed for lodgepole pine (Pinus contorta) trees from the northern Rocky Mountains of the United States. I improved estimates of stem diameter by predicting two of the model coefficients with linear equations using a measure of tree form, defined as a ratio of dbh and total height. Additional improvements were...

19. Variability in prostate and seminal vesicle delineations defined on magnetic resonance images, a multi-observer, -center and -sequence study

DEFF Research Database (Denmark)

Nyholm, Tufve; Jonsson, Joakim; Söderström, Karin

2013-01-01

…i.e., the MR sequence used to acquire the data. RESULTS: The intra-physician variability in different directions was between 1.3 and 1.9 mm and between 3 and 4 mm for the prostate and seminal vesicles, respectively (1 SD). The inter-physician variability in different directions was between 0.7 and 1.7 mm…

20. Modeling sea-surface temperature and its variability

Science.gov (United States)

Sarachik, E. S.

1985-01-01

A brief review is presented of the temporal scales of sea surface temperature variability. Progress in modeling sea surface temperature and remaining obstacles to understanding its variability are discussed.

1. Evaluation of Brace Treatment for Infant Hip Dislocation in a Prospective Cohort: Defining the Success Rate and Variables Associated with Failure.

Science.gov (United States)

Upasani, Vidyadhar V; Bomar, James D; Matheney, Travis H; Sankar, Wudbhav N; Mulpuri, Kishore; Price, Charles T; Moseley, Colin F; Kelley, Simon P; Narayanan, Unni; Clarke, Nicholas M P; Wedge, John H; Castañeda, Pablo; Kasser, James R; Foster, Bruce K; Herrera-Soto, Jose A; Cundy, Peter J; Williams, Nicole; Mubarak, Scott J

2016-07-20

The use of a brace has been shown to be an effective treatment for hip dislocation in infants; however, previous studies of such treatment have been single-center or retrospective. The purpose of the current study was to evaluate the success rate for brace use in the treatment of infant hip dislocation in an international, multicenter, prospective cohort, and to identify the variables associated with brace failure. All dislocations were verified with use of ultrasound or radiography prior to the initiation of treatment, and patients were followed prospectively for a minimum of 18 months. Successful treatment was defined as the use of a brace that resulted in a clinically and radiographically reduced hip, without surgical intervention. The Mann-Whitney test, chi-square analysis, and Fisher exact test were used to identify risk factors for brace failure. A multivariate logistic regression model was used to determine the probability of brace failure according to the risk factors identified. Brace treatment was successful in 162 (79%) of the 204 dislocated hips in this series. Six variables were found to be significant risk factors for failure: developing femoral nerve palsy during brace treatment (p = 0.001), treatment with a static brace (p …) … failure, whereas hips with 4 or 5 risk factors had a 100% probability of failure. These data provide valuable information for patient families and their providers regarding the important variables that influence successful brace treatment for dislocated hips in infants. Prognostic Level I. See Instructions for Authors for a complete description of levels of evidence. Copyright © 2016 by The Journal of Bone and Joint Surgery, Incorporated.

2. User-defined Material Model for Thermo-mechanical Progressive Failure Analysis

Science.gov (United States)

Knight, Norman F., Jr.

2008-01-01

Previously, a user-defined material model for orthotropic bimodulus materials was developed for linear and nonlinear stress analysis of composite structures using either shell or solid finite elements within a nonlinear finite element analysis tool. Extensions of this user-defined material model to thermo-mechanical progressive failure analysis are described, and the required input data are documented. The extensions include providing for temperature-dependent material properties, archival of the elastic strains, and a thermal strain calculation for materials exhibiting a stress-free temperature.

3. Generalized Network Psychometrics : Combining Network and Latent Variable Models

NARCIS (Netherlands)

Epskamp, S.; Rhemtulla, M.; Borsboom, D.

2017-01-01

We introduce the network model as a formal psychometric model, conceptualizing the covariance between psychometric indicators as resulting from pairwise interactions between observable variables in a network structure. This contrasts with standard psychometric models, in which the covariance between

4. The SRCR/SID region of DMBT1 defines a complex multi-allele system representing the major basis for its variability in cancer

DEFF Research Database (Denmark)

Mollenhauer, Jan; Müller, Hanna; Kollender, Gaby

2002-01-01

…seven distinct DMBT1 alleles based on variable numbers of tandem repeats (VNTRs). At least 11 tumors exclusively harbored these VNTRs. The data suggest that the SRCR/SID region defines a complex multi-allele system that has escaped previous analyses and that represents the major basis…

5. Predictor variable resolution governs modeled soil types

Science.gov (United States)

Soil mapping identifies different soil types by compressing a unique suite of spatial patterns and processes across multiple spatial scales. It can be quite difficult to quantify spatial patterns of soil properties with remotely sensed predictor variables. More specifically, matching the right scale...

6. Model Driven Development of Simulation Models : Defining and Transforming Conceptual Models into Simulation Models by Using Metamodels and Model Transformations

NARCIS (Netherlands)

Küçükkeçeci Çetinkaya, D.

2013-01-01

Modeling and simulation (M&S) is an effective method for analyzing and designing systems and it is of interest to scientists and engineers from all disciplines. This thesis proposes the application of a model driven software development approach throughout the whole set of M&S activities and it

7. Pharmacokinetic models for propofol-defining and illuminating the devil in the detail

NARCIS (Netherlands)

Absalom, A. R.; Mani, V.; De Smet, T.; Struys, M. M. R. F.

The recently introduced open-target-controlled infusion (TCI) systems can be programmed with any pharmacokinetic model, and allow either plasma- or effect-site targeting. With effect-site targeting the goal is to achieve a user-defined target effect-site concentration as rapidly as possible, by

8. Practical methods to define scattering coefficients in a room acoustics computer model

DEFF Research Database (Denmark)

Zeng, Xiangyang; Christensen, Claus Lynge; Rindel, Jens Holger

2006-01-01

…of obtaining the data becomes quite time-consuming, thus increasing the cost of design. In this paper, practical methods to define scattering coefficients are presented, based on an approach of modeling surface scattering, scattering caused by the limited size of a surface, and edge diffraction…

9. Variable Fidelity Aeroelastic Toolkit - Structural Model, Phase I

Data.gov (United States)

National Aeronautics and Space Administration — The proposed innovation is a methodology to incorporate variable fidelity structural models into steady and unsteady aeroelastic and aeroservoelastic analyses in...

10. Multi-wheat-model ensemble responses to interannual climatic variability

DEFF Research Database (Denmark)

Ruane, A C; Hudson, N I; Asseng, S

2016-01-01

We compare 27 wheat models' yield responses to interannual climate variability, analyzed at locations in Argentina, Australia, India, and The Netherlands as part of the Agricultural Model Intercomparison and Improvement Project (AgMIP) Wheat Pilot. Each model simulated 1981–2010 grain yield, and we evaluate results against the interannual variability of growing season temperature, precipitation, and solar radiation. The amount of information used for calibration has only a minor effect on most models' climate response, and even small multi-model ensembles prove beneficial. Wheat model clusters reveal common characteristics of yield response to climate; however, models rarely share the same cluster at all four sites, indicating substantial independence. Only a weak relationship was found between the models' sensitivities to interannual temperature variability and their response to long-term warming, suggesting that additional processes differentiate climate change impacts from observed climate variability analogs and motivating continuing analysis and model development efforts.

11. 4. Valorizations of Theoretical Models of Giftedness and Talent in Defining of Artistic Talent

OpenAIRE

Anghel Ionica Ona

2016-01-01

Artistic talent has been defined in various contexts and carries a variety of meanings, more or less operational. From the perspective of pedagogical intervention, it is imperative to understand artistic talent through the theoretical models of giftedness and talent. The aim of this study is therefore to review the most popular theoretical models of giftedness and talent, identifying the place of artistic talent and the new meanings that artistic talent has in each one…

12. Evaluating measurement of dynamic constructs: defining a measurement model of derivatives.

Science.gov (United States)

Estabrook, Ryne

2015-03-01

While measurement evaluation has been embraced as an important step in psychological research, evaluating measurement structures with longitudinal data is fraught with limitations. This article defines and tests a measurement model of derivatives (MMOD), which is designed to assess the measurement structure of latent constructs both for analyses of between-person differences and for the analysis of change. Simulation results indicate that MMOD outperforms existing models for multivariate analysis and provides equivalent fit to data generation models. Additional simulations show MMOD capable of detecting differences in between-person and within-person factor structures. Model features, applications, and future directions are discussed. (c) 2015 APA, all rights reserved.

13. Modelling the Kampungkota: A quantitative approach in defining Indonesian informal settlements

Science.gov (United States)

Anindito, D. B.; Maula, F. K.; Akbar, R.

2018-02-01

Bandung City is home to 2.5 million inhabitants, some of whom live in slums and squatter settlements. However, these terms do not adequately describe the Indonesian housing type known as kampungkota. Several studies suggest various variables for characterizing kampungkota qualitatively. This study seeks to define kampungkota in a quantitative manner, using the characteristics of slums and squatter settlements. The samples for this study are 151 villages (kelurahan) in Bandung City. Ordinary Least Squares, Geographically Weighted Regression, and Spatial Cluster and Outlier Analysis are employed. It is suggested that kampungkota may have distinct variables depending on its location. As a kampungkota may be smaller than the administrative area of a kelurahan, it can also develop beyond the jurisdiction of the kelurahan, as indicated by the clustering pattern of kampungkota.

14. ABOUT PSYCHOLOGICAL VARIABLES IN APPLICATION SCORING MODELS

Directory of Open Access Journals (Sweden)

Pablo Rogers

2015-01-01

Full Text Available The purpose of this study is to investigate the contribution of psychological variables and scales suggested by Economic Psychology in predicting individuals' default. Therefore, a sample of 555 individuals completed a self-completion questionnaire, which was composed of psychological variables and scales. By adopting the methodology of logistic regression, the following psychological and behavioral characteristics were found to be associated with the group of individuals in default: (a) negative dimensions related to money (suffering, inequality and conflict); (b) high scores on the self-efficacy scale, probably indicating a greater degree of optimism and over-confidence; (c) buyers classified as compulsive; (d) individuals who consider it necessary to give gifts to children and friends on special dates, even though many people consider this a luxury; (e) problems of self-control identified in individuals who drink an average of more than four glasses of alcoholic beverage a day.

15. Multi-Wheat-Model Ensemble Responses to Interannual Climate Variability

Science.gov (United States)

Ruane, Alex C.; Hudson, Nicholas I.; Asseng, Senthold; Camarrano, Davide; Ewert, Frank; Martre, Pierre; Boote, Kenneth J.; Thorburn, Peter J.; Aggarwal, Pramod K.; Angulo, Carlos

2016-01-01

We compare 27 wheat models' yield responses to interannual climate variability, analyzed at locations in Argentina, Australia, India, and The Netherlands as part of the Agricultural Model Intercomparison and Improvement Project (AgMIP) Wheat Pilot. Each model simulated 1981–2010 grain yield, and we evaluate results against the interannual variability of growing season temperature, precipitation, and solar radiation. The amount of information used for calibration has only a minor effect on most models' climate response, and even small multi-model ensembles prove beneficial. Wheat model clusters reveal common characteristics of yield response to climate; however models rarely share the same cluster at all four sites indicating substantial independence. Only a weak relationship (R2 = 0.24) was found between the models' sensitivities to interannual temperature variability and their response to long-term warming, suggesting that additional processes differentiate climate change impacts from observed climate variability analogs and motivating continuing analysis and model development efforts.

16. Stochastic modeling of interannual variation of hydrologic variables

Science.gov (United States)

Dralle, David; Karst, Nathaniel; Müller, Marc; Vico, Giulia; Thompson, Sally E.

2017-07-01

Quantifying the interannual variability of hydrologic variables (such as annual flow volumes, and solute or sediment loads) is a central challenge in hydrologic modeling. Annual or seasonal hydrologic variables are themselves the integral of instantaneous variations and can be well approximated as an aggregate sum of the daily variable. Process-based, probabilistic techniques are available to describe the stochastic structure of daily flow, yet estimating interannual variations in the corresponding aggregated variable requires consideration of the autocorrelation structure of the flow time series. Here we present a method based on a probabilistic streamflow description to obtain the interannual variability of flow-derived variables. The results provide insight into the mechanistic genesis of interannual variability of hydrologic processes. Such clarification can assist in the characterization of ecosystem risk and uncertainty in water resources management. We demonstrate two applications, one quantifying seasonal flow variability and the other quantifying net suspended sediment export.
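
The core point above, that the autocorrelation structure of the daily series controls the interannual variance of the aggregated variable, can be illustrated with a toy simulation. A stationary AR(1) daily "flow" is our simplifying assumption here; the paper itself develops a process-based probabilistic streamflow description, which is not reproduced:

```python
import numpy as np

rng = np.random.default_rng(3)

def ar1_daily(n_years, rho, mu=1.0, sigma=0.3, days=365):
    """Stationary AR(1) daily series per year; rho is the lag-1 autocorrelation."""
    x = np.empty((n_years, days))
    x[:, 0] = rng.normal(mu, sigma, n_years)
    innov = rng.normal(0.0, sigma * np.sqrt(1 - rho ** 2), (n_years, days))
    for t in range(1, days):
        x[:, t] = mu + rho * (x[:, t - 1] - mu) + innov[:, t]
    return x

n_years = 2000
annual_iid = ar1_daily(n_years, rho=0.0).sum(axis=1)   # uncorrelated daily flows
annual_ar = ar1_daily(n_years, rho=0.8).sum(axis=1)    # persistent daily flows

# For an AR(1) process, Var(sum of n days) ~ n * sigma^2 * (1 + rho) / (1 - rho)
# for large n, so positive daily autocorrelation inflates interannual variance.
print("interannual std, rho = 0.0:", round(float(annual_iid.std()), 1))
print("interannual std, rho = 0.8:", round(float(annual_ar.std()), 1))
```

With rho = 0.8 the interannual standard deviation is roughly three times that of the uncorrelated case, even though the daily mean and variance are identical, which is why aggregating daily variables without accounting for autocorrelation understates interannual variability.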

17. Variable selection in Logistic regression model with genetic algorithm.

Science.gov (United States)

Zhang, Zhongheng; Trevino, Victor; Hoseini, Sayed Shahabuddin; Belciug, Smaranda; Boopathi, Arumugam Manivanna; Zhang, Ping; Gorunescu, Florin; Subha, Velappan; Dai, Songshi

2018-02-01

Variable or feature selection is one of the most important steps in model specification. Especially in the case of medical decision making, the direct use of a medical database, without a previous analysis and preprocessing step, is often counterproductive. Variable selection is thus the method of choosing the most relevant attributes from the database in order to build robust learning models and, thus, to improve the performance of the models used in the decision process. In biomedical research, the purpose of variable selection is to select clinically important and statistically significant variables, while excluding unrelated or noise variables. A variety of methods exist for variable selection, but none of them is without limitations. For example, the widely used stepwise approach adds the best variable in each cycle, generally producing an acceptable set of variables; nevertheless, it is limited by the fact that it is commonly trapped in local optima. The best subset approach can systematically search the entire covariate pattern space, but the solution pool can be extremely large with tens to hundreds of variables, as is the case in today's clinical data. Genetic algorithms (GA) are heuristic optimization approaches and can be used for variable selection in multivariable regression models. This tutorial paper aims to provide a step-by-step approach to the use of GA in variable selection. The R code provided in the text can be extended and adapted to other data analysis needs.
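The GA mechanics the tutorial describes (encode each candidate subset as a bitmask, then select, cross over, and mutate) can be sketched in a few lines. This is a hedged toy, not the paper's R implementation: the fitness function here is a stand-in (negative Hamming distance to a hypothetical "informative" subset), where in practice it would be an AIC/BIC or cross-validated score of a logistic regression refit for each subset:

```python
import random

random.seed(42)
N_VARS = 8
TRUE_SUBSET = [1, 0, 1, 1, 0, 0, 1, 0]   # hypothetical informative variables

def fitness(mask):
    # Stand-in score: closer to the informative subset is better (max = 0).
    return -sum(m != t for m, t in zip(mask, TRUE_SUBSET))

def crossover(a, b):
    cut = random.randrange(1, N_VARS)     # single-point crossover
    return a[:cut] + b[cut:]

def mutate(mask, rate=0.1):
    return [1 - m if random.random() < rate else m for m in mask]

population = [[random.randint(0, 1) for _ in range(N_VARS)] for _ in range(30)]
for _ in range(100):
    population.sort(key=fitness, reverse=True)
    elite = population[:10]               # elitism: keep the fittest subsets
    children = [mutate(crossover(random.choice(elite), random.choice(elite)))
                for _ in range(20)]
    population = elite + children

best = max(population, key=fitness)
print(best, fitness(best))
```

Swapping the stand-in `fitness` for a model-based criterion turns this into the variable-selection loop the abstract describes.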

18. Usability Evaluation of Variability Modeling by means of Common Variability Language

Directory of Open Access Journals (Sweden)

Jorge Echeverria

2015-12-01

Full Text Available Common Variability Language (CVL) is a recent proposal for OMG's upcoming Variability Modeling standard. CVL models variability in terms of model fragments. Usability is a widely recognized quality criterion essential to guarantee the successful use of tools that put these ideas into practice. Facing the need to evaluate the usability of CVL modeling tools, this paper presents a usability evaluation of CVL applied to a modeling tool for firmware code of induction hobs. This evaluation addresses the configuration, scoping and visualization facets. The evaluation involved the end users of the tool, who are engineers of our induction hob industrial partner. Effectiveness and efficiency results indicate that model configuration in terms of model fragment substitutions is intuitive enough, but both scoping and visualization require improved tool support. The results also enabled us to identify a list of usability problems which may contribute to alleviating scoping and visualization issues in CVL.

19. Coevolution of variability models and related software artifacts

DEFF Research Database (Denmark)

Passos, Leonardo; Teixeira, Leopoldo; Dinztner, Nicolas

2015-01-01

models coevolve with other artifact types, we study a large and complex real-world variant-rich software system: the Linux kernel. Specifically, we extract variability-coevolution patterns capturing changes in the variability model of the Linux kernel with subsequent changes in Makefiles and C source...

20. Modeling first impressions from highly variable facial images.

Science.gov (United States)

Vernon, Richard J W; Sutherland, Clare A M; Young, Andrew W; Hartley, Tom

2014-08-12

First impressions of social traits, such as trustworthiness or dominance, are reliably perceived in faces, and despite their questionable validity they can have considerable real-world consequences. We sought to uncover the information driving such judgments, using an attribute-based approach. Attributes (physical facial features) were objectively measured from feature positions and colors in a database of highly variable "ambient" face photographs, and then used as input for a neural network to model factor dimensions (approachability, youthful-attractiveness, and dominance) thought to underlie social attributions. A linear model based on this approach was able to account for 58% of the variance in raters' impressions of previously unseen faces, and factor-attribute correlations could be used to rank attributes by their importance to each factor. Reversing this process, neural networks were then used to predict facial attributes and corresponding image properties from specific combinations of factor scores. In this way, the factors driving social trait impressions could be visualized as a series of computer-generated cartoon face-like images, depicting how attributes change along each dimension. This study shows that despite enormous variation in ambient images of faces, a substantial proportion of the variance in first impressions can be accounted for through linear changes in objectively defined features.

1. Clinical prediction in defined populations: a simulation study investigating when and how to aggregate existing models

Directory of Open Access Journals (Sweden)

Glen P. Martin

2017-01-01

Full Text Available Abstract Background Clinical prediction models (CPMs) are increasingly deployed to support healthcare decisions, but they are derived inconsistently, in part due to limited data. An emerging alternative is to aggregate existing CPMs developed for similar settings and outcomes. This simulation study aimed to investigate the impact of between-population heterogeneity and sample size on aggregating existing CPMs in a defined population, compared with developing a model de novo. Methods Simulations were designed to mimic a scenario in which multiple CPMs for a binary outcome had been derived in distinct, heterogeneous populations, with potentially different predictors available in each. We then generated a new ‘local’ population and compared the performance of CPMs developed for this population by aggregation, using stacked regression, principal component analysis or partial least squares, with redevelopment from scratch using backwards selection and penalised regression. Results While redevelopment approaches resulted in models that were miscalibrated for local datasets of less than 500 observations, model aggregation methods were well calibrated across all simulation scenarios. When the size of the local data was less than 1000 observations and between-population heterogeneity was small, aggregating existing CPMs gave better discrimination and had the lowest mean square error in the predicted risks compared with deriving a new model. Conversely, given greater than 1000 observations and significant between-population heterogeneity, redevelopment outperformed the aggregation approaches. In all other scenarios, both aggregation and de novo derivation resulted in similar predictive performance. Conclusion This study demonstrates a pragmatic approach to contextualising CPMs to defined populations. When aiming to develop models in defined populations, modellers should consider existing CPMs, with aggregation approaches being a suitable modelling
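The stacked-regression idea behind CPM aggregation reduces to treating the existing models' predictions as inputs and learning blending weights on the local data. The sketch below is purely illustrative (made-up outcomes and two hypothetical CPMs, with a grid search standing in for the constrained least-squares fit a real stacking implementation would use):

```python
# Local outcomes and the predicted risks from two existing models:
y       = [0.10, 0.40, 0.35, 0.80, 0.60]
model_a = [0.20, 0.30, 0.30, 0.70, 0.50]   # hypothetical CPM "A"
model_b = [0.00, 0.50, 0.40, 0.90, 0.70]   # hypothetical CPM "B"

def sse(w):
    """Squared error of the weighted blend w*A + (1-w)*B on local data."""
    return sum((w * a + (1 - w) * b - t) ** 2
               for a, b, t in zip(model_a, model_b, y))

# Grid search over the mixing weight; stacked regression would instead
# solve this by (constrained) least squares over all candidate models.
weights = [i / 100 for i in range(101)]
best_w = min(weights, key=sse)
print(best_w, sse(best_w))
```

Here the two toy models err in opposite directions, so the blend outperforms either model alone; with real CPMs the weights quantify how much each existing model contributes in the local population.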

2. User Defined Data in the New Analysis Model of the BaBar Experiment

Energy Technology Data Exchange (ETDEWEB)

De Nardo, G.

2005-04-06

The BaBar experiment has recently revised its Analysis Model. One of the key ingredients of the new Analysis Model is support for adding user-defined data to the Event Store; such data can be the output of complex computations performed at an advanced stage of a physics analysis and are associated with analysis objects. In order to provide flexibility and extensibility with respect to object types, template generic programming has been adopted. In this way the model is non-intrusive with respect to the reconstruction and analysis objects it manages, requiring no changes to their interfaces and implementations. Technological details are hidden as much as possible from the user, providing a simple interface. In this paper we present some of the limitations of the old model and how they are addressed by the new Analysis Model.

3. Variable amplitude fatigue, modelling and testing

International Nuclear Information System (INIS)

Svensson, Thomas.

1993-01-01

Problems related to metal fatigue modelling and testing are treated here in four different papers. In the first paper, different views of the subject are summarised in a literature survey. In the second paper, a new model for fatigue life is investigated; experimental results are established which are promising for further development of the model. In the third paper, a method is presented that generates a stochastic process suitable for fatigue testing. The process is designed to resemble certain fatigue-related features in service life processes. In the fourth paper, fatigue problems in transport vibrations are treated.

4. ABA versus TEACCH: the case for defining and validating comprehensive treatment models in autism.

Science.gov (United States)

Callahan, Kevin; Shukla-Mehta, Smita; Magee, Sandy; Wie, Min

2010-01-01

The authors analyzed the results of a social validation survey to determine if autism service providers including special education teachers, parents, and administrators demonstrate a preference for the intervention components of Applied Behavior Analysis or Training and Education of Autistic and other Communication Handicapped Children. They also investigated the comprehensiveness of these treatment models for use in public school programs. The findings indicate no clear preference for either model, but a significantly higher level of social validity for components inherent in both approaches. The authors discuss the need for research to define what is meant by comprehensive programming in autism.

5. Software-defined networking model for smart transformers with ISO/IEC/IEEE 21451 sensors

Directory of Open Access Journals (Sweden)

Longhua Guo

2017-06-01

Full Text Available The advanced IEC 61850 smart transformer has shown improved performance in monitoring, controlling, and protecting equipment in smart substations. However, heterogeneity, feasibility, and network control problems have limited the smart transformer's performance in networks. To address these issues, a software-defined networking model was proposed using ISO/IEC/IEEE 21451 networks. An IEC-61850-based network controller was designed as a new kind of intelligent electrical device (IED). The proposed data and information models enhanced the network awareness ability and facilitated the access of smart sensors in the transformer to communication networks. The performance evaluation results showed improved efficiency.

6. Leveraging healthcare utilization to explore outcomes from musculoskeletal disorders: methodology for defining relevant variables from a health services data repository.

Science.gov (United States)

Rhon, Daniel I; Clewley, Derek; Young, Jodi L; Sissel, Charles D; Cook, Chad E

2018-01-31

Large healthcare databases, with their ability to collect many variables from daily medical practice, greatly enable health services research. These longitudinal databases provide large cohorts and longitudinal time frames, allowing for highly pragmatic assessment of healthcare delivery. The purpose of this paper is to discuss the methodology related to the use of the United States Military Health System Data Repository (MDR) for longitudinal assessment of musculoskeletal clinical outcomes, as well as address challenges of using this data for outcomes research. The Military Health System manages care for approximately 10 million beneficiaries worldwide. Multiple data sources pour into the MDR from multiple levels of care (inpatient, outpatient, military or civilian facility, combat theater, etc.) at the individual patient level. To provide meaningful and descriptive coding for longitudinal analysis, specific coding for timing and type of care, procedures, medications, and provider type must be performed. Assumptions often made in clinical trials do not apply to these cohorts, requiring additional steps in data preparation to reduce risk of bias. The MDR has a robust system in place to validate the quality and accuracy of its data, reducing risk of analytic error. Details for making this data suitable for analysis of longitudinal orthopaedic outcomes are provided. Although some limitations exist, proper preparation and understanding of the data can limit bias, and allow for robust and meaningful analyses. There is the potential for strong precision, as well as the ability to collect a wide range of variables in very large groups of patients otherwise not captured in traditional clinical trials. This approach contributes to the improved understanding of the accessibility, quality, and cost of care for those with orthopaedic conditions. The MDR provides a robust pool of longitudinal healthcare data at the person-level. The benefits of using the MDR database appear to

7. Circular Business Models: Defining a Concept and Framing an Emerging Research Field

Directory of Open Access Journals (Sweden)

Julia L. K. Nußholz

2017-10-01

8. Linear latent variable models: the lava-package

DEFF Research Database (Denmark)

Holst, Klaus Kähler; Budtz-Jørgensen, Esben

2013-01-01

An R package for specifying and estimating linear latent variable models is presented. The philosophy of the implementation is to separate the model specification from the actual data, which leads to a dynamic and easy way of modeling complex hierarchical structures. Several advanced features...... are implemented including robust standard errors for clustered correlated data, multigroup analyses, non-linear parameter constraints, inference with incomplete data, maximum likelihood estimation with censored and binary observations, and instrumental variable estimators. In addition an extensive simulation...

9. Defining metrics of the Quasi-Biennial Oscillation in global climate models

Directory of Open Access Journals (Sweden)

V. Schenzinger

2017-06-01

Full Text Available As the dominant mode of variability in the tropical stratosphere, the Quasi-Biennial Oscillation (QBO) has been subject to extensive research. Though there is a well-developed theory of this phenomenon being forced by wave–mean flow interaction, simulating the QBO adequately in global climate models still remains difficult. This paper presents a set of metrics to characterize the morphology of the QBO using a number of different reanalysis datasets and the FU Berlin radiosonde observation dataset. The same metrics are then calculated from Coupled Model Intercomparison Project 5 and Chemistry-Climate Model Validation Activity 2 simulations which included a representation of QBO-like behaviour, to evaluate which aspects of the QBO are well captured by the models and which ones remain a challenge for future model development.

10. A Polynomial Term Structure Model with Macroeconomic Variables

Directory of Open Access Journals (Sweden)

José Valentim Vicente

2007-06-01

Full Text Available Recently, a myriad of factor models including macroeconomic variables have been proposed to analyze the yield curve. We present an alternative factor model where term structure movements are captured by Legendre polynomials mimicking the statistical factor movements identified by Litterman and Scheinkman (1991). We estimate the model with Brazilian Foreign Exchange Coupon data, adopting a Kalman filter, under two versions: the first uses only latent factors and the second includes macroeconomic variables. We study its ability to predict out-of-sample term structure movements, when compared to a random walk. We also discuss results on the impulse response function of macroeconomic variables.
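The Legendre-polynomial loadings behind such a model are easy to sketch: P0 acts as a level factor, P1 as slope, and P2 as curvature once maturities are mapped onto [-1, 1]. The code below is illustrative only (the maturities and factor scores are invented, not the paper's estimates):

```python
def legendre(n, x):
    """Legendre polynomial P_n(x) via the Bonnet recurrence."""
    p0, p1 = 1.0, x
    if n == 0:
        return p0
    for k in range(1, n):
        p0, p1 = p1, ((2 * k + 1) * x * p1 - k * p0) / (k + 1)
    return p1

def yield_curve(maturities, betas):
    """Yields as a linear combination of Legendre loadings."""
    t_min, t_max = min(maturities), max(maturities)
    out = []
    for t in maturities:
        x = -1.0 + 2.0 * (t - t_min) / (t_max - t_min)   # map to [-1, 1]
        out.append(sum(b * legendre(n, x) for n, b in enumerate(betas)))
    return out

# Hypothetical factor scores: level 10%, slope 2%, curvature -1%
curve = yield_curve([0.25, 1, 2, 5, 10], [0.10, 0.02, -0.01])
print(curve)
```

In the paper's setting the factor scores (betas) become latent states estimated through the Kalman filter; the polynomial loadings themselves stay fixed.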

11. Psychosocial and demographic variables associated with consumer intention to purchase sustainably produced foods as defined by the Midwest Food Alliance.

Science.gov (United States)

Robinson, Ramona; Smith, Chery

2002-01-01

To examine psychosocial and demographic variables associated with consumer intention to purchase sustainably produced foods using an expanded Theory of Planned Behavior. Consumers were approached at the store entrance and asked to complete a self-administered survey at three metropolitan Minnesota grocery stores. Participants (n = 550) were adults who shopped at the stores; the majority were white, female, and highly educated and earned ≥ $50,000/year. Participation rates averaged 62%. The major domain investigated was consumer support for sustainably produced foods. Demographics, beliefs, attitudes, subjective norm, self-identity, and perceived behavioral control were evaluated as predictors of intention to purchase them. Descriptive statistics, independent t tests, one-way analysis of variance, Pearson product moment correlation coefficients, and stepwise multiple regression analyses were used. Consumers were supportive of sustainably produced foods but not highly confident in their ability to purchase them. Independent predictors of intention to purchase them included attitudes, beliefs, perceived behavioral control, subjective norm, past buying behavior, and marital status. Beliefs, attitudes, and confidence level may influence intention to purchase sustainably produced foods. Nutrition educators could increase consumers' awareness of sustainably produced foods by understanding their beliefs, attitudes, and confidence levels.

12. Defining pharmacy and its practice: a conceptual model for an international audience

Directory of Open Access Journals (Sweden)

Scahill SL

2017-05-01

Full Text Available SL Scahill,1 M Atif,2 ZU Babar3,4 1School of Management, Massey Business School, Massey University, Albany, Auckland, New Zealand; 2Pharmacy School, The Islamia University of Bahawalpur, Bahawalpur, Pakistan; 3School of Pharmacy, University of Huddersfield, Huddersfield, England, UK; 4School of Pharmacy, Faculty of Medical and Health Sciences, University of Auckland, Auckland, New Zealand Background: There is much fragmentation and little consensus in the use of descriptors for the different disciplines that make up the pharmacy sector. Globalization, reprofessionalization and the influx of other disciplines means there is a requirement for a greater degree of standardization. This has not been well addressed in the pharmacy practice research and education literature. Objectives: To identify and define the various subdisciplines of the pharmacy sector and integrate them into an internationally relevant conceptual model based on narrative synthesis of the literature. Methods: A literature review was undertaken to understand the fragmentation in dialogue surrounding definitions relating to concepts and practices in the context of the pharmacy sector. From a synthesis of this literature, the need for this model was justified. Key assumptions of the model were identified, and an organic process of development took place with the three authors engaging in a process of sense-making to theorize the model. Results: The model is “fit for purpose” across multiple countries and includes two components making up the umbrella term “pharmaceutical practice”. The first component is the four conceptual dimensions, which outline the disciplines including social and administrative sciences, community pharmacy, clinical pharmacy and pharmaceutical sciences. The second component of the model describes the “acts of practice”: teaching, research and professional advocacy; service and academic enterprise. Conclusions: This model aims to expose issues

13. Selecting candidate predictor variables for the modelling of post ...

African Journals Online (AJOL)

Selecting candidate predictor variables for the modelling of post-discharge mortality from sepsis: a protocol development project. Afri. Health Sci. … Initial list of candidate predictor variables (N = 17), grouped by domain: Clinical (vital signs: HR, RR, BP, T; oxygen saturation), Laboratory (hemoglobin; blood culture), Social/Demographic (age; sex).

14. Variable-Structure Control of a Model Glider Airplane

Science.gov (United States)

Waszak, Martin R.; Anderson, Mark R.

2008-01-01

A variable-structure control system designed to enable a fuselage-heavy airplane to recover from spin has been demonstrated in a hand-launched, instrumented model glider airplane. Variable-structure control is a high-speed switching feedback control technique that has been developed for control of nonlinear dynamic systems.

15. Interdecadal variability in a global coupled model

International Nuclear Information System (INIS)

Storch, J.S. von.

1994-01-01

Interdecadal variations are studied in a 325-year simulation performed by a coupled atmosphere–ocean general circulation model. The patterns obtained in this study may be considered characteristic patterns for interdecadal variations. 1. The atmosphere: Interdecadal variations have no preferred time scales, but reveal well-organized spatial structures. They appear as two modes, one related to variations of the tropical easterlies and the other to the Southern Hemisphere westerlies. Both have red spectra. The amplitude of the associated wind anomalies is largest in the upper troposphere. The associated temperature anomalies are in thermal-wind balance with the zonal winds and are out of phase between the troposphere and the lower stratosphere. 2. The Pacific Ocean: The dominant mode in the Pacific appears to be wind-driven in the midlatitudes and is related to air-sea interaction processes during one stage of the oscillation in the tropics. Anomalies of this mode propagate westward in the tropics and then northward (southwestward) in the North (South) Pacific on a time scale of about 10 to 20 years. (orig.)

16. Using Enthalpy as a Prognostic Variable in Atmospheric Modelling with Variable Composition

Science.gov (United States)

2016-04-14

DOI: 10.1002/qj.345. Abstract: Specific enthalpy emerges from a general form of the … trajectories depending on sources, sinks, and fluxes of individual tracers. Specific enthalpy, h = cpT, (1) where cp is the specific heat capacity at constant pressure …
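The "variable composition" aspect of equation (1) is that cp is itself a mass-weighted mixture property, so h = cp T changes as tracer fractions change. A minimal numerical sketch (the heat capacities are approximate round numbers for illustration, not the paper's values):

```python
# Specific enthalpy h = cp * T, with cp a mass-weighted mixture heat capacity.
CP = {"dry_air": 1004.0, "water_vapor": 1850.0}   # J kg^-1 K^-1 (approx.)

def mixture_cp(mass_fractions):
    """cp of the mixture: sum_i q_i * cp_i over tracer mass fractions q_i."""
    return sum(q * CP[name] for name, q in mass_fractions.items())

def specific_enthalpy(temperature_k, mass_fractions):
    return mixture_cp(mass_fractions) * temperature_k

h_dry = specific_enthalpy(300.0, {"dry_air": 1.0})
h_moist = specific_enthalpy(300.0, {"dry_air": 0.99, "water_vapor": 0.01})
print(h_dry, h_moist)
```

Even a 1% water-vapor mass fraction raises the mixture cp, and hence h at the same temperature, which is why enthalpy rather than temperature is attractive as the prognostic variable when composition varies.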

17. Modeling Psychological Attributes in Psychology – An Epistemological Discussion: Network Analysis vs. Latent Variables

Science.gov (United States)

Guyon, Hervé; Falissard, Bruno; Kop, Jean-Luc

2017-01-01

18. Multiple Imputation of Predictor Variables Using Generalized Additive Models

NARCIS (Netherlands)

de Jong, Roel; van Buuren, Stef; Spiess, Martin

2016-01-01

The sensitivity of multiple imputation methods to deviations from their distributional assumptions is investigated using simulations, where the parameters of scientific interest are the coefficients of a linear regression model, and values in predictor variables are missing at random. The

19. Higher-dimensional cosmological model with variable gravitational ...

MS received 9 February 2004; revised 19 June 2004; accepted 12 August 2004. Abstract. We have studied five-dimensional homogeneous cosmological models with variable G and bulk viscosity in Lyra geometry. Exact solutions for the field ...

20. Higher-dimensional cosmological model with variable gravitational ...

We have studied five-dimensional homogeneous cosmological models with variable G and bulk viscosity in Lyra geometry. Exact solutions for the field equations have been obtained and physical properties of the models are discussed. It has been observed that the results of new models are well within the observational ...

1. A calibration hierarchy for risk models was defined: from utopia to empirical data.

Science.gov (United States)

Van Calster, Ben; Nieboer, Daan; Vergouwe, Yvonne; De Cock, Bavo; Pencina, Michael J; Steyerberg, Ewout W

2016-06-01

Calibrated risk models are vital for valid decision support. We define four levels of calibration and describe implications for model development and external validation of predictions. We present results based on simulated data sets. A common definition of calibration is "having an event rate of R% among patients with a predicted risk of R%," which we refer to as "moderate calibration." Weaker forms of calibration only require the average predicted risk (mean calibration) or the average prediction effects (weak calibration) to be correct. "Strong calibration" requires that the event rate equal the predicted risk for every covariate pattern. This implies that the model is fully correct for the validation setting. We argue that this is unrealistic: the model type may be incorrect, the linear predictor is only asymptotically unbiased, and all nonlinear and interaction effects would have to be correctly modeled. In addition, we prove that moderate calibration guarantees nonharmful decision making. Finally, results indicate that a flexible assessment of calibration in small validation data sets is problematic. Strong calibration is desirable for individualized decision support but unrealistic and counterproductive, stimulating the development of overly complex models. Model development and external validation should focus on moderate calibration. Copyright © 2016 Elsevier Inc. All rights reserved.
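The first two levels in this hierarchy can be checked with elementary arithmetic. The sketch below uses a hand-built toy data set in which each risk group's event rate matches its predicted risk exactly, so both checks come out clean (real validation data would not be this tidy):

```python
# Three risk groups: predicted 0.2 (5 patients, 1 event),
# 0.5 (4 patients, 2 events), 0.8 (5 patients, 4 events).
predicted = [0.2] * 5 + [0.5] * 4 + [0.8] * 5
observed  = [1, 0, 0, 0, 0] + [1, 1, 0, 0] + [1, 1, 1, 1, 0]

def mean_calibration(pred, obs):
    """Average predicted risk minus observed event rate (0 = calibrated)."""
    return sum(pred) / len(pred) - sum(obs) / len(obs)

def moderate_calibration_by_group(pred, obs):
    """Event rate among patients sharing each predicted risk value."""
    groups = {}
    for p, o in zip(pred, obs):
        groups.setdefault(p, []).append(o)
    return {p: sum(os) / len(os) for p, os in groups.items()}

print(mean_calibration(predicted, observed))
print(moderate_calibration_by_group(predicted, observed))
```

Mean calibration compares only the averages; the moderate-calibration check ("event rate of R% among patients with predicted risk R%") is the group-wise comparison, which in practice is done with smoothed calibration curves rather than exact grouping.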

2. A variable-order fractal derivative model for anomalous diffusion

Directory of Open Access Journals (Sweden)

Liu Xiaoting

2017-01-01

Full Text Available This paper develops a variable-order fractal derivative model for anomalous diffusion. Previous investigations have indicated that the medium structure, fractal dimension, or porosity may change with time or space during solute transport processes, resulting in time- or space-dependent anomalous diffusion phenomena. This study therefore introduces a variable-order fractal derivative diffusion model, in which the index of the fractal derivative depends on the temporal moment or spatial position, to characterize the above-mentioned anomalous diffusion (or transport) processes. Compared with other models, the main advantages in description and physical explanation of the new model are explored by numerical simulation. Further discussion of differences such as computational efficiency, diffusion behavior, and heavy-tail phenomena between the new model and the variable-order fractional derivative model is also offered.
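For readers unfamiliar with the notation, the constant-order fractal derivative used in this literature, and the variable-order generalization the abstract describes, can be written as follows (a sketch of the standard definitions; the paper's exact formulation may differ in the spatial term):

```latex
% Constant-order fractal derivative of f with respect to t^alpha:
\frac{\partial f(t)}{\partial t^{\alpha}}
  = \lim_{t_1 \to t} \frac{f(t_1) - f(t)}{t_1^{\alpha} - t^{\alpha}},
  \qquad \alpha > 0.

% Variable-order fractal derivative diffusion model: the index alpha is
% allowed to depend on time (or, analogously, on spatial position):
\frac{\partial u(x,t)}{\partial t^{\alpha(t)}}
  = D \, \frac{\partial^2 u(x,t)}{\partial x^{2}}.
```

Letting the index depend on t (or x) is what captures the time- or space-dependent anomalous diffusion mentioned above.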

3. A Non-Gaussian Spatial Generalized Linear Latent Variable Model

KAUST Repository

Irincheeva, Irina

2012-08-03

We consider a spatial generalized linear latent variable model with and without normality distributional assumption on the latent variables. When the latent variables are assumed to be multivariate normal, we apply a Laplace approximation. To relax the assumption of marginal normality in favor of a mixture of normals, we construct a multivariate density with Gaussian spatial dependence and given multivariate margins. We use the pairwise likelihood to estimate the corresponding spatial generalized linear latent variable model. The properties of the resulting estimators are explored by simulations. In the analysis of an air pollution data set the proposed methodology uncovers weather conditions to be a more important source of variability than air pollution in explaining all the causes of non-accidental mortality excluding accidents. © 2012 International Biometric Society.

4. Classification criteria of syndromes by latent variable models

DEFF Research Database (Denmark)

Petersen, Janne

2010-01-01

are shown to be superior depending on whether the latent variable is a dependent or an independent variable. Both these types of scores are extended to the situation of differential item functioning. Analytically I have showed that the scores result in consistent estimates when used properly in subsequent...... of the syndrome. Thus, the results suggested that peripheral lipoatrophy and central lipohypertophy are interrelated phenotypes rather than two independent phenotypes. Part 2: Latent class regression relates explanatory variables to latent classes. In this model no measure of the latent class variable is obtained......The thesis has two parts; one clinical part: studying the dimensions of human immunodeficiency virus associated lipodystrophy syndrome (HALS) by latent class models, and a more statistical part: investigating how to predict scores of latent variables so these can be used in subsequent regression...

5. Possibility/Necessity-Based Probabilistic Expectation Models for Linear Programming Problems with Discrete Fuzzy Random Variables

Directory of Open Access Journals (Sweden)

Hideki Katagiri

2017-10-01

Full Text Available This paper considers linear programming problems (LPPs) where the objective functions involve discrete fuzzy random variables (fuzzy set-valued discrete random variables). New decision-making models, which are useful in fuzzy stochastic environments, are proposed based on both possibility theory and probability theory. In multi-objective cases, Pareto optimal solutions of the proposed models are newly defined. Computational algorithms for obtaining the Pareto optimal solutions of the proposed models are provided. It is shown that problems involving discrete fuzzy random variables can be transformed into deterministic nonlinear mathematical programming problems which can be solved through a conventional mathematical programming solver under practically reasonable assumptions. A numerical example of agricultural production problems is given to demonstrate the applicability of the proposed models to real-world problems in fuzzy stochastic environments.

6. Defining Building Information Modeling implementation activities based on capability maturity evaluation: a theoretical model

Directory of Open Access Journals (Sweden)

Romain Morlhon

2015-01-01

Full Text Available Building Information Modeling (BIM) has become a widely accepted tool to overcome the many hurdles that currently face the Architecture, Engineering and Construction industries. However, implementing such a system is always complex, and the recent introduction of BIM does not allow organizations to build their experience on acknowledged standards and procedures. Moreover, data on implementation projects is still disseminated and fragmentary. The objective of this study is to develop an assistance model for BIM implementation. The solutions proposed will help develop BIM that is better integrated and better used, and take into account the different maturity levels of each organization. Based on critical success factors, concrete activities that help in implementation are identified and can be undertaken according to a prior maturity evaluation of an organization. The result of this research is a structured model linking maturity, success factors and actions, which operates on the following principle: once an organization has assessed its BIM maturity, it can identify various weaknesses and find relevant answers in the success factors and the associated actions.

7. On the ""early-time"" evolution of variables relevant to turbulence models for the Rayleigh-Taylor instability

Energy Technology Data Exchange (ETDEWEB)

Rollin, Bertrand [Los Alamos National Laboratory; Andrews, Malcolm J [Los Alamos National Laboratory

2010-01-01

We present our progress toward setting initial conditions in variable density turbulence models. In particular, we concentrate our efforts on the BHR turbulence model for turbulent Rayleigh-Taylor instability. Our approach is to predict profiles of relevant variables before fully turbulent regime and use them as initial conditions for the turbulence model. We use an idealized model of mixing between two interpenetrating fluids to define the initial profiles for the turbulence model variables. Velocities and volume fractions used in the idealized mixing model are obtained respectively from a set of ordinary differential equations modeling the growth of the Rayleigh-Taylor instability and from an idealization of the density profile in the mixing layer. A comparison between predicted profiles for the turbulence model variables and profiles of the variables obtained from low Atwood number three dimensional simulations show reasonable agreement.

8. Simulation of quantitative characters by genes with biochemically definable action. VI. Modifications of a simple model.

Science.gov (United States)

Forkmann, G; Seyffert, W

1977-03-01

Investigations on metric characters of defined genotypes of Matthiola incana, and application of different linear models for the estimation of genetic parameters, indicate that the use of the midparental value as a reference point results in parameter estimates that do not correspond to the actual biological situation. Use of the most recessive genotype as a reference point causes all of the contributions of single loci to be unidirectional and positive, and all the allelic and nonallelic interactions to be unidirectional and negative, in accord with our Model 2.2. The results indicate that the phenotypic response to allelic substitutions follows the characteristics of a saturation curve. The possibility is discussed that the saturation character results from regulating processes, whereas deviations of single measurements from the response curve, or response surface, reflect real interactions between allelic and nonallelic genes.

9. Bayesian approach to errors-in-variables in regression models

Science.gov (United States)

2017-05-01

In many applications and experiments, data sets are contaminated with error or mismeasured covariates. When at least one of the covariates in a model is measured with error, an Errors-in-Variables (EIV) model can be used. Measurement error, when not corrected, causes misleading statistical inference and analysis. Therefore, our goal is to examine the relationship between the outcome variable and the unobserved exposure variable, given the observed mismeasured surrogate, by applying the Bayesian formulation to the EIV model. We extend the flexible parametric method proposed by Hossain and Gustafson (2009) to another nonlinear regression model, the Poisson regression model. We then illustrate the application of this approach via a simulation study using Markov chain Monte Carlo sampling methods.
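As a minimal illustration of why uncorrected measurement error misleads inference (the motivation above), the classical attenuation effect can be sketched in a few lines of NumPy. This is a toy linear-model demonstration under assumed noise levels, not the authors' Bayesian Poisson EIV method:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200_000
x = rng.normal(0.0, 1.0, n)           # true (unobserved) covariate
w = x + rng.normal(0.0, 1.0, n)       # observed surrogate, measured with error
y = 2.0 * x + rng.normal(0.0, 0.5, n)

# Regressing on the mismeasured surrogate attenuates the slope toward zero
# by the reliability ratio var(x) / (var(x) + var(u)) = 0.5 here.
slope_x = np.polyfit(x, y, 1)[0]      # ≈ 2.0 (truth)
slope_w = np.polyfit(w, y, 1)[0]      # ≈ 1.0 (attenuated)
print(round(slope_x, 2), round(slope_w, 2))
```

Correcting for this bias, rather than ignoring it, is what the Bayesian EIV formulation is designed to do.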

10. Model and Variable Selection Procedures for Semiparametric Time Series Regression

Directory of Open Access Journals (Sweden)

Risa Kato

2009-01-01

Full Text Available Semiparametric regression models are very useful for time series analysis. They facilitate the detection of features resulting from external interventions. The complexity of semiparametric models poses new challenges for issues of nonparametric and parametric inference and model selection that frequently arise in time series data analysis. In this paper, we propose penalized least squares estimators that can simultaneously select significant variables and estimate unknown parameters. An innovative class of variable selection procedures is proposed to select significant variables and basis functions in a semiparametric model. The asymptotic normality of the resulting estimators is established. Information criteria for model selection are also proposed. We illustrate the effectiveness of the proposed procedures with numerical simulations.
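A hedged sketch of the penalized least squares idea: LASSO-type coordinate descent that selects variables and estimates coefficients simultaneously. The design, penalty level, and data below are illustrative, and the paper's semiparametric basis-function machinery is omitted:

```python
import numpy as np

def lasso_cd(X, y, lam, n_sweeps=200):
    """Coordinate-descent LASSO for (1/2n)||y - Xb||^2 + lam * ||b||_1."""
    n, p = X.shape
    b = np.zeros(p)
    col_sq = (X ** 2).sum(axis=0) / n
    for _ in range(n_sweeps):
        for j in range(p):
            r = y - X @ b + X[:, j] * b[j]        # partial residual excluding j
            rho = X[:, j] @ r / n
            b[j] = np.sign(rho) * max(abs(rho) - lam, 0.0) / col_sq[j]
    return b

rng = np.random.default_rng(1)
n, p = 300, 10
X = rng.normal(size=(n, p))
beta = np.zeros(p)
beta[:3] = [3.0, -2.0, 1.5]                        # only 3 relevant variables
y = X @ beta + rng.normal(scale=0.5, size=n)

b_hat = lasso_cd(X, y, lam=0.2)
print(np.flatnonzero(np.abs(b_hat) > 1e-8))        # indices of selected variables
```

The soft-thresholding update zeroes out coefficients whose partial correlation falls below the penalty, which is the sense in which estimation and selection happen in one pass.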

11. Sensitivity analysis as an aid in modelling and control of (poorly-defined) ecological systems. [closed ecological systems

Science.gov (United States)

Hornberger, G. M.; Rastetter, E. B.

1982-01-01

A literature review of the use of sensitivity analyses in modelling nonlinear, ill-defined systems, such as ecological interactions, is presented. Discussions of previous work and a proposed scheme for generalized sensitivity analysis applicable to ill-defined systems are included. This scheme considers classes of mathematical models, problem-defining behavior, analysis procedures (especially the use of Monte Carlo methods), sensitivity ranking of parameters, and extension to control system design.
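The Monte Carlo scheme described (in the spirit of regionalized sensitivity analysis) can be sketched on a toy model: sample parameters, classify each run as exhibiting the problem-defining behavior or not, and rank parameters by how strongly the two groups' parameter distributions separate. All model and parameter choices below are illustrative:

```python
import numpy as np

def ks_distance(a, b):
    """Two-sample Kolmogorov-Smirnov distance between empirical CDFs."""
    grid = np.sort(np.concatenate([a, b]))
    cdf_a = np.searchsorted(np.sort(a), grid, side="right") / len(a)
    cdf_b = np.searchsorted(np.sort(b), grid, side="right") / len(b)
    return float(np.max(np.abs(cdf_a - cdf_b)))

rng = np.random.default_rng(2)
n = 5000
growth = rng.uniform(0.0, 1.0, n)     # candidate sensitive parameter
capacity = rng.uniform(0.5, 1.5, n)   # candidate (mostly) insensitive parameter

# Toy model: logistic growth from x0 = 0.1 over 10 time steps
x = np.full(n, 0.1)
for _ in range(10):
    x = x + growth * x * (1 - x / capacity)

behaviour = x > 0.5   # problem-defining behavior: population established

# Rank parameters by separation between behavior / non-behavior samples
d_growth = ks_distance(growth[behaviour], growth[~behaviour])
d_capacity = ks_distance(capacity[behaviour], capacity[~behaviour])
print(round(d_growth, 2), round(d_capacity, 2))
```

A large distance flags a parameter the behavior is sensitive to; here the growth rate separates the two groups far more strongly than the carrying capacity.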

12. Financial applications of a Tabu search variable selection model

Directory of Open Access Journals (Sweden)

Zvi Drezner

2001-01-01

Full Text Available We illustrate how a comparatively new technique, a Tabu search variable selection model [Drezner, Marcoulides and Salhi (1999)], can be applied efficiently within finance when the researcher must select a subset of variables from among the whole set of explanatory variables under consideration. Several types of problems in finance, including corporate and personal bankruptcy prediction, mortgage and credit scoring, and the selection of variables for the Arbitrage Pricing Model, require the researcher to select a subset of variables from a larger set. In order to demonstrate the usefulness of the Tabu search variable selection model, we: (1) illustrate its efficiency in comparison to the main alternative search procedures, such as stepwise regression and the Maximum R2 procedure; and (2) show how a version of the Tabu search procedure may be implemented when attempting to predict corporate bankruptcy. We accomplish (2) by showing that the Tabu search procedure increases the predictability of corporate bankruptcy by up to 10 percentage points in comparison to Altman's (1968) Z-Score model.
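A minimal sketch of Tabu search for variable selection, assuming a BIC-style score rather than the authors' exact objective: each move flips one variable in or out, recently flipped variables are tabu for a few moves, and the best subset seen is retained:

```python
import numpy as np

def bic_score(X, y, subset):
    """Negative BIC of an OLS fit on the chosen columns (higher is better)."""
    n = len(y)
    if subset:
        Xs = X[:, sorted(subset)]
        beta, *_ = np.linalg.lstsq(Xs, y, rcond=None)
        rss = float(np.sum((y - Xs @ beta) ** 2))
    else:
        rss = float(np.sum(y ** 2))
    return -(n * np.log(rss / n) + len(subset) * np.log(n))

def tabu_select(X, y, n_moves=60, tenure=4):
    """Tabu search over variable subsets: flip one variable per move,
    forbid re-flipping recent variables, keep the best subset seen."""
    p = X.shape[1]
    current = set()
    best, best_score = set(), bic_score(X, y, set())
    tabu = {}  # variable index -> last move at which it is still tabu
    for move in range(n_moves):
        candidates = [(bic_score(X, y, current ^ {j}), j, current ^ {j})
                      for j in range(p) if tabu.get(j, -1) < move]
        score, j, current = max(candidates)   # best admissible neighbour
        tabu[j] = move + tenure
        if score > best_score:
            best, best_score = set(current), score
    return best

rng = np.random.default_rng(4)
n, p = 200, 8
X = rng.normal(size=(n, p))
y = 2 * X[:, 0] - 3 * X[:, 4] + rng.normal(scale=0.5, size=n)

selected = tabu_select(X, y)
print(sorted(selected))   # indices of the variables in the best subset found
```

Unlike stepwise regression, the search always moves to the best admissible neighbour even when that worsens the score, which lets it escape local optima; the tabu tenure prevents immediate cycling.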

13. Variable selection for mixture and promotion time cure rate models.

Science.gov (United States)

Masud, Abdullah; Tu, Wanzhu; Yu, Zhangsheng

2016-11-16

Failure-time data with cured patients are common in clinical studies. Data from these studies are typically analyzed with cure rate models. Variable selection methods have not been well developed for cure rate models. In this research, we propose two methods based on the least absolute shrinkage and selection operator (LASSO) for variable selection in mixture and promotion time cure models with parametric or nonparametric baseline hazards. We conduct an extensive simulation study to assess the operating characteristics of the proposed methods. We illustrate the use of the methods using data from a study of childhood wheezing. © The Author(s) 2016.
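For orientation, the two cure-model families named above have simple population survival functions; the Weibull latency distribution below is an assumption for illustration, not taken from the paper:

```python
import numpy as np

def mixture_cure_surv(t, pi, scale, shape):
    """Mixture cure model: S(t) = pi + (1 - pi) * S_u(t); a cured
    fraction pi never experiences the event (Weibull latency assumed)."""
    s_u = np.exp(-(t / scale) ** shape)
    return pi + (1 - pi) * s_u

def promotion_time_surv(t, theta, scale, shape):
    """Promotion time cure model: S(t) = exp(-theta * F(t)); the implied
    cure fraction is exp(-theta) (Weibull F assumed)."""
    F = 1 - np.exp(-(t / scale) ** shape)
    return np.exp(-theta * F)

t = np.linspace(0.0, 50.0, 6)
print(mixture_cure_surv(t, pi=0.3, scale=5.0, shape=1.5))
print(promotion_time_surv(t, theta=1.2, scale=5.0, shape=1.5))
# Both curves plateau: at pi = 0.3 and at exp(-1.2) ≈ 0.301, respectively.
```

The plateau at large t is what distinguishes cure models from ordinary survival models, whose survival functions decay to zero.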

14. Interacting ghost dark energy models with variable G and Λ

Science.gov (United States)

Sadeghi, J.; Khurshudyan, M.; Movsisyan, A.; Farahani, H.

2013-12-01

In this paper we consider several phenomenological models of variable Λ. A model of a flat Universe with variable Λ and G is adopted. It is well known that varying G and Λ gives rise to modified field equations and modified conservation laws, which has led to many different manipulations and assumptions in the literature. We consider a two-component fluid whose parameters enter Λ. The interaction between the fluids with energy densities ρ1 and ρ2 is assumed to be Q = 3Hb(ρ1+ρ2). We numerically analyze important cosmological parameters such as the EoS parameter of the composed fluid and the deceleration parameter q of the model.
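A schematic numerical sketch of interacting two-fluid conservation laws with the stated coupling Q = 3Hb(ρ1+ρ2); the equation-of-state values w1, w2 and coupling strength b below are illustrative, and the equations are integrated in e-folds N = ln a:

```python
import numpy as np

# Assumed forms (illustrative, in e-folds N = ln a):
#   drho1/dN = -3(1 + w1) rho1 - Q/H,   drho2/dN = -3(1 + w2) rho2 + Q/H,
# with Q/H = 3 b (rho1 + rho2), so energy flows from fluid 1 to fluid 2.
w1, w2, b = 0.0, -0.9, 0.02
rho1, rho2 = 1.0, 0.7
dN, steps = 1e-4, 10_000          # integrate over one e-fold

for _ in range(steps):
    q = 3 * b * (rho1 + rho2)     # Q/H for the coupling Q = 3 H b (rho1 + rho2)
    d1 = -3 * (1 + w1) * rho1 - q
    d2 = -3 * (1 + w2) * rho2 + q
    rho1 += d1 * dN
    rho2 += d2 * dN

# Effective EoS parameter of the composed fluid
w_eff = (w1 * rho1 + w2 * rho2) / (rho1 + rho2)
print(round(rho1, 4), round(rho2, 4), round(w_eff, 3))
```

Because the coupling terms cancel in the sum, the total energy density still obeys the combined conservation law regardless of b; the interaction only redistributes energy between the components.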

15. Model for expressing leaf photosynthesis in terms of weather variables

African Journals Online (AJOL)

A theoretical mathematical model for describing photosynthesis in individual leaves in terms of weather variables is proposed. The model utilizes a series of efficiency parameters, each of which reflects the fraction of potential photosynthetic rate permitted by the different environmental elements. These parameters are useful ...

16. Simple model for crop photosynthesis in terms of weather variables ...

African Journals Online (AJOL)

A theoretical mathematical model for describing crop photosynthetic rate in terms of the weather variables and crop characteristics is proposed. The model utilizes a series of efficiency parameters, each of which reflects the fraction of possible photosynthetic rate permitted by the different weather elements or crop architecture.

17. Bayesian variable order Markov models: Towards Bayesian predictive state representations

NARCIS (Netherlands)

Dimitrakakis, C.

2009-01-01

We present a Bayesian variable order Markov model that shares many similarities with predictive state representations. The resulting models are compact and much easier to specify and learn than classical predictive state representations. Moreover, we show that they significantly outperform a more

18. Modeling, analysis and control of a variable geometry actuator

NARCIS (Netherlands)

Evers, W.J.; Knaap, A. van der; Besselink, I.J.M.; Nijmeijer, H.

2008-01-01

A new design of variable geometry force actuator is presented in this paper. Based upon this design, a model is derived which is used for steady-state analysis, as well as controller design in the presence of friction. The controlled actuator model is finally used to evaluate the power consumption

19. Quantitative Analysis of the Security of Software-Defined Network Controller Using Threat/Effort Model

Directory of Open Access Journals (Sweden)

Zehui Wu

2017-01-01

Full Text Available The SDN-based controller, which is responsible for the configuration and management of the network, is the core of Software-Defined Networks. Current methods, which focus on secure mechanisms, use qualitative analysis to estimate the security of controllers, frequently leading to inaccurate results. In this paper, we employ a quantitative approach to overcome this shortcoming. From an analysis of the controller threat model we derive formal models of the APIs, the protocol interfaces, and the data items of the controller, and further provide our Threat/Effort quantitative calculation model. With the help of the Threat/Effort model, we are able to compare not only the security of different versions of the same controller but also different kinds of controllers, providing a basis for controller selection and secure development. We evaluated our approach on four widely used SDN-based controllers: POX, OpenDaylight, Floodlight, and Ryu. The tests, whose outcomes are consistent with traditional qualitative analysis, demonstrate that our approach obtains specific security values for different controllers and produces more accurate results.

20. Myelogenous leukemia in adult inbred MHC-defined miniature swine: a model for human myeloid leukemias.

Science.gov (United States)

Duran-Struuck, Raimon; Cho, Patricia S; Teague, Alexander G S; Fishman, Brian; Fishman, Aaron S; Hanekamp, John S; Moran, Shannon G; Wikiel, Krzysztof J; Ferguson, Kelly K; Lo, Diana P; Duggan, Michael; Arn, J Scott; Billiter, Bob; Horner, Ben; Houser, Stuart; Yeap, Beow Yong; Westmoreland, Susan V; Spitzer, Thomas R; McMorrow, Isabel M; Sachs, David H; Bronson, Roderick T; Huang, Christene A

2010-06-15

This manuscript reports on five cases of spontaneous myelogenous leukemia, similar to human disease, occurring within highly inbred, histocompatible sublines of Massachusetts General Hospital (MGH) MHC-defined miniature swine. In cases where a neoplasm was suspected based on clinical observations, samples were obtained for complete blood count, peripheral blood smear, and flow cytometric analysis. Animals confirmed to have neoplasms were euthanized and underwent necropsy. Histological samples were obtained from abnormal tissues and suspect lesions. The phenotype of the malignancies was assessed by flow cytometric analysis of processed peripheral blood mononuclear cells and affected tissues. Five cases of spontaneous myeloid leukemia were identified in adult animals older than 30 months of age. All animals presented with symptoms of weight loss, lethargy, and marked leukocytosis. At autopsy, all animals had systemic disease involvement and presented with severe hepatosplenomegaly. Three of the five myelogenous leukemias have successfully been expanded in vitro. The clustered incidence of disease in this closed herd suggests that genetic factors may be contributing to disease development. Myelogenous leukemia cell lines established from inbred sublines of MGH MHC-defined miniature swine have the potential to be utilized as a model to evaluate therapies of human leukemia. Copyright 2009 Elsevier B.V. All rights reserved.

1. Understanding and forecasting polar stratospheric variability with statistical models

Directory of Open Access Journals (Sweden)

C. Blume

2012-07-01

Full Text Available The variability of the north-polar stratospheric vortex is a prominent aspect of the middle atmosphere. This work investigates a wide class of statistical models with respect to their ability to model geopotential and temperature anomalies representing variability in the polar stratosphere. Four partly nonstationary, nonlinear models are assessed: linear discriminant analysis (LDA); a cluster method based on finite elements (FEM-VARX); a neural network, namely the multi-layer perceptron (MLP); and support vector regression (SVR). These methods model time series by incorporating all significant external factors simultaneously, including ENSO, the QBO, the solar cycle, and volcanoes, and then quantify their statistical importance. We show that variability in reanalysis data from 1980 to 2005 is successfully modeled. The period from 2005 to 2011 can be hindcast to a certain extent, with MLP performing significantly better than the remaining models. However, some variability remains that cannot be statistically hindcast within the current framework, such as the unexpected major warming in January 2009. Finally, the statistical model with the best generalization performance is used to predict warm and weak vortex conditions for winter 2011/12. A vortex breakdown is predicted for late January or early February 2012.

2. Cross-country transferability of multi-variable damage models

Science.gov (United States)

Wagenaar, Dennis; Lüdtke, Stefan; Kreibich, Heidi; Bouwer, Laurens

2017-04-01

Flood damage assessment is often done with simple damage curves based only on flood water depth. Additionally, damage models are often transferred in space and time, e.g. from region to region or from one flood event to another. Validation has shown that depth-damage curve estimates are associated with high uncertainties, particularly when applied in regions outside the area where the data for curve development was collected. Recently, progress has been made with multi-variable damage models created with data-mining techniques, i.e. Bayesian networks and random forests. However, it is still unknown to what extent and under which conditions model transfers are possible and reliable. Model validations in different countries will provide valuable insights into the transferability of multi-variable damage models. In this study we compare multi-variable models developed on the basis of flood damage datasets from Germany as well as from The Netherlands. Data from several German floods were collected using computer-aided telephone interviews. Data from the 1993 Meuse flood in the Netherlands are available, based on compensations paid by the government. The Bayesian network and random forest based models are applied and validated in both countries on the basis of the individual datasets. A major challenge was the harmonization of the variables between both datasets due to factors like differences in variable definitions and regional and temporal differences in flood hazard and exposure characteristics. Results of model validations and comparisons in both countries are discussed, particularly with respect to encountered challenges and possible solutions for improving model transferability.

3. A hydrological modeling framework for defining achievable performance standards for pesticides.

Science.gov (United States)

Rousseau, Alain N; Lafrance, Pierre; Lavigne, Martin-Pierre; Savary, Stéphane; Konan, Brou; Quilbé, Renaud; Jiapizian, Paul; Amrani, Mohamed

2012-01-01

This paper proposes a hydrological modeling framework to define achievable performance standards (APSs) for pesticides that could be attained after implementation of recommended management actions, agricultural practices, and available technologies (i.e., beneficial management practices [BMPs]). An integrated hydrological modeling system, Gestion Intégrée des Bassins versants à l'aide d'un Système Informatisé, was used to quantify APSs for six Canadian watersheds for eight pesticides: atrazine, carbofuran, dicamba, glyphosate, MCPB, MCPA, metolachlor, and 2,4-D. Outputs from simulation runs to predict pesticide concentration under current conditions and in response to implementation of two types of beneficial management practices (reduced pesticide application rate and 1- to 10-m-wide edge-of-field and/or riparian buffer strips, implemented singly or in combination) showed that APS values for scenarios with BMPs were less than those for current conditions. Moreover, APS values at the outlet of watersheds were usually less than ecological thresholds of good condition, when available. Upstream river reaches were at greater risk of having concentrations above a given ecological threshold because of limited stream flows and overland loads of pesticides. Our integrated approach of "hydrological modeling-APS estimation-ecotoxicological significance" provides the most effective interpretation possible, for management and education purposes, of the potential biological impact of predicted pesticide concentrations in rivers. Copyright © by the American Society of Agronomy, Crop Science Society of America, and Soil Science Society of America, Inc.

4. Mediterranean climate modelling: variability and climate change scenarios

International Nuclear Information System (INIS)

Somot, S.

2005-12-01

Air-sea fluxes, open-sea deep convection and cyclogenesis are studied in the Mediterranean with the development of a regional coupled model (AORCM). It accurately simulates these processes, and their climate variabilities are quantified and studied. The regional coupling shows a significant impact on the number of winter intense cyclogenesis events as well as on the associated air-sea fluxes and precipitation. A lower inter-annual variability than in non-coupled models is simulated for fluxes and deep convection. The feedbacks driving this variability are explained. The climate change response is then analysed for the 21st century with the non-coupled models: cyclogenesis decreases, and the associated precipitation increases in spring and autumn and decreases in summer. Moreover, a warming and salting of the Mediterranean as well as a strong weakening of its thermohaline circulation occur. This study also concludes on the necessity of using AORCMs to assess climate change impacts on the Mediterranean. (author)

5. Classification criteria of syndromes by latent variable models

DEFF Research Database (Denmark)

Petersen, Janne

2010-01-01

analyses. Part 1: HALS engages different phenotypic changes of peripheral lipoatrophy and central lipohypertrophy.  There are several different definitions of HALS and no consensus on the number of phenotypes. Many of the definitions consist of counting fulfilled criteria on markers and do not include......, although this is often desired. I have proposed a new method for predicting class membership that, in contrast to methods based on posterior probabilities of class membership, yields consistent estimates when regressed on explanatory variables in a subsequent analysis. There are four different basic models...... within latent variable models: factor analysis, latent class analysis, latent profile analysis and latent trait analysis. I have given a general overview of how to predict scores of latent variables so these can be used in subsequent regression models. Two different principles of predicting scores...

6. Plasticity models of material variability based on uncertainty quantification techniques

Energy Technology Data Exchange (ETDEWEB)

Jones, Reese E. [Sandia National Lab. (SNL-CA), Livermore, CA (United States); Rizzi, Francesco [Sandia National Lab. (SNL-CA), Livermore, CA (United States); Boyce, Brad [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Templeton, Jeremy Alan [Sandia National Lab. (SNL-CA), Livermore, CA (United States); Ostien, Jakob [Sandia National Lab. (SNL-CA), Livermore, CA (United States)

2017-11-01

The advent of fabrication techniques like additive manufacturing has focused attention on the considerable variability of material response due to defects and other micro-structural aspects. This variability motivates the development of an enhanced design methodology that incorporates inherent material variability to provide robust predictions of performance. In this work, we develop plasticity models capable of representing the distribution of mechanical responses observed in experiments, using traditional plasticity models of the mean response together with recently developed uncertainty quantification (UQ) techniques. Lastly, we demonstrate that the new method provides predictive realizations that are superior to more traditional ones, and show how these UQ techniques can be used in model selection and in assessing the quality of calibrated physical parameters.
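As a toy sketch of the idea (not the authors' models), material variability can be propagated by treating a plasticity parameter as a random variable and pushing samples through a simple 1D elastic-perfectly-plastic law; all numbers below are assumed for illustration:

```python
import numpy as np

rng = np.random.default_rng(7)
E = 200e3                                  # Young's modulus, MPa
sigma_y = rng.normal(350.0, 25.0, 10_000)  # yield stress distribution, MPa

strain = np.linspace(0.0, 0.004, 50)
# Each sample's stress-strain curve: elastic up to yield, flat thereafter.
stress = np.minimum(E * strain[None, :], sigma_y[:, None])

# Ensemble statistics of the response at the final strain level
mean_end = stress[:, -1].mean()
p05, p95 = np.percentile(stress[:, -1], [5, 95])
print(round(mean_end, 1), round(p05, 1), round(p95, 1))
```

The model then predicts a distribution of stress-strain curves rather than a single mean curve, which is the kind of robust performance prediction the abstract describes.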

7. A new modeling approach to define marine ecosystems food-web status with uncertainty assessment

Science.gov (United States)

Chaalali, Aurélie; Saint-Béat, Blanche; Lassalle, Géraldine; Le Loc'h, François; Tecchio, Samuele; Safi, Georges; Savenkoff, Claude; Lobry, Jérémy; Niquil, Nathalie

2015-06-01

Ecosystem models are currently one of the most powerful approaches used to project and analyse the consequences of anthropogenic and climate-driven changes in food web structure and function. The modeling community, however, still finds the effective representation of microbial processes challenging and lacks techniques for assessing flow uncertainty explicitly. A linear inverse model of the Bay of Biscay continental shelf was built using a Monte Carlo method coupled with a Markov chain (LIM-MCMC) to characterize the system's trophic food-web status and its associated structural and functional properties. By taking into account the natural variability of ecosystems (and their associated flows) and the lack of data on these environments, this innovative approach enabled the quantification of uncertainties for both estimated flows and derived food-web indices. This uncertainty assessment constituted a real improvement on the existing Ecopath model for the same area, and the results of the two models were compared. Our results suggested a food web characterized by main flows at the base of the food web and a high contribution of primary producers and detritus to the total system input flows. The developmental stage of the ecosystem was characterized using estimated Ecological Network Analysis (ENA) indices; the LIM-MCMC produced a higher estimate of flow specialization than Ecopath owing to better consideration of bacterial processes. The results also pointed to a detritus-based food web with a web-like structure and an intermediate level of internal flow complexity, confirming the results of previous studies. Other current research on ecosystem model comparability is also presented.

8. The Properties of Model Selection when Retaining Theory Variables

DEFF Research Database (Denmark)

Hendry, David F.; Johansen, Søren

Economic theories are often fitted directly to data to avoid possible model selection biases. We show that when embedding a theory model that specifies the correct set of m relevant exogenous variables, x{t}, within the larger set of m+k candidate variables, (x{t},w{t}), selection over the second set by statistical significance can be undertaken without affecting the estimator distribution of the theory parameters. This strategy returns the theory-parameter estimates when the theory is correct, yet protects against the theory being under-specified because some w{t} are relevant.
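The strategy can be sketched in NumPy: the theory variables x are always retained, while the extra candidates w are selected by significance. The threshold and data-generating process below are illustrative, not the authors' algorithm:

```python
import numpy as np

rng = np.random.default_rng(5)
n, m, k = 500, 2, 10
x = rng.normal(size=(n, m))            # theory-specified relevant variables
w = rng.normal(size=(n, k))            # irrelevant candidate variables
y = x @ np.array([1.0, -2.0]) + rng.normal(scale=1.0, size=n)

# Fit the full model with x always retained and all w candidates included
Z = np.hstack([x, w])
beta, *_ = np.linalg.lstsq(Z, y, rcond=None)
resid = y - Z @ beta
sigma2 = resid @ resid / (n - m - k)
se = np.sqrt(sigma2 * np.diag(np.linalg.inv(Z.T @ Z)))

# Select over the w-block only, by significance (|t| > 2); the theory
# variables x are never eligible for removal.
keep_w = [m + j for j in range(k) if abs(beta[m + j] / se[m + j]) > 2]
Z_sel = np.hstack([x, Z[:, keep_w]])
beta_sel, *_ = np.linalg.lstsq(Z_sel, y, rcond=None)
print(np.round(beta_sel[:m], 2))       # theory-parameter estimates
```

When the theory is correct, most w variables are eliminated and the retained theory-parameter estimates are essentially those of the direct fit; when some w are in fact relevant, they survive selection and guard against under-specification.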

9. Modeling of Fluctuating Mass Flux in Variable Density Flows

Science.gov (United States)

So, R. M. C.; Mongia, H. C.; Nikjooy, M.

1983-01-01

The approach solves for both Reynolds- and Favre-averaged quantities and calculates the scalar pdf. Turbulence models used to close the governing equations are formulated to account for complex mixing and variable density effects. In addition, turbulent mass diffusivities are not assumed to be in constant proportion to turbulent momentum diffusivities. The governing equations are solved by a combination of a finite-difference technique and Monte Carlo simulation. Some preliminary results on simple variable density shear flows are presented. The differences between these results and those obtained using conventional models are discussed.
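For reference, the density-weighted (Favre) average used alongside the conventional Reynolds average in variable density modeling is defined, in standard notation (not reproduced from this abstract), by

```latex
\tilde{f} = \frac{\overline{\rho f}}{\overline{\rho}}, \qquad
f = \tilde{f} + f'', \qquad \overline{\rho\, f''} = 0 ,
```

where the overbar denotes the Reynolds average and f'' the Favre fluctuation. The difference between the two mean velocities is carried by the fluctuating mass flux of the title: \tilde{u} - \overline{u} = \overline{\rho' u'}/\overline{\rho}.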

10. SST Diurnal Variability: Regional Extent & Implications in Atmospheric Modelling

DEFF Research Database (Denmark)

Karagali, Ioanna; Høyer, Jacob L.

2013-01-01

The project Sea Surface Temperature Diurnal Variability: Regional Extent and Implications in Atmospheric Modeling (SSTDV: R.EX.-IM.A.M.) was initiated within the framework of the European Space Agency's Support to Science Element (ESA STSE). The main focus is twofold: i) to characterize and quantify regional diurnal warming from the experimental MSG/SEVIRI hourly SST fields, for the period 2006-2012; ii) to investigate the impact of the increased SST temporal resolution in the atmospheric model WRF, in terms of modeled 10-m winds and surface heat fluxes. Within this context, 3 main tasks ... The impact of SST variability on atmospheric modeling is the prime goal of the third and final task. This will be examined by increasing the temporal resolution of the SST initial conditions in WRF and by evaluating the diurnal scheme included in WRF. Validation of the modeled winds will be performed against 10-m ASAR...

11. An interdisciplinary swat ecohydrological model to define catchment-scale hydrologic partitioning

Science.gov (United States)

Shope, C. L.; Maharjan, G. R.; Tenhunen, J.; Seo, B.; Kim, K.; Riley, J.; Arnhold, S.; Koellner, T.; Ok, Y. S.; Peiffer, S.; Kim, B.; Park, J.-H.; Huwe, B.

2013-06-01

Land use and climate change have long been implicated in modifying ecosystem services, such as water quality and water yield, biodiversity, and agricultural production. To account for future effects on ecosystem services, the integration of physical, biological, economic, and social data over several scales must be implemented to assess the effects on natural resource availability and use. Our objective is to assess the capability of the SWAT model to capture short-duration monsoonal rainfall-runoff processes in complex mountainous terrain under rapid, event-driven conditions. To accomplish this, we developed a unique quality-control gap-filling algorithm for interpolation of high frequency meteorological data. We used a novel multi-location, multi-optimization calibration technique to improve estimations of catchment-wide hydrologic partitioning. We calibrated the interdisciplinary model to a combination of statistical, hydrologic, and plant growth metrics. In addition, we used multiple locations of different drainage area, aspect, elevation, and geologic substrata distributed throughout the catchment. Results indicate scale-dependent sensitivity of hydrologic partitioning and substantial influence of engineered features. While our model accurately reproduced observed discharge variability, the addition of hydrologic and plant growth objective functions identified the importance of culverts in catchment-wide flow distribution. The results of this study provide a valuable resource to describe landscape controls and their implications for discharge, sediment transport, and nutrient loading. This study also shows the challenges of applying the SWAT model to complex terrain and extreme environments. By incorporating anthropogenic features into modeling scenarios, we can greatly enhance our understanding of the hydroecological impacts on ecosystem services.

12. Analytical model of reactive transport processes with spatially variable coefficients.

Science.gov (United States)

Simpson, Matthew J; Morrow, Liam C

2015-05-01

Analytical solutions of partial differential equation (PDE) models describing reactive transport phenomena in saturated porous media are often used as screening tools to provide insight into contaminant fate and transport processes. While many practical modelling scenarios involve spatially variable coefficients, such as spatially variable flow velocity, v(x), or spatially variable decay rate, k(x), most analytical models deal with constant coefficients. Here we present a framework for constructing exact solutions of PDE models of reactive transport. Our approach is relevant for advection-dominant problems, and is based on a regular perturbation technique. We present a description of the solution technique for a range of one-dimensional scenarios involving constant and variable coefficients, and we show that the solutions compare well with numerical approximations. Our general approach applies to a range of initial conditions and various forms of v(x) and k(x). Instead of simply documenting specific solutions for particular cases, we present a symbolic worksheet, as supplementary material, which enables the solution to be evaluated for different choices of the initial condition, v(x) and k(x). We also discuss how the technique generalizes to apply to models of coupled multispecies reactive transport as well as higher dimensional problems.
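A hedged sketch of the simplest special case of such a model: for steady advection-decay, v(x) c'(x) = -k(x) c(x), the exact solution is c(x) = c0 exp(-∫₀ˣ k(s)/v(s) ds), which can be checked against a numerical march. The forms of v(x) and k(x) below are illustrative, not taken from the paper, and the transient, perturbation-based machinery is not reproduced:

```python
import numpy as np

# Illustrative coefficient choices (not from the paper):
v = lambda x: 1.0 + 0.5 * x            # spatially variable flow velocity
k = lambda x: 0.2 + 0.1 * np.sin(x)    # spatially variable decay rate

c0, L, n = 1.0, 10.0, 20_000
x = np.linspace(0.0, L, n + 1)
h = L / n

# Closed form c(x) = c0 * exp(-cumulative integral of k/v), via trapezoids
f = k(x) / v(x)
integral = np.concatenate([[0.0], np.cumsum((f[1:] + f[:-1]) * h / 2)])
c_exact = c0 * np.exp(-integral)

# Explicit first-order march of v c' = -k c for comparison
c_num = np.empty_like(x)
c_num[0] = c0
for i in range(n):
    c_num[i + 1] = c_num[i] * (1.0 - h * k(x[i]) / v(x[i]))

print(round(float(np.max(np.abs(c_num - c_exact))), 5))
```

Swapping in different v(x), k(x), or initial conditions, as the paper's symbolic worksheet allows, only changes the integrand in the exponent.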

13. A study to define and verify a model of interactive-constructive elementary school science teaching

Science.gov (United States)

Henriques, Laura

This study took place within a four-year systemic reform effort collaboratively undertaken by the Science Education Center at the University of Iowa and a local school district. Key features of the inservice project included the use of children's literature as a springboard into inquiry-based science investigations, activities to increase parents' involvement in children's science learning, and extensive inservice opportunities for elementary teachers to increase content knowledge and content-pedagogical knowledge. The overarching goal of this elementary science teacher enhancement project was to move teachers towards an interactive-constructivist model of teaching and learning. This study had three components. The first was the definition of the prototype teacher indicated by the project's goals and supported by science education research. The second involved the generation of a model to show relationships between teacher-generated products, demographics and their subsequent teaching behaviors. The third involved the verification of the hypothesized model using data collected on 15 original participants. Demographic information, survey responses, and interview and written responses to scenarios were among the data collected as source variables. These were scored using a rubric designed to measure constructivist practices in science teaching. Videotapes of science teaching and revised science curricula were collected as downstream variables and scored using the ESTEEM observational rubric and a rubric developed for the project. Results indicate that newer teachers were more likely to implement features of the project. Those teachers who were philosophically aligned with project goals before project involvement were also more likely to implement features of the project. Other associations between reported beliefs, planning and classroom implementations were not confirmed by these data. Data show that teachers reported higher levels of implementation than their

14. Modeling heart rate variability including the effect of sleep stages

Science.gov (United States)

Soliński, Mateusz; Gierałtowski, Jan; Żebrowski, Jan

2016-02-01

We propose a model for the heart rate variability (HRV) of a healthy individual during sleep, with the assumption that heart rate variability is predominantly a random process. Autonomic nervous system activity has different properties during different sleep stages, and this affects many physiological systems including the cardiovascular system. Different properties of HRV can be observed during each particular sleep stage. We believe that taking the sleep architecture into account is crucial for modeling human nighttime HRV. The stochastic model of HRV introduced by Kantelhardt et al. was used as the starting point. We studied the statistical properties of sleep in healthy adults, analyzing 30 polysomnographic recordings, which provided realistic information about sleep architecture. Next, we generated synthetic hypnograms and included them in the modeling of nighttime RR interval series. The results of standard linear HRV analysis and of nonlinear analysis (Shannon entropy, Poincaré plots, and multiscale multifractal analysis) show that, in comparison with real data, the HRV signals obtained from our model have very similar properties, in particular the multifractal characteristics at different time scales. The model described in this paper is discussed in the context of normal sleep. However, it is constructed so that it should also allow modeling of heart rate variability in sleep disorders. This possibility is briefly discussed.

15. Modeling mud flocculation using variable collision and breakup efficiencies

Science.gov (United States)

Strom, K.; Keyvani, A.

2013-12-01

Solution of the Winterwerp (1998) floc growth and breakup equation yields time-dependent median floc size as an outcome of collision-driven floc growth and shear-induced floc breakage. The formulation is attractive in that it is an ODE that yields fast solutions for median floc size and can be incorporated into sediment transport models. The Winterwerp (1998) floc size equation was used to model floc growth and breakup data from laboratory experiments conducted under both constant and variable turbulent shear rate (Keyvani 2013). The data showed that floc growth rate starts out very high and then reduces with size to asymptotically approach an equilibrium size. In modeling the data, the Winterwerp (1998) model and the Son and Hsu (2008) variant were found to be able to capture the initial fast growth phase and the equilibrium state, but were not able to capture the slow growth phase well. This resulted in flocs reaching the equilibrium state in the models much faster than in the experimental data. The objective of this work was to improve the ability of the general Winterwerp (1998) formulation to better capture the slow growth phase and more accurately predict the time to equilibrium. To do this, a full parameter sensitivity analysis was conducted using the Winterwerp (1998) model. Several modifications were tested, including the variable fractal dimension and yield strength extensions of Son and Hsu (2008, 2009). The best match with the in-house data, and data from the literature, was achieved using floc collision and breakup efficiency coefficients that decrease with floc size. The net result of the decrease in both of these coefficients is that floc growth slows without modification to the equilibrium size. Inclusion of these new functions allows for substantial improvement in modeling the growth phase of flocs in both steady and variable turbulence conditions. The improvement is particularly noticeable when modeling continual growth in a decaying turbulence field.
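
A minimal sketch of a Winterwerp-type growth/breakup ODE with size-dependent efficiencies, in the spirit of the modification described above. The coefficients and the decay form chosen for the efficiencies are illustrative assumptions, not the calibrated values:

```python
# Schematic growth/breakup ODE for median floc size D (microns), with
# collision efficiency alpha(D) and breakup efficiency beta(D) that decrease
# with floc size. All coefficients are illustrative, not calibrated values.

def floc_growth(D0, G, dt=0.5, t_end=3600.0,
                kA=1e-3, kB=1.6e-6, Dp=4.0, De=150.0):
    """Euler integration of dD/dt = growth - breakup at shear rate G (1/s)."""
    def alpha(D):                 # collision efficiency, decays with size
        return 1.0 / (1.0 + D / De)
    beta = alpha                  # same decay assumed for breakup efficiency
    D, t = D0, 0.0
    while t < t_end:
        growth = kA * alpha(D) * G * D
        breakup = kB * beta(D) * G ** 1.5 * D * (D - Dp)
        D = max(D + dt * (growth - breakup), Dp)  # never below primary size
        t += dt
    return D

D_eq = floc_growth(D0=20.0, G=10.0)   # grows toward an equilibrium size
```

Because alpha and beta share the same decay here, they cancel in the equilibrium balance kA*G = kB*G**1.5*(D - Dp): growth toward equilibrium is slowed while the equilibrium size itself is unchanged, which is exactly the behavior the abstract attributes to the new efficiency functions.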

16. Efficient family-based model checking via variability abstractions

DEFF Research Database (Denmark)

Dimovski, Aleksandar; Al-Sibahi, Ahmad Salim; Brabrand, Claus

2016-01-01

Many software systems are variational: they can be configured to meet diverse sets of requirements. They can produce a (potentially huge) number of related systems, known as products or variants, by systematically reusing common parts. For variational models (variational systems or families...... with the abstract model checking of the concrete high-level variational model. This allows the use of Spin with all its accumulated optimizations for efficient verification of variational models without any knowledge about variability. We have implemented the transformations in a prototype tool, and we illustrate...

17. Internal variability in a regional climate model over West Africa

Energy Technology Data Exchange (ETDEWEB)

Vanvyve, Emilie; Ypersele, Jean-Pascal van [Universite catholique de Louvain, Institut d' astronomie et de geophysique Georges Lemaitre, Louvain-la-Neuve (Belgium); Hall, Nicholas [Laboratoire d' Etudes en Geophysique et Oceanographie Spatiales/Centre National d' Etudes Spatiales, Toulouse Cedex 9 (France); Messager, Christophe [University of Leeds, Institute for Atmospheric Science, Environment, School of Earth and Environment, Leeds (United Kingdom); Leroux, Stephanie [Universite Joseph Fourier, Laboratoire d' etude des Transferts en Hydrologie et Environnement, BP53, Grenoble Cedex 9 (France)

2008-02-15

Sensitivity studies with regional climate models are often performed on the basis of a few simulations for which the difference is analysed and the statistical significance is often taken for granted. In this study we present some simple measures of the confidence limits for these types of experiments by analysing the internal variability of a regional climate model run over West Africa. Two 1-year long simulations, differing only in their initial conditions, are compared. The difference between the two runs gives a measure of the internal variability of the model and an indication of which timescales are reliable for analysis. The results are analysed for a range of timescales and spatial scales, and quantitative measures of the confidence limits for regional model simulations are diagnosed for a selection of study areas for rainfall, low level temperature and wind. As the averaging period or spatial scale is increased, the signal due to internal variability gets smaller and confidence in the simulations increases. This occurs more rapidly for variations in precipitation, which appear essentially random, than for dynamical variables, which show some organisation on larger scales. (orig.)
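
A quick numerical illustration of why confidence grows with the averaging period: treating internal variability as noise about a shared seasonal signal (a toy stand-in for the two regional-model runs), the RMS difference between runs shrinks as daily values are aggregated to monthly means.

```python
import numpy as np

rng = np.random.default_rng(42)
days = 360
signal = 5.0 + 2.0 * np.sin(2 * np.pi * np.arange(days) / 360)  # seasonal cycle
run1 = signal + rng.normal(0.0, 1.0, days)   # run differing only in noise
run2 = signal + rng.normal(0.0, 1.0, days)   # (a stand-in for initial conditions)

def rms_diff(a, b, window):
    """RMS of the difference between the two runs after window-day averaging."""
    n = len(a) // window
    am = a[:n * window].reshape(n, window).mean(axis=1)
    bm = b[:n * window].reshape(n, window).mean(axis=1)
    return np.sqrt(np.mean((am - bm) ** 2))

daily = rms_diff(run1, run2, 1)
monthly = rms_diff(run1, run2, 30)   # much smaller than the daily value
```

With uncorrelated noise the internal-variability signal drops roughly as 1/sqrt(N) with the averaging window N, mirroring the paper's finding that confidence increases with averaging period, and fastest for essentially random fields like precipitation.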

18. Viscous cosmological models with a variable cosmological term ...

African Journals Online (AJOL)

Einstein's field equations for a Friedmann-Lemaître-Robertson-Walker universe filled with a dissipative fluid with a variable cosmological term Λ described by the full Israel-Stewart theory are considered. General solutions to the field equations for the flat case have been obtained. The solution corresponds to the dust free model ...

19. Appraisal and Reliability of Variable Engagement Model Prediction ...

African Journals Online (AJOL)

The variable engagement model based on the stress - crack opening displacement relationship and, which describes the behaviour of randomly oriented steel fibres composite subjected to uniaxial tension has been evaluated so as to determine the safety indices associated when the fibres are subjected to pullout and with ...

20. Rose bush leaf and internode expansion dynamics: analysis and development of a model capturing interplant variability

Directory of Open Access Journals (Sweden)

Sabine eDemotes-Mainard

2013-10-01

Full Text Available Bush rose architecture, among other factors, such as plant health, determines plant visual quality. The commercial product is the individual plant and interplant variability may be high within a crop. Thus, both mean plant architecture and interplant variability should be studied. Expansion is an important feature of architecture, but it has been little studied at the level of individual organs in bush roses. We investigated the expansion kinetics of primary shoot organs, to develop a model reproducing the organ expansion of real crops from non-destructive input variables. We took interplant variability in expansion kinetics and the model’s ability to simulate this variability into account. Changes in leaflet and internode dimensions over thermal time were recorded for primary shoot expansion, on 83 plants from three crops grown in different climatic conditions and densities. An empirical model was developed to reproduce organ expansion kinetics for individual plants of a real crop of bush rose primary shoots. Leaflet or internode length was simulated as a logistic function of thermal time. The model was evaluated by cross-validation. We found that differences in leaflet or internode expansion kinetics between phytomer positions and between plants at a given phytomer position were due mostly to large differences in time of organ expansion and expansion rate, rather than differences in expansion duration. Thus, in the model, the parameters linked to expansion duration were predicted by values common to all plants, whereas variability in final size and organ expansion time was captured by input data. The model accurately simulated leaflet and internode expansion for individual plants (RMSEP = 7.3% and 10.2% of final length, respectively). Thus, this study defines the measurements required to simulate expansion and provides the first model simulating organ expansion in rose bush to capture interplant variability.

1. A metric for attributing variability in modelled streamflows

Science.gov (United States)

Shoaib, Syed Abu; Marshall, Lucy; Sharma, Ashish

2016-10-01

Significant gaps in our present understanding of hydrological systems lead to enhanced uncertainty in key modelling decisions. This study proposes a method, namely "Quantile Flow Deviation (QFD)", for the attribution of forecast variability to different sources across different streamflow regimes. By using a quantile-based metric, we can assess the change in uncertainty across individual percentiles, thereby allowing uncertainty to be expressed as a function of magnitude and time. As a result, one can address selective sources of uncertainty depending on whether low or high flows (say) are of interest. By way of a case study, we demonstrate the usefulness of the approach for estimating the relative importance of model parameter identification, objective functions and model structures as sources of streamflow forecast uncertainty. We use FUSE (Framework for Understanding Structural Errors) to implement our methods, allowing selection of multiple different model structures. Cross-catchment comparison is done for two different catchments: Leaf River in Mississippi, USA and Bass River in Victoria, Australia. Two different approaches to parameter estimation are presented that demonstrate the statistic: one based on GLUE, the other based on optimization. The results presented in this study suggest that the determination of the model structure with the design catchment should be given priority but that objective function selection with parameter identifiability can lead to significant variability in results. By examining the QFD across multiple flow quantiles, the ability of certain models and optimization routines to constrain variability for different flow conditions is demonstrated.
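
A schematic version of a quantile-wise deviation metric in the spirit of the QFD: for each flow quantile, measure the spread across an ensemble of simulated series. The exact definition used in the paper may differ; this sketch simply uses the standard deviation across members of each quantile of their flow-duration curves.

```python
import numpy as np

def quantile_flow_deviation(ensemble, quantiles):
    """Spread across an ensemble of simulated flows, per flow quantile.

    ensemble: 2-D array (n_members, n_timesteps) of simulated streamflow.
    Returns one deviation value per requested quantile -- a schematic
    stand-in for the QFD, not the paper's exact formula.
    """
    fdc = np.quantile(ensemble, quantiles, axis=1)  # (n_quantiles, n_members)
    return fdc.std(axis=1)

rng = np.random.default_rng(1)
base = rng.lognormal(mean=1.0, sigma=1.0, size=1000)   # synthetic "flows"
# three "model structures" differing by a multiplicative bias
ens = np.stack([base * f for f in (0.8, 1.0, 1.2)])
qfd = quantile_flow_deviation(ens, [0.1, 0.5, 0.9])
```

Expressing the deviation per quantile is what lets uncertainty be attributed separately to low-flow and high-flow regimes, as the abstract emphasises: here the multiplicative ensemble spread grows with flow magnitude, so the high-flow quantile shows the largest deviation.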

2. Defining Essential Biodiversity Variables (EBVs) as a contribution to Essential Ocean Variables (EOVs): A Core Task of the Marine Biodiversity Observation Network (MBON) to Accelerate Integration of Biological Observations in the Global Ocean Observing System (GOOS)

Science.gov (United States)

Pearlman, J.; Muller-Karger, F. E.; Sousa Pinto, I.; Costello, M. J.; Duffy, J. E.; Appeltans, W.; Fischer, A. S.; Canonico, G.; Klein, E.; Obura, D.; Montes, E.; Miloslavich, P.; Howard, M.

2017-12-01

The Marine Biodiversity Observation Network (MBON) is a networking effort under the umbrella of the Group on Earth Observations Biodiversity Observation Network (GEO BON). The objective of the MBON is to link existing groups engaged in ocean observation and help define practical indices to deploy in an operational manner to track changes in the number of marine species, the abundance and biomass of marine organisms, the diverse interactions between organisms and the environment, and the variability and change of specific habitats of interest. MBON serves as the biodiversity arm of Blue Planet, the initiative of the Group on Earth Observations (GEO) for the benefit of society. The Global Ocean Observing System (GOOS) was established under the auspices of the Intergovernmental Oceanographic Commission (IOC) in 1991 to organize international ocean observing efforts. The mission of the GOOS is to support monitoring to improve the management of marine and coastal ecosystems and resources, and to enable scientific research. GOOS is engaged in a continuing, rigorous process of identifying Essential Ocean Variables (EOVs). MBON is working with GOOS and the Ocean Biogeographic Information System (OBIS, also under the IOC) to define Essential Biodiversity Variables (EBVs) as those Essential Ocean Variables (EOVs) that have explicit taxonomic records associated with them. For practical purposes, EBVs are a subset of the EOVs. The focus is to promote the integration of biological EOVs including EBVs into the existing and planned national and international ocean observing systems. The definition avoids a proliferation of 'essential' variables across multiple organizations. MBON will continue to advance practical and wide use of EBVs and related EOVs. This is an effective way to contribute to several UN assessments (e.g., from IPBES, IPCC, and the World Ocean Assessment under the UN Regular Process), UN Sustainable Development Goals, and to address targets and goals defined under

3. Sparse modeling of spatial environmental variables associated with asthma.

Science.gov (United States)

Chang, Timothy S; Gangnon, Ronald E; David Page, C; Buckingham, William R; Tandias, Aman; Cowan, Kelly J; Tomasallo, Carrie D; Arndt, Brian G; Hanrahan, Lawrence P; Guilbert, Theresa W

2015-02-01

Geographically distributed environmental factors influence the burden of diseases such as asthma. Our objective was to identify sparse environmental variables associated with asthma diagnosis gathered from a large electronic health record (EHR) dataset while controlling for spatial variation. An EHR dataset from the University of Wisconsin's Family Medicine, Internal Medicine and Pediatrics Departments was obtained for 199,220 patients aged 5-50 years over a three-year period. Each patient's home address was geocoded to one of 3456 geographic census block groups. Over one thousand block group variables were obtained from a commercial database. We developed a Sparse Spatial Environmental Analysis (SASEA). Using this method, the environmental variables were first dimensionally reduced with sparse principal component analysis. Logistic thin plate regression spline modeling was then used to identify block group variables associated with asthma from sparse principal components. The addresses of patients from the EHR dataset were distributed throughout the majority of Wisconsin's geography. Logistic thin plate regression spline modeling captured spatial variation of asthma. Four sparse principal components identified via model selection consisted of food at home, dog ownership, household size, and disposable income variables. In rural areas, dog ownership and renter-occupied housing units from significant sparse principal components were associated with asthma. Our main contribution is the incorporation of sparsity in spatial modeling. SASEA sequentially added sparse principal components to logistic thin plate regression spline modeling. This method allowed association of geographically distributed environmental factors with asthma using EHR and environmental datasets. SASEA can be applied to other diseases with environmental risk factors. Copyright © 2014 Elsevier Inc. All rights reserved.

4. Analysis models for variables associated with breastfeeding duration

Directory of Open Access Journals (Sweden)

Edson Theodoro dos S. Neto

2013-09-01

Full Text Available OBJECTIVE To analyze the factors associated with breastfeeding duration by two statistical models. METHODS A population-based cohort study was conducted with 86 mothers and newborns from two areas primarily covered by the National Health System, with high rates of infant mortality in Vitória, Espírito Santo, Brazil. During 30 months, 67 (78%) children and mothers were visited seven times at home by trained interviewers, who filled out survey forms. Data on food and sucking habits, socioeconomic and maternal characteristics were collected. Variables were analyzed by Cox regression models, considering duration of breastfeeding as the dependent variable, and by logistic regression models (the dependent variable was the presence of a breastfeeding child at different post-natal ages. RESULTS In the logistic regression model, pacifier sucking (adjusted Odds Ratio: 3.4; 95%CI 1.2-9.55) and bottle feeding (adjusted Odds Ratio: 4.4; 95%CI 1.6-12.1) increased the chance of weaning a child before one year of age. Variables associated with breastfeeding duration in the Cox regression model were: pacifier sucking (adjusted Hazard Ratio 2.0; 95%CI 1.2-3.3) and bottle feeding (adjusted Hazard Ratio 2.0; 95%CI 1.2-3.5). However, protective factors (maternal age and family income) differed between both models. CONCLUSIONS Risk and protective factors associated with cessation of breastfeeding may be analyzed by different models of statistical regression. Cox regression models are adequate to analyze such factors in longitudinal studies.

5. Multiple Discrete Endogenous Variables in Weakly-Separable Triangular Models

Directory of Open Access Journals (Sweden)

Sung Jae Jun

2016-02-01

Full Text Available We consider a model in which an outcome depends on two discrete treatment variables, where one treatment is given before the other. We formulate a three-equation triangular system with weak separability conditions. Without assuming assignment is random, we establish the identification of an average structural function using two-step matching. We also consider decomposing the effect of the first treatment into direct and indirect effects, which are shown to be identified by the proposed methodology. We allow for both of the treatment variables to be non-binary and do not appeal to an identification-at-infinity argument.

6. Quantum ring models and action-angle variables

OpenAIRE

Bellucci, Stefano; Nersessian, Armen; Saghatelian, Armen; Yeghikyan, Vahagn

2010-01-01

We suggest to use the action-angle variables for the study of properties of (quasi)particles in quantum rings. For this purpose we present the action-angle variables for three two-dimensional singular oscillator systems. The first one is the usual (Euclidean) singular oscillator, which plays the role of the confinement potential for the quantum ring. We also propose two singular spherical oscillator models for the role of the confinement system for the spherical ring. The first one is based o...

7. Ensembling Variable Selectors by Stability Selection for the Cox Model

Directory of Open Access Journals (Sweden)

Qing-Yan Yin

2017-01-01

Full Text Available As a pivotal tool to build interpretive models, variable selection plays an increasingly important role in high-dimensional data analysis. In recent years, variable selection ensembles (VSEs) have gained much interest due to their many advantages. Stability selection (Meinshausen and Bühlmann, 2010), a VSE technique based on subsampling in combination with a base algorithm like lasso, is an effective method to control false discovery rate (FDR) and to improve selection accuracy in linear regression models. By adopting lasso as a base learner, we attempt to extend stability selection to handle variable selection problems in a Cox model. According to our experience, it is crucial to set the regularization region Λ in lasso and the parameter λmin properly so that stability selection can work well. To the best of our knowledge, however, there is no literature addressing this problem in an explicit way. Therefore, we first provide a detailed procedure to specify Λ and λmin. Then, some simulated and real-world data with various censoring rates are used to examine how well stability selection performs. It is also compared with several other variable selection approaches. Experimental results demonstrate that it achieves better or competitive performance in comparison with several other popular techniques.
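
The subsample-and-count core of stability selection can be sketched as follows. For a self-contained illustration the base learner here is simple correlation screening standing in for the lasso path of Meinshausen and Bühlmann (and for the Cox-lasso of the paper); the selection-frequency logic is the same.

```python
import numpy as np

def stability_selection(X, y, n_subsamples=100, threshold=0.6, k=5, seed=0):
    """Selection frequencies over random half-subsamples.

    Base learner: keep the k features with largest |correlation| with y
    (an illustrative stand-in for lasso). A feature is finally selected
    if it is picked in at least `threshold` of the subsamples.
    """
    rng = np.random.default_rng(seed)
    n, p = X.shape
    counts = np.zeros(p)
    for _ in range(n_subsamples):
        idx = rng.choice(n, size=n // 2, replace=False)
        Xs, ys = X[idx], y[idx]
        corr = np.abs(np.corrcoef(Xs.T, ys)[:-1, -1])  # feature-y correlations
        counts[np.argsort(corr)[-k:]] += 1
    freq = counts / n_subsamples
    return np.where(freq >= threshold)[0], freq

rng = np.random.default_rng(7)
n, p = 200, 20
X = rng.normal(size=(n, p))
y = 2.0 * X[:, 0] - 1.5 * X[:, 3] + rng.normal(size=n)  # true features: 0, 3
selected, freq = stability_selection(X, y)
```

Features with real signal survive almost every subsample, while noise features rotate in and out and stay below the threshold; that stability is what controls the false discovery rate.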

8. A Review of Variable Slicing in Fused Deposition Modeling

Science.gov (United States)

2017-06-01

The paper presents a literature survey in the field of fused deposition of plastic wires, especially on slicing and deposition using extrusion of thermoplastic wires. Various researchers working on the computation of deposition paths have applied their algorithms to variable slicing. In the study, a flowchart has also been proposed for the slicing and deposition process. An algorithm already developed by a previous researcher is implemented on the fused deposition modelling machine. To demonstrate the capabilities of the fused deposition modeling machine, a case study has been undertaken, using a manipulated G-code fed to the machine. Two types of slicing strategies, namely uniform slicing and variable slicing, have been evaluated. In uniform slicing, the slice thickness used for deposition varies from 0.1 to 0.4 mm. In variable slicing, the thickness varies from 0.1 mm in the polar region to 0.4 mm in the equatorial region. The time required and the number of slices needed to deposit a hemisphere of 20 mm diameter under uniform slicing have been compared with those using variable slicing.
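
The slice-count trade-off for the 20 mm hemisphere can be sketched as follows. The linear-in-height thickness schedule is one plausible scheme for illustration; the surveyed algorithms typically adapt thickness to the local surface slope (cusp-height criterion) instead.

```python
import math

R = 10.0  # hemisphere radius in mm (20 mm diameter)

def uniform_slices(t):
    """Number of slices at constant layer thickness t (mm)."""
    return math.ceil(R / t)

def variable_slices(t_min=0.1, t_max=0.4):
    """Slice from the equator (z = 0) upward, thinning toward the pole.

    Thickness is interpolated linearly in height between t_max at the
    equator and t_min at the pole -- an assumed schedule for illustration.
    """
    z, n = 0.0, 0
    while z < R:
        t = t_max - (t_max - t_min) * (z / R)
        z += t
        n += 1
    return n

n_fine = uniform_slices(0.1)    # 100 slices
n_coarse = uniform_slices(0.4)  # 25 slices
n_var = variable_slices()       # between the two extremes
```

Variable slicing needs far fewer layers than uniform 0.1 mm slicing while keeping thin layers near the pole, where a constant 0.4 mm layer would leave the worst staircase error.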

9. Hidden Markov latent variable models with multivariate longitudinal data.

Science.gov (United States)

Song, Xinyuan; Xia, Yemao; Zhu, Hongtu

2017-03-01

Cocaine addiction is chronic and persistent, and has become a major social and health problem in many countries. Existing studies have shown that cocaine addicts often undergo episodic periods of addiction to, moderate dependence on, or swearing off cocaine. Given its reversible feature, cocaine use can be formulated as a stochastic process that transits from one state to another, while the impacts of various factors, such as treatment received and individuals' psychological problems on cocaine use, may vary across states. This article develops a hidden Markov latent variable model to study multivariate longitudinal data concerning cocaine use from a California Civil Addict Program. The proposed model generalizes conventional latent variable models to allow bidirectional transition between cocaine-addiction states and conventional hidden Markov models to allow latent variables and their dynamic interrelationship. We develop a maximum-likelihood approach, along with a Monte Carlo expectation conditional maximization (MCECM) algorithm, to conduct parameter estimation. The asymptotic properties of the parameter estimates and statistics for testing the heterogeneity of model parameters are investigated. The finite sample performance of the proposed methodology is demonstrated by simulation studies. The application to cocaine use study provides insights into the prevention of cocaine use. © 2016, The International Biometric Society.

10. Relevance units latent variable model and nonlinear dimensionality reduction.

Science.gov (United States)

Gao, Junbin; Zhang, Jun; Tien, David

2010-01-01

A new dimensionality reduction method, called relevance units latent variable model (RULVM), is proposed in this paper. RULVM has a close link with the framework of Gaussian process latent variable model (GPLVM) and it originates from a recently developed sparse kernel model called relevance units machine (RUM). RUM follows the idea of relevance vector machine (RVM) under the Bayesian framework but releases the constraint that relevance vectors (RVs) have to be selected from the input vectors. RUM treats relevance units (RUs) as part of the parameters to be learned from the data. As a result, a RUM maintains all the advantages of RVM and offers superior sparsity. RULVM inherits the advantages of sparseness offered by the RUM, and the experimental results show that the RULVM algorithm possesses considerable computational advantages over the GPLVM algorithm.

11. Testing concordance of instrumental variable effects in generalized linear models with application to Mendelian randomization

Science.gov (United States)

Dai, James Y.; Chan, Kwun Chuen Gary; Hsu, Li

2014-01-01

Instrumental variable regression is one way to overcome unmeasured confounding and estimate causal effect in observational studies. Built on structural mean models, there has been considerable work recently on consistent estimation of causal relative risk and causal odds ratio. Such models can sometimes suffer from identification issues for weak instruments. This has hampered the applicability of Mendelian randomization analysis in genetic epidemiology. When there are multiple genetic variants available as instrumental variables, and causal effect is defined in a generalized linear model in the presence of unmeasured confounders, we propose to test concordance between instrumental variable effects on the intermediate exposure and instrumental variable effects on the disease outcome, as a means to test the causal effect. We show that a class of generalized least squares estimators provide valid and consistent tests of causality. For causal effect of a continuous exposure on a dichotomous outcome in logistic models, the proposed estimators are shown to be asymptotically conservative. When the disease outcome is rare, such estimators are consistent due to the log-linear approximation of the logistic function. Optimality of such estimators relative to the well-known two-stage least squares estimator and the double-logistic structural mean model is further discussed. PMID:24863158

12. Bayesian Variable Selection on Model Spaces Constrained by Heredity Conditions.

Science.gov (United States)

Taylor-Rodriguez, Daniel; Womack, Andrew; Bliznyuk, Nikolay

2016-01-01

This paper investigates Bayesian variable selection when there is a hierarchical dependence structure on the inclusion of predictors in the model. In particular, we study the type of dependence found in polynomial response surfaces of orders two and higher, whose model spaces are required to satisfy weak or strong heredity conditions. These conditions restrict the inclusion of higher-order terms depending upon the inclusion of lower-order parent terms. We develop classes of priors on the model space, investigate their theoretical and finite sample properties, and provide a Metropolis-Hastings algorithm for searching the space of models. The tools proposed allow fast and thorough exploration of model spaces that account for hierarchical polynomial structure in the predictors and provide control of the inclusion of false positives in high posterior probability models.

13. [Interaction between continuous variables in logistic regression model].

Science.gov (United States)

Qiu, Hong; Yu, Ignatius Tak-Sun; Tse, Lap Ah; Wang, Xiao-rong; Fu, Zhen-ming

2010-07-01

Rothman argued that interaction estimated as departure from additivity better reflected the biological interaction. In a logistic regression model, the product term reflects the interaction as departure from multiplicativity. So far, the literature on estimating interaction on an additive scale using logistic regression has focused only on two dichotomous factors. The objective of the present report was to provide a method to examine the interaction as departure from additivity between two continuous variables or between one continuous variable and one categorical variable. We used data from a lung cancer case-control study among males in Hong Kong as an example to illustrate the bootstrap re-sampling method for calculating the corresponding confidence intervals. Free software R (Version 2.8.1) was used to estimate interaction on the additive scale.
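
The additive-scale interaction (RERI) for continuous variables, with a bootstrap confidence interval, can be sketched as follows. The simulated data, the Newton-Raphson fitting routine, and the unit increments from a zero reference are all illustrative assumptions, not the paper's case-study setup:

```python
import numpy as np

def fit_logistic(X, y, iters=25):
    """Newton-Raphson fit of a logistic model; X includes an intercept column."""
    beta = np.zeros(X.shape[1])
    for _ in range(iters):
        p = 1.0 / (1.0 + np.exp(-X @ beta))
        H = X.T @ (X * (p * (1 - p))[:, None])   # observed information
        beta += np.linalg.solve(H, X.T @ (y - p))
    return beta

def reri(beta, d1, d2):
    """Relative excess risk due to interaction for increments d1, d2 from a
    zero reference, given logit p = b0 + b1*z1 + b2*z2 + b3*z1*z2
    (odds ratios used as risk-ratio approximations)."""
    or10 = np.exp(beta[1] * d1)
    or01 = np.exp(beta[2] * d2)
    or11 = np.exp(beta[1] * d1 + beta[2] * d2 + beta[3] * d1 * d2)
    return or11 - or10 - or01 + 1.0

rng = np.random.default_rng(3)
n = 2000
z1, z2 = rng.normal(size=n), rng.normal(size=n)
lin = -1.0 + 0.5 * z1 + 0.5 * z2 + 0.4 * z1 * z2
y = rng.binomial(1, 1.0 / (1.0 + np.exp(-lin)))
X = np.column_stack([np.ones(n), z1, z2, z1 * z2])
beta = fit_logistic(X, y)
r = reri(beta, 1.0, 1.0)

# bootstrap percentile CI, per the re-sampling approach described above
boot = [reri(fit_logistic(X[i], y[i]), 1.0, 1.0)
        for i in (rng.choice(n, n, replace=True) for _ in range(200))]
ci = np.percentile(boot, [2.5, 97.5])
```

RERI > 0 indicates super-additivity; because RERI is a nonlinear function of the coefficients, the bootstrap sidesteps the delta-method variance derivation.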

14. Habitat suitability index model for brook trout in streams of the Southern Blue Ridge Province: surrogate variables, model evaluation, and suggested improvements

Science.gov (United States)

Christoper J. Schmitt; A. Dennis Lemly; Parley V. Winger

1993-01-01

Data from several sources were collated and analyzed by correlation, regression, and principal components analysis to define surrogate variables for use in the brook trout (Salvelinus fontinalis) habitat suitability index (HSI) model, and to evaluate the applicability of the model for assessing habitat in high elevation streams of the southern Blue Ridge Province (...

15. Modeling temporal and spatial variability of crop yield

Science.gov (United States)

Bonetti, S.; Manoli, G.; Scudiero, E.; Morari, F.; Putti, M.; Teatini, P.

2014-12-01

In a world of increasing food insecurity, the development of modeling tools capable of supporting on-farm decision making processes is highly needed to formulate sustainable irrigation practices in order to preserve water resources while maintaining adequate crop yield. The design of these practices starts from the accurate modeling of soil-plant-atmosphere interaction. We present an innovative 3D Soil-Plant model that couples 3D hydrological soil dynamics with a mechanistic description of plant transpiration and photosynthesis, including a crop growth module. Because of its intrinsically three-dimensional nature, the model is able to capture spatial and temporal patterns of crop yield over large scales and under various climate and environmental factors. The model is applied to a 25 ha corn field in the Venice coastland, Italy, that has been continuously monitored from 2010 to 2012 in terms of both hydrological dynamics and yield mapping. The model results satisfactorily reproduce the large variability observed in maize yield (from 2 to 15 ton/ha). This variability is shown to be connected to the spatial heterogeneities of the farmland, which is characterized by several sandy paleo-channels crossing organic-rich silty soils. Salt contamination of soils and groundwater in a large portion of the area strongly affects the crop yield, especially outside the paleo-channels, where measured salt concentrations are lower than in the surroundings. The developed model includes a simplified description of the effects of salt concentration in soil water on transpiration. The results seem to capture accurately the effects of salt concentration and the variability of the climatic conditions that occurred during the three years of measurements. This innovative modeling framework paves the way to future large scale simulations of farmland dynamics.

16. An Atmospheric Variability Model for Venus Aerobraking Missions

Science.gov (United States)

Tolson, Robert T.; Prince, Jill L. H.; Konopliv, Alexander A.

2013-01-01

Aerobraking has proven to be an enabling technology for planetary missions to Mars and has been proposed to enable low cost missions to Venus. Aerobraking saves a significant amount of propulsion fuel mass by exploiting atmospheric drag to reduce the eccentricity of the initial orbit. The solar arrays have been used as the primary drag surface and only minor modifications have been made in the vehicle design to accommodate the relatively modest aerothermal loads. However, if atmospheric density is highly variable from orbit to orbit, the mission must either accept higher aerothermal risk, a slower pace for aerobraking, or a tighter corridor likely with increased propulsive cost. Hence, knowledge of atmospheric variability is of great interest for the design of aerobraking missions. The first planetary aerobraking was at Venus during the Magellan mission. After the primary Magellan science mission was completed, aerobraking was used to provide a more circular orbit to enhance gravity field recovery. Magellan aerobraking took place between local solar times of 1100 and 1800 hrs, and it was found that the Venusian atmospheric density during the aerobraking phase had less than 10% 1-sigma orbit-to-orbit variability. On the other hand, at some latitudes and seasons, Martian variability can be as high as 40% 1-sigma. From both the MGN and PVO missions it was known that the atmosphere, above aerobraking altitudes, showed greater variability at night, but this variability was never quantified in a systematic manner. This paper proposes a model for atmospheric variability that can be used for aerobraking mission design until more complete data sets become available.
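
The operational significance of the 10% (Venus) versus 40% (Mars) 1-sigma figures can be illustrated with a toy sampling model; the lognormal perturbation and the 1.3x-mean exceedance threshold are assumptions for illustration, not the paper's model.

```python
import numpy as np

def orbit_densities(rho_mean, sigma_frac, n_orbits, seed=0):
    """Orbit-to-orbit densities as a lognormal perturbation about the mean.

    sigma_frac is the 1-sigma relative variability (about 0.10 at Venus
    during Magellan aerobraking, up to 0.40 at some Martian latitudes and
    seasons). Parameters are chosen so the sample preserves the mean.
    """
    rng = np.random.default_rng(seed)
    s = np.sqrt(np.log(1.0 + sigma_frac ** 2))  # lognormal shape from the CV
    mu = np.log(rho_mean) - 0.5 * s ** 2        # mean-preserving location
    return rng.lognormal(mu, s, n_orbits)

venus = orbit_densities(1.0, 0.10, 5000)
mars = orbit_densities(1.0, 0.40, 5000)

# fraction of orbits whose density (hence dynamic pressure and heating)
# exceeds 1.3x the mean -- a proxy for corridor violations
venus_exceed = np.mean(venus > 1.3)
mars_exceed = np.mean(mars > 1.3)
```

With the same corridor, the high-variability atmosphere produces far more threshold exceedances, which is why larger orbit-to-orbit variability forces either a slower aerobraking pace or added propulsive margin.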

17. Multiscale thermohydrologic model: addressing variability and uncertainty at Yucca Mountain

International Nuclear Information System (INIS)

Buscheck, T; Rosenberg, N D; Gansemer, J D; Sun, Y

2000-01-01

Performance assessment and design evaluation require a modeling tool that simultaneously accounts for processes occurring at a scale of a few tens of centimeters around individual waste packages and emplacement drifts, and also for behavior at the scale of the mountain. Many processes and features must be considered, including non-isothermal multiphase flow in rock of variable saturation and thermal radiation in open cavities. Also, given the nature of the fractured rock at Yucca Mountain, a dual-permeability approach is needed to represent permeability. A monolithic numerical model with all these features requires too large a computational cost to be an effective simulation tool, one that is used to examine sensitivity to key model assumptions and parameters. We have developed a multi-scale modeling approach that effectively simulates 3D discrete-heat-source, mountain-scale thermohydrologic behavior at Yucca Mountain and captures the natural variability of the site consistent with what we know from site characterization and waste-package-to-waste-package variability in heat output. We describe this approach and present results examining the role of infiltration flux, the most important natural-system parameter with respect to how thermohydrologic behavior influences the performance of the repository.

18. A new approach for modelling variability in residential construction projects

Directory of Open Access Journals (Sweden)

2013-06-01

Full Text Available The construction industry is plagued by long cycle times caused by variability in the supply chain. Variations or undesirable situations are the result of factors such as non-standard practices, work site accidents, inclement weather conditions and faults in design. This paper uses a new approach for modelling variability in construction by linking relative variability indicators to processes. The mass homebuilding sector was chosen as the scope of the analysis because data are readily available. Numerous simulation experiments were designed by varying the size of capacity buffers in front of trade contractors, the availability of trade contractors, and the level of variability in homebuilding processes. The measurements were shown to lead to an accurate determination of relationships between these factors and production parameters. The variability indicator was found to dramatically affect tangible performance measures such as home completion rates. This study provides a basis for future analysis of the production homebuilding sector, which may lead to improvements in performance and faster product delivery to homebuyers.

20. Animal models of physiologic markers of male reproduction: genetically defined infertile mice

Energy Technology Data Exchange (ETDEWEB)

Chubb, C.

1987-10-01

The present report focuses on novel animal models of male infertility: genetically defined mice bearing single-gene mutations that induce infertility. The primary goal of the investigations was to identify the reproductive defects in these mutant mice. The phenotypic effects of the gene mutations were deciphered by comparing the mutant mice to their normal siblings. Initially, testicular steroidogenesis and spermatogenesis were investigated. The physiologic markers for testicular steroidogenesis were steroid secretion by testes perifused in vitro, seminal vesicle weight, and Leydig cell histology. Spermatogenesis was evaluated by the enumeration of homogenization-resistant sperm/spermatids in testes and by morphometric analyses of germ cells in the seminiferous epithelium. If testicular function appeared normal, the authors investigated the sexual behavior of the mice. The parameters of male sexual behavior that were quantified included mount latency, mount frequency, intromission latency, thrusts per intromission, ejaculation latency, and ejaculation duration. Females of pairs breeding under normal circumstances were monitored for the presence of vaginal plugs and pregnancies. The patency of the ejaculatory process was determined by quantifying sperm in the female reproductive tract after sexual behavior tests. Sperm function was studied by quantitatively determining sperm motility during videomicroscopic observation. Also, the ability of epididymal sperm to function within the uterine environment was analyzed by determining sperm capacity to initiate pregnancy after artificial insemination. Together, the experimental results permitted the grouping of the gene mutations into three general categories. The authors propose that the same biological markers used in the reported studies can be implemented in assessing the impact that environmental toxins may have on male reproduction.

1. Stochastic Actuarial Modelling of a Defined-Benefit Social Security Pension Scheme: An Analytical Approach

OpenAIRE

Iyer, Subramaniam

2017-01-01

Among the systems in place in different countries for the protection of the population against the long-term contingencies of old-age (or retirement), disability and death (or survivorship), defined-benefit social security pension schemes, i.e. social insurance pension schemes, by far predominate, despite the recent trend towards defined-contribution arrangements in social security reforms. Actuarial valuations of these schemes, unlike other branches of insurance, continue to be carried out a...

2. Classification criteria of syndromes by latent variable models

DEFF Research Database (Denmark)

Petersen, Janne

2010-01-01

The thesis has two parts; one clinical part: studying the dimensions of human immunodeficiency virus associated lipodystrophy syndrome (HALS) by latent class models, and a more statistical part: investigating how to predict scores of latent variables so these can be used in subsequent regression...... analyses. Part 1: HALS involves different phenotypic changes of peripheral lipoatrophy and central lipohypertrophy.  There are several different definitions of HALS and no consensus on the number of phenotypes. Many of the definitions consist of counting fulfilled criteria on markers and do not include...... patient's characteristics. These methods may erroneously reduce multiplicity either by combining markers of different phenotypes or by mixing HALS with other processes such as aging. Latent class models identify homogeneous groups of patients based on sets of variables, for example symptoms. As no gold...

3. Explicit estimating equations for semiparametric generalized linear latent variable models

KAUST Repository

Ma, Yanyuan

2010-07-05

We study generalized linear latent variable models without requiring a distributional assumption of the latent variables. Using a geometric approach, we derive consistent semiparametric estimators. We demonstrate that these models have a property which is similar to that of a sufficient complete statistic, which enables us to simplify the estimating procedure and explicitly to formulate the semiparametric estimating equations. We further show that the explicit estimators have the usual root n consistency and asymptotic normality. We explain the computational implementation of our method and illustrate the numerical performance of the estimators in finite sample situations via extensive simulation studies. The advantage of our estimators over the existing likelihood approach is also shown via numerical comparison. We employ the method to analyse a real data example from economics. © 2010 Royal Statistical Society.

4. Variable fused deposition modelling - concept design and tool path generation

OpenAIRE

2011-01-01

Current Fused Deposition Modelling (FDM) techniques use fixed diameter nozzles to deposit a filament of plastic layer by layer. The consequence is that the same small nozzle, essential for fine details, is also used to fill in relatively large volumes. In practice a Pareto-optimal nozzle diameter is chosen that attempts to maximise resolution while minimising build time. This paper introduces a concept for adapting an additive manufacturing system, which exploits a variable diameter nozzle fo...

5. Binary system parameters and the hibernation model of cataclysmic variables

International Nuclear Information System (INIS)

Livio, M.; Shara, M.M.; Space Telescope Science Institute, Baltimore, MD)

1987-01-01

The hibernation model, in which nova systems spend most of the time between eruptions in a state of low mass transfer rate, is examined. The binary systems more likely to undergo hibernation are determined. The predictions of the hibernation scenario are shown to be consistent with available observational data. It is shown how the hibernation scenario provides links between classical novae, dwarf novae, and novalike variables, all of which represent different stages in the cyclic evolution of the same systems. 72 references

6. Latent variables and structural equation models for longitudinal relationships: an illustration in nutritional epidemiology

Directory of Open Access Journals (Sweden)

Basdevant Arnaud

2010-04-01

Full Text Available Abstract Background The use of structural equation modeling and latent variables remains uncommon in epidemiology despite its potential usefulness. The latter was illustrated by studying cross-sectional and longitudinal relationships between eating behavior and adiposity, using four different indicators of fat mass. Methods Using data from a longitudinal community-based study, we fitted structural equation models including two latent variables (respectively baseline adiposity and adiposity change after 2 years of follow-up), each being defined by the four following anthropometric measurements (respectively by their changes): body mass index, waist circumference, skinfold thickness and percent body fat. Latent adiposity variables were hypothesized to depend on a cognitive restraint score, calculated from answers to an eating-behavior questionnaire (TFEQ-18), either cross-sectionally or longitudinally. Results We found that high baseline adiposity was associated with a 2-year increase of the cognitive restraint score, and no convincing relationship between baseline cognitive restraint and 2-year adiposity change could be established. Conclusions The latent variable modeling approach enabled presentation of synthetic results rather than separate regression models and detailed analysis of the causal effects of interest. In the general population, restrained eating appears to be an adaptive response of subjects prone to gaining weight rather than a risk factor for fat-mass increase.

7. Computational Fluid Dynamics Modeling of a Supersonic Nozzle and Integration into a Variable Cycle Engine Model

Science.gov (United States)

Connolly, Joseph W.; Friedlander, David; Kopasakis, George

2015-01-01

This paper covers the development of an integrated nonlinear dynamic simulation for a variable cycle turbofan engine and nozzle that can be integrated with an overall vehicle Aero-Propulso-Servo-Elastic (APSE) model. A previously developed variable cycle turbofan engine model is used for this study and is enhanced here to include variable guide vanes allowing for operation across the supersonic flight regime. The primary focus of this study is to improve the fidelity of the model's thrust response by replacing the simple choked flow equation convergent-divergent nozzle model with a MacCormack method based quasi-1D model. The dynamic response of the nozzle model using the MacCormack method is verified by comparing it against a model of the nozzle using the conservation element/solution element method. A methodology is also presented for the integration of the MacCormack nozzle model with the variable cycle engine.
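The MacCormack method at the core of the quasi-1D nozzle model is a two-step predictor-corrector finite-difference scheme. As a minimal sketch it can be applied to the linear advection equation rather than the quasi-1D Euler equations used in the paper; the grid size, CFL number and initial pulse below are illustrative assumptions:

```python
import numpy as np

# MacCormack predictor-corrector applied to linear advection u_t + a*u_x = 0.
# A sketch of the two-step scheme only; the engine model applies the same
# structure to the quasi-1D Euler equations.
a, nx, cfl = 1.0, 200, 0.8
dx = 1.0 / nx
dt = cfl * dx / a
x = np.linspace(0.0, 1.0, nx, endpoint=False)
u = np.exp(-100.0 * (x - 0.3) ** 2)        # Gaussian pulse centred at x = 0.3

for _ in range(int(round(0.4 / dt))):      # advance to t ~ 0.4, periodic BCs
    up = u - a * dt / dx * (np.roll(u, -1) - u)               # predictor (forward)
    u = 0.5 * (u + up - a * dt / dx * (up - np.roll(up, 1)))  # corrector (backward)

print(x[np.argmax(u)])                     # pulse has advected toward x ~ 0.7
```

The predictor uses a forward difference and the corrector a backward difference on the predicted state; averaging the two recovers second-order accuracy in space and time.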

8. Models, measures, and methods: variability in aging research.

Science.gov (United States)

Miller, Edward Alan; Weissert, William G

2003-01-01

The purpose of this paper is to review the models and measurement strategies used in studies evaluating the predictors of nursing home placement, hospitalization, functional impairment and mortality. To do so we examine 167 multivariate equations abstracted from 78 longitudinal studies published between 1985 and 1998 that assess the risk factors of one or more adverse outcomes. We find that both comparatively straightforward concepts such as age and income and widely used scales such as activities of daily living and the short-portable mental status questionnaire display considerable variability in operationalization and coding. We also find that few researchers employ explicit conceptual models to assist with variable choice, while some predictors (demographics, physical and cognitive functioning) were studied much more frequently than others (service, market, and policy characteristics). Variability in measurement highlights the lack of standardization in this area of aging research and leaves room for improvements in validity and reliability. Limited use of conceptual models has led researchers to include some predictors in their analyses to the exclusion of others.

9. Can natural variability trigger effects on fish and fish habitat as defined in environment Canada's metal mining environmental effects monitoring program?

Science.gov (United States)

Mackey, Robin; Rees, Cassandra; Wells, Kelly; Pham, Samantha; England, Kent

2013-01-01

The Metal Mining Effluent Regulations (MMER) took effect in 2002 and require most metal mining operations in Canada to complete environmental effects monitoring (EEM) programs. An "effect" under the MMER EEM program is considered any positive or negative statistically significant difference in fish population, fish usability, or benthic invertebrate community EEM-defined endpoints. Two consecutive studies with the same statistically significant differences trigger more intensive monitoring, including the characterization of extent and magnitude and investigation of cause. Standard EEM study designs do not require multiple reference areas or preexposure sampling, thus results and conclusions about mine effects are highly contingent on the selection of a near perfect reference area and are at risk of falsely labeling natural variation as mine related "effects." A case study was completed to characterize the natural variability in EEM-defined endpoints during preexposure or baseline conditions. This involved completing a typical EEM study in future reference and exposure lakes surrounding a proposed uranium (U) mine in northern Saskatchewan, Canada. Moon Lake was sampled as the future exposure area as it is currently proposed to receive effluent from the U mine. Two reference areas were used: Slush Lake for both the fish population and benthic invertebrate community surveys and Lake C as a second reference area for the benthic invertebrate community survey. Moon Lake, Slush Lake, and Lake C are located in the same drainage basin in close proximity to one another. All 3 lakes contained similar water quality, fish communities, aquatic habitat, and a sediment composition largely comprised of fine-textured particles. The fish population survey consisted of a nonlethal northern pike (Esox lucius) and a lethal yellow perch (Perca flavescens) survey. A comparison of the 5 benthic invertebrate community effect endpoints, 4 nonlethal northern pike population effect endpoints

10. Defining clinical deterioration.

Science.gov (United States)

Jones, Daryl; Mitchell, Imogen; Hillman, Ken; Story, David

2013-08-01

11. Variable sound speed in interacting dark energy models

Science.gov (United States)

Linton, Mark S.; Pourtsidou, Alkistis; Crittenden, Robert; Maartens, Roy

2018-04-01

We consider a self-consistent and physical approach to interacting dark energy models described by a Lagrangian, and identify a new class of models with variable dark energy sound speed. We show that if the interaction between dark energy in the form of quintessence and cold dark matter is purely momentum exchange this generally leads to a dark energy sound speed that deviates from unity. Choosing a specific sub-case, we study its phenomenology by investigating the effects of the interaction on the cosmic microwave background and linear matter power spectrum. We also perform a global fitting of cosmological parameters using CMB data, and compare our findings to ΛCDM.

12. Modelling of W UMa-type variable stars

Directory of Open Access Journals (Sweden)

P. L. Skelton

2010-01-01

Full Text Available W Ursae Majoris (W UMa)-type variable stars are over-contact eclipsing binary stars. To understand how these systems form and evolve requires observations spanning many years, followed by detailed models of as many of them as possible. The All Sky Automated Survey (ASAS) has an extensive database of these stars. Using the ASAS V band photometric data, models of W UMa-type stars are being created to determine the parameters of these stars. This paper discusses the classification of eclipsing binary stars, the methods used to model them, as well as the results of the modelling of ASAS 120036–3915.6, an over-contact eclipsing binary star that appears to be changing its period.

13. Defining a conceptual model for the coastal aquifers of Mediterranean islands, an example from Corsica (France)

Science.gov (United States)

Santoni, Sebastien; Garel, Emilie; Huneau, Frederic

2016-04-01

A hydrochemical and isotope study was conducted to identify the flow paths, the recharge areas and the geochemical processes governing the evolution of groundwater in a Mediterranean carbonate coastal aquifer. The study is expected to improve the hydrogeological conceptual model, based on environmental tracer investigation tools, to characterise and quantify the aquifer system of Bonifacio. The groundwater resource is the only drinking water resource of southern Corsica, and the region faces high pressure on this resource during the tourist season (2,000,000 tourists per year). A well-documented description of the geology and structure of this basin was the starting point for a detailed hydrogeochemical and isotopic study at the aquifer scale. A hydrochemical (physico-chemical parameters, major ions) and isotope (δ2H, δ18O, 3H) survey of rainwater and groundwater has been carried out monthly for almost two years. A local meteoric water line has been defined, and marine, terrestrial and anthropogenic influences on the recharge water hydrochemistry have been described. Preferential recharge of rainfall during autumn/winter is observed, and a depletion in the isotopic signature for some groundwater samples suggests recharge at higher altitude from the surrounding granites. A modification of the input signal during infiltration through the unsaturated zone appears, and the groundwater hydrochemistry displays differential variations in time and space, with the presence of inertial water bodies mainly in the lower aquifer. In this context, CFCs (CFC-11, CFC-12, CFC-113) and SF6 were used to evaluate groundwater residence time. CFCs proved relevant despite the presence of a deep unsaturated zone, and the computed rate of groundwater renewal is pluriannual to multi-decadal. Natural SF6 was found in granites and has been used as a direct tracer of groundwater origin, highlighting its role in the aquifer lateral recharge. Strontium

14. Influence of rainfall spatial variability on rainfall-runoff modelling: Benefit of a simulation approach?

Science.gov (United States)

Emmanuel, I.; Andrieu, H.; Leblois, E.; Janey, N.; Payrastre, O.

2015-12-01

No consensus has yet been reached regarding the influence of rainfall spatial variability on runoff modelling at catchment outlets. To eliminate modelling and measurement errors, in addition to controlling rainfall variability and both the characteristics and hydrological behaviour of catchments, we propose to proceed by simulation. We have developed a simulation chain that combines a stream network model, a rainfall simulator and a distributed hydrological model (with four production functions and a distributed transfer function). Our objective here is to use this simulation chain as a simplified test bed in order to better understand the impact of the spatial variability of rainfall forcing. We applied the chain to contrasted situations involving catchments ranging from a few tens to several hundred km2, thus corresponding to urban and peri-urban catchments for which surface runoff constitutes the dominant process. The results obtained confirm that the proposed simulation approach is helpful to better understand the influence of rainfall spatial variability on the catchment response. We have shown that significant dispersion exists not only between the various simulation scenarios (defined by a rainfall configuration and a catchment configuration), but also within each simulation scenario. These results show that the organisation of rainfall over the study catchment during the study event plays an important role, leading us to examine rainfall variability indexes capable of summarising the influence of rainfall spatial organisation on the catchment response. Thanks to the simulation chain, we have tested the variability indexes of Zoccatelli et al. (2010) and improved them by proposing two other indexes.

15. Bootstrap model selection had similar performance for selecting authentic and noise variables compared to backward variable elimination: a simulation study.

Science.gov (United States)

Austin, Peter C

2008-10-01

Researchers have proposed using bootstrap resampling in conjunction with automated variable selection methods to identify predictors of an outcome and to develop parsimonious regression models. Using this method, multiple bootstrap samples are drawn from the original data set. Traditional backward variable elimination is used in each bootstrap sample, and the proportion of bootstrap samples in which each candidate variable is identified as an independent predictor of the outcome is determined. The performance of this method for identifying predictor variables has not been examined. Monte Carlo simulation methods were used to determine the ability of bootstrap model selection methods to correctly identify predictors of an outcome when those variables that are selected for inclusion in at least 50% of the bootstrap samples are included in the final regression model. We compared the performance of the bootstrap model selection method to that of conventional backward variable elimination. Bootstrap model selection tended to select the true regression model in approximately the same proportion of simulations as conventional backward variable elimination, and thus performed comparably to backward variable elimination for identifying the true predictors of a binary outcome.
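The procedure described above can be sketched as follows. The toy data set, the |t|-statistic cutoff standing in for a p-value threshold, and the omission of an intercept are illustrative assumptions, not details from the study:

```python
import numpy as np

def backward_eliminate(X, y, t_min=2.0):
    """Greedy backward elimination on OLS |t|-statistics (a simple stand-in
    for the p-value-based elimination used in statistical packages)."""
    cols = list(range(X.shape[1]))
    while cols:
        Xc = X[:, cols]
        xtx_inv = np.linalg.inv(Xc.T @ Xc)
        beta = xtx_inv @ Xc.T @ y
        resid = y - Xc @ beta
        sigma2 = resid @ resid / (len(y) - len(cols))
        t_abs = np.abs(beta) / np.sqrt(np.diag(xtx_inv) * sigma2)
        worst = int(np.argmin(t_abs))
        if t_abs[worst] >= t_min:        # everything left looks significant
            break
        cols.pop(worst)                  # drop the weakest variable
    return set(cols)

def bootstrap_select(X, y, n_boot=200, threshold=0.5, seed=0):
    """Keep variables selected in >= threshold of the bootstrap fits."""
    rng = np.random.default_rng(seed)
    counts = np.zeros(X.shape[1])
    for _ in range(n_boot):
        idx = rng.integers(0, len(y), len(y))   # resample with replacement
        for j in backward_eliminate(X[idx], y[idx]):
            counts[j] += 1
    return [j for j in range(X.shape[1]) if counts[j] / n_boot >= threshold]

# Toy data: y depends on columns 0 and 1 only; columns 2-4 are pure noise.
rng = np.random.default_rng(1)
X = rng.normal(size=(300, 5))
y = 2.0 * X[:, 0] - 1.5 * X[:, 1] + rng.normal(size=300)
print(bootstrap_select(X, y, n_boot=100))
```

With strong true effects the noise columns rarely survive a given bootstrap fit, so the 50% inclusion threshold screens them out.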

16. Geochemical Modeling Of F Area Seepage Basin Composition And Variability

International Nuclear Information System (INIS)

Millings, M.; Denham, M.; Looney, B.

2012-01-01

From the 1950s through 1989, the F Area Seepage Basins at the Savannah River Site (SRS) received low level radioactive wastes resulting from processing nuclear materials. Discharges of process wastes to the F Area Seepage Basins followed by subsequent mixing processes within the basins and eventual infiltration into the subsurface resulted in contamination of the underlying vadose zone and downgradient groundwater. For simulating contaminant behavior and subsurface transport, a quantitative understanding of the interrelated discharge-mixing-infiltration system along with the resulting chemistry of fluids entering the subsurface is needed. An example of this need emerged as the F Area Seepage Basins was selected as a key case study demonstration site for the Advanced Simulation Capability for Environmental Management (ASCEM) Program. This modeling evaluation explored the importance of the wide variability in bulk wastewater chemistry as it propagated through the basins. The results are intended to generally improve and refine the conceptualization of infiltration of chemical wastes from seepage basins receiving variable waste streams and to specifically support the ASCEM case study model for the F Area Seepage Basins. Specific goals of this work included: (1) develop a technically-based 'charge-balanced' nominal source term chemistry for water infiltrating into the subsurface during basin operations, (2) estimate the nature of short term and long term variability in infiltrating water to support scenario development for uncertainty quantification (i.e., UQ analysis), (3) identify key geochemical factors that control overall basin water chemistry and the projected variability/stability, and (4) link wastewater chemistry to the subsurface based on monitoring well data. Results from this study provide data and understanding that can be used in further modeling efforts of the F Area groundwater plume. As identified in this study, key geochemical factors affecting basin

17. State variables for modelling thermohaline flow in rocks

Energy Technology Data Exchange (ETDEWEB)

Kroehn, Klaus-Peter

2010-12-15

Modelling thermohaline flow can easily involve complex physical interactions even if only the basic processes occurring in density-driven flow and heat transport are considered. In the light of these complexities it is of vital importance to know the thermal and hydraulic parameters required for the model and their dependencies as precisely as possible. Also, for designing a numerical simulator it is useful to know the dependencies of the parameters on the primary variables temperature, pressure and salinity in order to select an appropriate underlying mathematical model. The present report thus compiles the mathematical formulations for the fluid parameters from the literature. For each parameter the origin, at least one meaningful figure, a comment where necessary and conclusions about the influence of each primary variable on the thermo-hydraulic parameters are given. All required coefficients and auxiliary functions including dimensions are listed, too. Simulation of heat transport also requires information about some properties of the porous medium, so some complementary information about the properties of rocks is given as well. In contrast to the properties of the pure substances considered for the fluid, the porous medium cannot be characterised as easily. Usually, the solids are a mixture of different materials with locally varying composition. Thus hints rather than exact values are provided for the rocks considered here. This compilation represents a complete set of mathematical formulations for fluid and solid properties for thermohaline modelling that can be used directly in composing a numerical simulator. (orig.)

18. (Re-)Defining the Egalitarian Partnership: A Model of Male-Female Interdependence.

Science.gov (United States)

Maier, J. Marcus

Researchers investigating marital egalitarianism have defined that concept in diverse ways, examining such factors as marital power, equal role sharing, and status and resource factors. Data from a series of 12 workshops held between 1982 and 1985 were used to develop an alternate means of conceptualizing an egalitarian partnership between spouses…

19. Supervised Gaussian process latent variable model for dimensionality reduction.

Science.gov (United States)

Gao, Xinbo; Wang, Xiumei; Tao, Dacheng; Li, Xuelong

2011-04-01

The Gaussian process latent variable model (GP-LVM) has been identified as an effective probabilistic approach for dimensionality reduction because it can obtain a low-dimensional manifold of a data set in an unsupervised fashion. However, the GP-LVM is insufficient for supervised learning tasks (e.g., classification and regression) because it ignores the class label information during dimensionality reduction. In this paper, a supervised GP-LVM is developed for supervised learning tasks, and the maximum a posteriori algorithm is introduced to estimate the positions of all samples in the latent variable space. We present experimental evidence suggesting that the supervised GP-LVM is able to use the class label information effectively, and thus it outperforms the GP-LVM and the discriminative extension of the GP-LVM consistently. A comparison with supervised classification methods, such as Gaussian process classification and support vector machines, is also given to illustrate the advantage of the proposed method.

20. Geospatial models of climatological variables distribution over Colombian territory

International Nuclear Information System (INIS)

Baron Leguizamon, Alicia

2003-01-01

Diverse studies have addressed the relation of air temperature and precipitation with altitude; nevertheless, they have been point or regional analyses, and none of them has become a tool that reproduces the spatial distribution of temperature or precipitation while taking orography into account and allowing data on these variables to be obtained for a given place. Based on this relation and on multi-annual monthly records of air temperature and precipitation, vertical temperature gradients were calculated and precipitation was related to altitude. Then, using the altitude data provided by the DEM, temperature and precipitation values were calculated, and those values were interpolated to generate monthly geospatial models.
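The DEM step of the approach described above can be sketched as follows; the lapse rate, sea-level reference temperature and DEM elevations are hypothetical placeholders, not values derived from the Colombian records:

```python
import numpy as np

# Hypothetical: apply a vertical temperature gradient (lapse rate) to DEM
# elevations to map a monthly mean temperature surface.
lapse_rate = -6.5e-3                      # degC per metre (typical tropospheric value)
t_ref = 24.0                              # assumed monthly mean at sea level, degC
dem = np.array([[5.0, 250.0],
                [1200.0, 2600.0]])        # DEM cell elevations in metres

temperature = t_ref + lapse_rate * dem    # one temperature value per DEM cell
print(temperature)
```

The same pattern extends to precipitation by replacing the linear lapse rate with the fitted precipitation-altitude relation.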

1. Niche variability and its consequences for species distribution modeling.

Science.gov (United States)

Michel, Matt J; Knouft, Jason H

2012-01-01

When species distribution models (SDMs) are used to predict how a species will respond to environmental change, an important assumption is that the environmental niche of the species is conserved over evolutionary time-scales. Empirical studies conducted at ecological time-scales, however, demonstrate that the niche of some species can vary in response to environmental change. We use habitat and locality data of five species of stream fishes collected across seasons to examine the effects of niche variability on the accuracy of projections from Maxent, a popular SDM. We then compare these predictions to those from an alternate method of creating SDM projections in which a transformation of the environmental data to similar scales is applied. The niche of each species varied to some degree in response to seasonal variation in environmental variables, with most species shifting habitat use in response to changes in canopy cover or flow rate. SDMs constructed from the original environmental data accurately predicted the occurrences of one species across all seasons and a subset of seasons for two other species. A similar result was found for SDMs constructed from the transformed environmental data. However, the transformed SDMs produced better models in ten of the 14 total SDMs, as judged by ratios of mean probability values at known presences to mean probability values at all other locations. Niche variability should be an important consideration when using SDMs to predict future distributions of species because of its prevalence among natural populations. The framework we present here may potentially improve these predictions by accounting for such variability.
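One plausible reading of the "transformation of the environmental data to similar scales" is z-score standardisation fitted on one season and applied to another; this is an assumption for illustration, since the exact transformation is not specified in the abstract:

```python
import numpy as np

# Hypothetical environmental layers: column 0 = temperature (degC),
# column 1 = flow rate (m/s) - very different numeric scales.
train = np.array([[22.0, 0.10],
                  [26.0, 0.40],
                  [24.0, 0.25]])          # training-season samples
project = np.array([[12.0, 0.80],
                    [15.0, 0.60]])        # projection-season samples

mu, sigma = train.mean(axis=0), train.std(axis=0)
train_z = (train - mu) / sigma            # both columns now on comparable scales
project_z = (project - mu) / sigma        # projected with the *training* scaler
print(train_z.round(3))
```

Fitting the scaler on the training season and reusing it for projection keeps seasonal shifts visible as shifted z-scores rather than hiding them.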

2. Niche variability and its consequences for species distribution modeling.

Directory of Open Access Journals (Sweden)

Matt J Michel

Full Text Available When species distribution models (SDMs) are used to predict how a species will respond to environmental change, an important assumption is that the environmental niche of the species is conserved over evolutionary time-scales. Empirical studies conducted at ecological time-scales, however, demonstrate that the niche of some species can vary in response to environmental change. We use habitat and locality data of five species of stream fishes collected across seasons to examine the effects of niche variability on the accuracy of projections from Maxent, a popular SDM. We then compare these predictions to those from an alternate method of creating SDM projections in which a transformation of the environmental data to similar scales is applied. The niche of each species varied to some degree in response to seasonal variation in environmental variables, with most species shifting habitat use in response to changes in canopy cover or flow rate. SDMs constructed from the original environmental data accurately predicted the occurrences of one species across all seasons and a subset of seasons for two other species. A similar result was found for SDMs constructed from the transformed environmental data. However, the transformed SDMs produced better models in ten of the 14 total SDMs, as judged by ratios of mean probability values at known presences to mean probability values at all other locations. Niche variability should be an important consideration when using SDMs to predict future distributions of species because of its prevalence among natural populations. The framework we present here may potentially improve these predictions by accounting for such variability.

3. Initial CGE Model Results Summary Exogenous and Endogenous Variables Tests

Energy Technology Data Exchange (ETDEWEB)

Edwards, Brian Keith [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Boero, Riccardo [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Rivera, Michael Kelly [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)

2017-08-07

The following discussion presents initial results of tests of the most recent version of the National Infrastructure Simulation and Analysis Center Dynamic Computable General Equilibrium (CGE) model developed by Los Alamos National Laboratory (LANL). The intent of this effort is to test and assess the model's behavioral properties. The tests evaluated whether the predicted impacts are reasonable from a qualitative perspective, that is, whether a predicted change, be it an increase or decrease in other model variables, is consistent with prior economic intuition and expectations about the predicted change. One of the purposes of this effort is to determine whether model changes are needed in order to improve its behavior qualitatively and quantitatively.

4. Modelling bulk canopy resistance from climatic variables for evapotranspiration estimation

Science.gov (United States)

Perez, P. J.; Martinez-Cob, A.; Lecina, S.; Castellvi, F.; Villalobos, F. J.

2003-04-01

Evapotranspiration is a component of the hydrological cycle whose accurate computation is needed for an adequate management of water resources. In particular, a high level of accuracy in crop evapotranspiration estimation can represent an important saving of economic and water resources in the planning and management of irrigated areas. In the evapotranspiration process, bulk canopy resistance (r_c) is a primary factor and its correct modelling remains an important problem in the Penman-Monteith (PM) method, not only for tall crops but also for medium-height and short crops under water stress. In this work, an alternative approach for modelling canopy resistance is presented against the PM method with constant canopy resistance. Variable r_c values are computed as a function of a climatic resistance and compared with two other models, those of Katerji and Perrier and of Todorovic. Hourly evapotranspiration values (ET_o) over grass were obtained with a weighing lysimeter and an eddy covariance system in the Ebro and Guadalquivir valleys (Spain), respectively. The main objective is to evaluate whether the use of variable rather than fixed r_c values would improve the ET_o estimates obtained by applying the PM equation under the semiarid conditions of the two sites, where evaporative demand is high, particularly during summer.
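The contrast between a fixed and a climate-driven canopy resistance can be sketched numerically. The snippet below is a hedged illustration, not the study's implementation: it evaluates the PM combination equation with a fixed r_c and with a variable r_c obtained from a Katerji-Perrier-style linear closure on a climatic resistance; all numerical inputs and the closure coefficients a and b are illustrative assumptions.

```python
# Hedged sketch: Penman-Monteith (PM) latent heat flux with a fixed
# versus a climate-driven bulk canopy resistance r_c.
# All values and the r_c closure coefficients are illustrative assumptions.

def penman_monteith(Rn, G, delta, gamma, rho_cp, vpd, r_a, r_c):
    """Latent heat flux lambda*E (W/m^2) from the PM combination equation."""
    num = delta * (Rn - G) + rho_cp * vpd / r_a
    den = delta + gamma * (1.0 + r_c / r_a)
    return num / den

def climatic_resistance(delta, gamma, rho_cp, vpd, Rn, G):
    """Climatic resistance r* (s/m), a common intermediate quantity (assumed form)."""
    return (delta + gamma) / (delta * gamma) * rho_cp * vpd / (Rn - G)

# Typical midday values over irrigated grass (assumed):
Rn, G = 500.0, 50.0          # net radiation, soil heat flux (W/m^2)
delta, gamma = 0.18, 0.066   # saturation-vapour-pressure slope, psychrometric constant (kPa/degC)
rho_cp = 1216.0              # air density x specific heat (J m^-3 K^-1)
vpd = 2.0                    # vapour pressure deficit (kPa)
r_a = 50.0                   # aerodynamic resistance (s/m)

# Katerji-Perrier-type linear closure r_c = a*r_star + b*r_a
# (a and b are hypothetical calibration constants)
a, b = 0.5, 1.0
r_star = climatic_resistance(delta, gamma, rho_cp, vpd, Rn, G)
r_c_var = a * r_star + b * r_a

le_fixed = penman_monteith(Rn, G, delta, gamma, rho_cp, vpd, r_a, r_c=70.0)
le_var = penman_monteith(Rn, G, delta, gamma, rho_cp, vpd, r_a, r_c=r_c_var)
```

With these illustrative inputs the climate-driven r_c exceeds the fixed 70 s/m value, and the estimated latent heat flux is correspondingly lower.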

5. Modeling intraindividual variability with repeated measures data methods and applications

CERN Document Server

Hershberger, Scott L

2013-01-01

This book examines how individuals behave across time and to what degree that behavior changes, fluctuates, or remains stable. It features the most current methods on modeling repeated measures data as reported by a distinguished group of experts in the field. The goal is to make the latest techniques used to assess intraindividual variability accessible to a wide range of researchers. Each chapter is written in a "user-friendly" style such that even the "novice" data analyst can easily apply the techniques. Each chapter features: a minimum discussion of mathematical detail; an empirical examp

6. Context Tree Estimation in Variable Length Hidden Markov Models

OpenAIRE

Dumont, Thierry

2011-01-01

We address the issue of context tree estimation in variable length hidden Markov models. We propose an estimator of the context tree of the hidden Markov process which needs no prior upper bound on the depth of the context tree. We prove that the estimator is strongly consistent. This uses information-theoretic mixture inequalities in the spirit of Finesso and Lorenzo (Consistent estimation of the order for Markov and hidden Markov chains, 1990) and E. Gassiat and S. Boucheron (Optimal error exp...

7. Incompatible quantum measurements admitting a local-hidden-variable model

Science.gov (United States)

Quintino, Marco Túlio; Bowles, Joseph; Hirsch, Flavien; Brunner, Nicolas

2016-05-01

The observation of quantum nonlocality, i.e., quantum correlations violating a Bell inequality, implies the use of incompatible local quantum measurements. Here we consider the converse question. That is, can any set of incompatible measurements be used in order to demonstrate Bell inequality violation? Our main result is to construct a local hidden variable model for an incompatible set of qubit measurements. Specifically, we show that if Alice uses this set of measurements, then for any possible shared entangled state and any possible dichotomic measurements performed by Bob, the resulting statistics are local. This represents significant progress towards proving that measurement incompatibility does not imply Bell nonlocality in general.

8. Estimation and variable selection for generalized additive partial linear models

KAUST Repository

Wang, Li

2011-08-01

We study generalized additive partial linear models, proposing the use of polynomial spline smoothing for estimation of nonparametric functions, and deriving quasi-likelihood based estimators for the linear parameters. We establish asymptotic normality for the estimators of the parametric components. The procedure avoids solving large systems of equations as in kernel-based procedures and thus results in gains in computational simplicity. We further develop a class of variable selection procedures for the linear parameters by employing a nonconcave penalized quasi-likelihood, which is shown to have an asymptotic oracle property. Monte Carlo simulations and an empirical example are presented for illustration. © Institute of Mathematical Statistics, 2011.
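For a Gaussian response, the estimation strategy described above, spline smoothing for the nonparametric part and a quasi-likelihood fit for the linear parameters, reduces to ordinary least squares on an augmented design matrix. The snippet below is a minimal sketch under that assumption (simulated data, a truncated-power cubic spline basis); it omits the paper's penalized variable-selection step.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 500
x = rng.normal(size=(n, 2))        # covariates entering linearly
t = rng.uniform(0, 1, size=n)      # covariate entering nonparametrically
beta_true = np.array([1.5, -2.0])
y = x @ beta_true + np.sin(2 * np.pi * t) + rng.normal(scale=0.3, size=n)

# Cubic truncated-power spline basis for the nonparametric component,
# centered so the intercept stays identifiable
knots = np.linspace(0, 1, 7)[1:-1]
B = np.column_stack([t ** p for p in (1, 2, 3)] +
                    [np.clip(t - k, 0.0, None) ** 3 for k in knots])
B -= B.mean(axis=0)

# For a Gaussian response the quasi-likelihood fit is plain least squares
D = np.column_stack([np.ones(n), x, B])
coef, *_ = np.linalg.lstsq(D, y, rcond=None)
beta_hat = coef[1:3]               # estimates of the linear parameters
```

Because the spline basis absorbs the smooth component, the linear coefficients are recovered without specifying the true functional form of f(t).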

9. Quantifying uncertainty, variability and likelihood for ordinary differential equation models

LENUS (Irish Health Repository)

Weisse, Andrea Y

2010-10-28

Abstract Background In many applications, ordinary differential equation (ODE) models are subject to uncertainty or variability in initial conditions and parameters. Both uncertainty and variability can be quantified in terms of a probability density function on the state and parameter space. Results The partial differential equation that describes the evolution of this probability density function has a form that is particularly amenable to application of the well-known method of characteristics. The value of the density at some point in time is directly accessible by the solution of the original ODE extended by a single extra dimension (for the value of the density). This leads to simple methods for studying uncertainty, variability and likelihood, with significant advantages over more traditional Monte Carlo and related approaches, especially when studying regions with low probability. Conclusions While such approaches based on the method of characteristics are common practice in other disciplines, their advantages for the study of biological systems have so far remained unrecognized. Several examples illustrate performance and accuracy of the approach and its limitations.
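The core idea, extending the original ODE by one extra dimension that carries the density value along each characteristic, can be checked on a one-dimensional linear ODE whose exact density is known. This is a hedged sketch, not the authors' code: for dx/dt = f(x), the log-density L along a trajectory satisfies dL/dt = -div f.

```python
import numpy as np
from scipy.integrate import solve_ivp
from scipy.stats import norm

# Augmented system: state x plus log-density L, with
#   dx/dt = f(x) = -x   and   dL/dt = -div f = 1
def extended_rhs(t, s):
    x, L = s
    return [-x, 1.0]

x0 = 0.7
L0 = norm.logpdf(x0)          # initial density: standard normal
sol = solve_ivp(extended_rhs, (0.0, 1.0), [x0, L0], rtol=1e-9, atol=1e-9)
x_t, L_t = sol.y[0, -1], sol.y[1, -1]

# Exact check: if x0 ~ N(0,1) and dx/dt = -x, then x(t) ~ N(0, e^{-2t})
exact = norm.logpdf(x_t, scale=np.exp(-1.0))
```

The augmented solution reproduces the exact log-density at the transported point, with no Monte Carlo sampling involved.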

10. Regional modeling of decadal rainfall variability over the Sahel

Energy Technology Data Exchange (ETDEWEB)

Herceg, Deborah [Rutgers University, Institute of Marine and Coastal Sciences (IMCS), New Brunswick, NJ (United States); Sobel, Adam H. [Columbia University, Department of Applied Physics and Applied Mathematics, Department of Earth and Environmental Sciences, New York, NY (United States); Columbia University, International Research Institute for Climate and Society (IRI), Palisades, NY (United States); Sun, Liqiang [Columbia University, International Research Institute for Climate and Society (IRI), Palisades, NY (United States)

2007-07-15

A regional climate model is used to investigate the mechanism of interdecadal rainfall variability, specifically the drought of the 1970s and 1980s, in the Sahel region of Africa. The model is the National Center for Environmental Prediction's (NCEP's) Regional Spectral Model (RSM97), with a horizontal resolution approximately equivalent to a grid spacing of 50 km, nested within the ECHAM4.5 atmospheric general circulation model (AGCM), which in turn was forced by observed sea surface temperature (SST). Simulations for the July-September season of the individual years 1955 and 1986 produced wet conditions in 1955 and dry conditions in 1986 in the Sahel, as observed. Additional July-September simulations were run forced by SSTs averaged for each month over the periods 1950-1959 and 1978-1987. These simulations yielded wet conditions in the 1950-1959 case and dry conditions in the 1978-1987 case, confirming the role of SST forcing in decadal variability in particular. To test the hypothesis that the SST influences Sahel rainfall via stabilization of the tropospheric sounding, simulations were performed in which the temperature field from the AGCM was artificially modified before it was used to force the regional model. We modified the original 1955 ECHAM4.5 temperature profiles by adding a horizontally uniform, vertically varying temperature increase, taken from the 1986-1955 tropical mean warming in either the AGCM or the NCEP/National Center for Atmospheric Research Reanalysis. When compared to the 1955 simulations without the added tropospheric warming, these simulations show a drying in the Sahel similar to that in the 1986-1955 difference and to the decadal difference between the 1980s and 1950s. This suggests that the tropospheric warming may have been, at least in part, the agent by which the SST increases led to the Sahel drought of the 1970s and 1980s. (orig.)

11. Ozone Concentration Prediction via Spatiotemporal Autoregressive Model With Exogenous Variables

Science.gov (United States)

Kamoun, W.; Senoussi, R.

2009-04-01

Forecasts of environmental variables are nowadays of main concern for public health and agricultural management. In this context a large literature is devoted to spatio-temporal modelling of these variables using different statistical approaches. However, most studies ignored the potential contribution of local (e.g. meteorological and/or geographical) covariables as well as the dynamical characteristics of observations. In this study, we present a spatiotemporal short-term forecasting model for ozone concentration based on regularly observed covariables in predefined geographical sites. Our driving system simply combines a multidimensional second-order autoregressive structured process with a linear regression model over influent exogenous factors and reads as follows: Z(t) = A(θ, D) × [Σ_{i=1..2} α_i Z(t−i)] + B(θ, D) × [Σ_{j=1..q} β_j X_j(t)] + ε(t), where Z(t) = (Z_1(t), …, Z_n(t)) represents the vector of ozone concentrations at time t over the n geographical sites, whereas X_j(t) = (X_1j(t), …, X_nj(t)) denotes the j-th exogenous variable observed over these sites. The n×n matrix functions A and B account for the spatial relationships between sites through the inter-site distance matrix D and a vector parameter θ. The multidimensional white noise ε is assumed to be Gaussian and spatially correlated but temporally independent. A covariance structure of Z that takes account of noise spatial dependences is deduced under a stationarity hypothesis and then included in the likelihood function. Statistical model and estimation procedure: Contrary to the widely used choice of a {0,1}-valued neighbour matrix A, we put forward two more natural choices of exponential or power decay. Moreover, the model proved stable enough to readily accommodate the crude observations without the usual tedious and somewhat arbitrary variable transformations. Data set and preliminary analysis: In our case, the ozone variable represents the daily maximum ozone
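A toy version of such a spatiotemporal autoregressive model with one exogenous covariable can be simulated and re-estimated in a few lines. The snippet below is an illustrative sketch only: it assumes the spatial matrices A and B share a known, row-normalized exponential-decay form, and it recovers the temporal coefficients by ordinary least squares rather than the likelihood procedure of the paper.

```python
import numpy as np

rng = np.random.default_rng(1)
n_sites, T, p = 5, 200, 2

# Inter-site distance matrix and exponential spatial decay (assumed form)
coords = rng.uniform(0, 100, size=(n_sites, 2))
D = np.linalg.norm(coords[:, None] - coords[None, :], axis=-1)
theta = 30.0
A = np.exp(-D / theta)
A /= A.sum(axis=1, keepdims=True)   # row-normalized spatial weights

alpha = np.array([0.5, 0.2])        # temporal AR coefficients
beta = 0.8                          # effect of the exogenous covariable
X = rng.normal(size=(T, n_sites))   # one exogenous variable per site

Z = np.zeros((T, n_sites))
for t in range(p, T):
    Z[t] = (A @ (alpha[0] * Z[t - 1] + alpha[1] * Z[t - 2])
            + A @ (beta * X[t])
            + 0.1 * rng.normal(size=n_sites))

# With A known, alpha and beta are recoverable by ordinary least squares
Y = Z[p:].ravel()
R1 = (Z[p - 1:T - 1] @ A.T).ravel()   # spatially weighted lag-1 term
R2 = (Z[p - 2:T - 2] @ A.T).ravel()   # spatially weighted lag-2 term
R3 = (X[p:] @ A.T).ravel()            # spatially weighted exogenous term
est, *_ = np.linalg.lstsq(np.column_stack([R1, R2, R3]), Y, rcond=None)
```

Row-normalizing A keeps the simulated process stable (the AR coefficients sum to less than one), which mirrors the stationarity hypothesis used for the covariance structure above.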

12. Blackboard architecture and qualitative model in a computer aided assistant designed to define computers for HEP computing

International Nuclear Information System (INIS)

Nodarse, F.F.; Ivanov, V.G.

1991-01-01

Using a BLACKBOARD architecture and a qualitative model, an expert system was developed to assist the user in defining the computing methods for High Energy Physics computing. The COMEX system requires an IBM AT personal computer or compatible with more than 640 Kb RAM and a hard disk. 5 refs.; 9 figs

13. Instrumental variables estimation under a structural Cox model

DEFF Research Database (Denmark)

Martinussen, Torben; Nørbo Sørensen, Ditte; Vansteelandt, Stijn

2017-01-01

Instrumental variable (IV) analysis is an increasingly popular tool for inferring the effect of an exposure on an outcome, as witnessed by the growing number of IV applications in epidemiology, for instance. The majority of IV analyses of time-to-event endpoints are, however, dominated by heuristic approaches. More rigorous proposals have either sidestepped the Cox model, or considered it within a restrictive context with dichotomous exposure and instrument, amongst other limitations. The aim of this article is to reconsider IV estimation under a structural Cox model, allowing for arbitrary exposure and instruments. We propose a novel class of estimators and derive their asymptotic properties. The methodology is illustrated using two real data applications, and using simulated data.

14. A rumor spreading model with variable forgetting rate

Science.gov (United States)

Zhao, Laijun; Xie, Wanlin; Gao, H. Oliver; Qiu, Xiaoyan; Wang, Xiaoli; Zhang, Shuhai

2013-12-01

A rumor spreading model with the consideration of forgetting rate changing over time is examined in small-world networks. The mean-field equations are derived to describe the dynamics of rumor spreading in small-world networks. Further, numerical solutions are conducted on LiveJournal, an online social blogging platform, to better understand the performance of the model. Results show that the forgetting rate has a significant impact on the final size of rumor spreading: the larger the initial forgetting rate or the faster the forgetting speed, the smaller the final size of the rumor spreading. Numerical solutions also show that the final size of rumor spreading is much larger under a variable forgetting rate compared to that under a constant forgetting rate.
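A minimal mean-field sketch of the qualitative mechanism is given below. It is not the paper's network model: the small-world structure is dropped, and a generic ignorant-spreader-stifler system is integrated with a constant versus an exponentially decaying forgetting rate (all parameter values are illustrative).

```python
import numpy as np
from scipy.integrate import solve_ivp

lam, delta0, mu = 0.8, 0.3, 0.5   # spreading rate, initial forgetting rate, decay

def rumor_rhs(t, y, variable):
    i, s, r = y                    # ignorant, spreader, stifler fractions
    delta = delta0 * np.exp(-mu * t) if variable else delta0
    spread = lam * i * s           # ignorants who hear the rumor
    forget = delta * s             # spreaders who forget and become stiflers
    return [-spread, spread - forget, forget]

y0 = [0.99, 0.01, 0.0]
const = solve_ivp(rumor_rhs, (0, 60), y0, args=(False,), rtol=1e-8)
var = solve_ivp(rumor_rhs, (0, 60), y0, args=(True,), rtol=1e-8)

# Final rumor size: fraction of the population that ever heard the rumor
size_const = 1.0 - const.y[0, -1]
size_var = 1.0 - var.y[0, -1]
```

In this toy system, the decaying forgetting rate lets spreaders stay active longer, so the final rumor size is larger than under a constant forgetting rate, consistent with the direction of the effect reported in the abstract.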

15. A step-indexed Kripke model of hidden state via recursive properties on recursively defined metric spaces

DEFF Research Database (Denmark)

Schwinghammer, Jan; Birkedal, Lars; Støvring, Kristian

2011-01-01

Frame and anti-frame rules have been proposed as proof rules for modular reasoning about programs. Frame rules allow one to hide irrelevant parts of the state during verification, whereas the anti-frame rule allows one to hide local state from the context. We give the first sound model for Charguéraud and Pottier's type and capability system including both frame and anti-frame rules. The model is a possible worlds model based on the operational semantics and step-indexed heap relations, and the worlds are constructed as a recursively defined predicate on a recursively defined metric space. We also extend the model to account for Pottier's generalized frame and anti-frame rules, where invariants are generalized to families of invariants indexed over pre-orders. This generalization enables reasoning about some well-bracketed as well as (locally) monotonic uses of local state.

16. Modeling the variability of shapes of a human placenta.

Science.gov (United States)

Yampolsky, M; Salafia, C M; Shlakhter, O; Haas, D; Eucker, B; Thorp, J

2008-09-01

Placentas are generally round/oval in shape, but "irregular" shapes are common. In the Collaborative Perinatal Project data, irregular shapes were associated with lower birth weight for placental weight, suggesting variably shaped placentas have altered function. (I) Using a 3D one-parameter model of placental vascular growth based on Diffusion Limited Aggregation (an accepted model for generating highly branched fractals), models were run with a branching density growth parameter either fixed or perturbed at either 5-7% or 50% of model growth. (II) In a data set with detailed measures of 1207 placental perimeters, radial standard deviations of placental shapes were calculated from the umbilical cord insertion, and from the centroid of the shape (a biologically arbitrary point). These two were compared to the difference between the observed scaling exponent and the Kleiber scaling exponent (0.75), considered optimal for vascular fractal transport systems. Significance was assessed with Spearman's rank correlation; radial standard deviation from the centroid was associated with differences from the Kleiber exponent (p=0.006). A dynamical DLA model recapitulates multilobate and "star" placental shapes via changing fractal branching density. We suggest that (1) irregular placental outlines reflect deformation of the underlying placental fractal vascular network, (2) such irregularities in placental outline indicate sub-optimal branching structure of the vascular tree, and (3) this accounts for the lower birth weight observed in non-round/oval placentas in the Collaborative Perinatal Project.
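The radial-standard-deviation measure used in step (II) is straightforward to compute from digitized perimeter points. The sketch below is an illustration of the measure only, not the study's code: the elliptical perimeter and the off-center cord-insertion point are hypothetical.

```python
import numpy as np

def radial_std(perimeter, ref):
    """Standard deviation of distances from a reference point to perimeter points."""
    r = np.linalg.norm(perimeter - ref, axis=1)
    return r.std()

# Hypothetical perimeter: an ellipse sampled at 360 points
t = np.linspace(0, 2 * np.pi, 360, endpoint=False)
ellipse = np.column_stack([10.0 * np.cos(t), 8.0 * np.sin(t)])

centroid = ellipse.mean(axis=0)      # biologically arbitrary reference
cord = np.array([3.0, 0.0])          # assumed off-center cord insertion

sd_centroid = radial_std(ellipse, centroid)
sd_cord = radial_std(ellipse, cord)
```

A perfect circle gives zero radial standard deviation about its centroid; any departure from round/oval, or an eccentric reference point such as a displaced cord insertion, increases the measure.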

17. Constrained variability of modeled T:ET ratio across biomes

Science.gov (United States)

Fatichi, Simone; Pappas, Christoforos

2017-07-01

A large variability (35-90%) in the ratio of transpiration to total evapotranspiration (referred to here as T:ET) across biomes or even at the global scale has been documented by a number of studies carried out with different methodologies. Previous empirical results also suggest that T:ET does not covary with mean precipitation and has a positive dependence on leaf area index (LAI). Here we use a mechanistic ecohydrological model, with a refined process-based description of evaporation from the soil surface, to investigate the variability of T:ET across biomes. Numerical results reveal a more constrained range and higher mean of T:ET (70 ± 9%, mean ± standard deviation) when compared to observation-based estimates. T:ET is confirmed to be independent from mean precipitation, while it is found to be correlated with LAI seasonally but uncorrelated across multiple sites. Larger LAI increases evaporation from interception but diminishes ground evaporation, with the two effects largely compensating each other. These results offer mechanistic model-based evidence to the ongoing research about the patterns of T:ET and the factors influencing its magnitude across biomes.

18. Hierarchical Bayesian nonparametric mixture models for clustering with variable relevance determination.

Science.gov (United States)

Yau, Christopher; Holmes, Chris

2011-07-01

We propose a hierarchical Bayesian nonparametric mixture model for clustering when some of the covariates are assumed to be of varying relevance to the clustering problem. This can be thought of as an issue in variable selection for unsupervised learning. We demonstrate that by defining a hierarchical population based nonparametric prior on the cluster locations scaled by the inverse covariance matrices of the likelihood we arrive at a 'sparsity prior' representation which admits a conditionally conjugate prior. This allows us to perform full Gibbs sampling to obtain posterior distributions over parameters of interest including an explicit measure of each covariate's relevance and a distribution over the number of potential clusters present in the data. This also allows for individual cluster specific variable selection. We demonstrate improved inference on a number of canonical problems.

19. Regression based modeling of vegetation and climate variables for the Amazon rainforests

Science.gov (United States)

Kodali, A.; Khandelwal, A.; Ganguly, S.; Bongard, J.; Das, K.

2015-12-01

Both short-term (weather) and long-term (climate) variations in the atmosphere directly impact various ecosystems on earth. Forest ecosystems, especially tropical forests, are crucial as they are the largest reserves of terrestrial carbon sink. For example, the Amazon forests are a critical component of the global carbon cycle, storing about 100 billion tons of carbon in their woody biomass. There is a growing concern that these forests could succumb to precipitation reduction in a progressively warming climate, leading to release of a significant amount of carbon into the atmosphere. Therefore, there is a need to accurately quantify the dependence of vegetation growth on different climate variables and obtain better estimates of drought-induced changes to atmospheric CO2. The availability of globally consistent climate and earth observation datasets has allowed global scale monitoring of various climate and vegetation variables such as precipitation, radiation, surface greenness, etc. Using these diverse datasets, we aim to quantify the magnitude and extent of ecosystem exposure, sensitivity and resilience to droughts in forests. The Amazon rainforests have undergone severe droughts twice in the last decade (2005 and 2010), which makes them an ideal candidate for the regional scale analysis. Current studies on vegetation and climate relationships have mostly explored linear dependence due to computational and domain knowledge constraints. We explore a modeling technique called symbolic regression based on evolutionary computation that allows discovery of the dependency structure without any prior assumptions. In symbolic regression the population of possible solutions is defined via tree structures. Each tree represents a mathematical expression that includes pre-defined functions (mathematical operators) and terminal sets (independent variables from data). Selection of these sets is critical to computational efficiency and model accuracy. In this work we investigate
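The tree representation described above can be made concrete in a few lines. The snippet is a hedged sketch of expression-tree evaluation only (no evolutionary search): the function set, the terminal names (precip, ndvi), and the example tree are all hypothetical.

```python
import operator

# Function set: operator name -> (callable, arity), as in symbolic regression
OPS = {'add': (operator.add, 2),
       'sub': (operator.sub, 2),
       'mul': (operator.mul, 2)}

def evaluate(node, env):
    """Recursively evaluate an expression tree against variable bindings."""
    if isinstance(node, str):            # terminal: an independent variable
        return env[node]
    if isinstance(node, (int, float)):   # terminal: a constant
        return node
    op, *args = node                     # internal node: (operator, child, ...)
    fn, _arity = OPS[op]
    return fn(*(evaluate(a, env) for a in args))

# Tree encoding the expression precip * (ndvi - 0.2) as nested tuples
tree = ('mul', 'precip', ('sub', 'ndvi', 0.2))
value = evaluate(tree, {'precip': 2.0, 'ndvi': 0.7})
```

In a full symbolic-regression run, a population of such trees would be mutated and recombined, with fitness measured by how well each tree's expression predicts the vegetation variable from the climate terminals.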

20. Multi-scale climate modelling over Southern Africa using a variable-resolution global model

CSIR Research Space (South Africa)

Engelbrecht, FA

2011-12-01

Full Text Available -resolution global simulations, to ultra-high resolution simulations at the micro-scale. The model used for these experiments is a variable-resolution global atmospheric model, the conformal-cubic atmospheric model (CCAM). It is shown that CCAM may be used to obtain...

1. Defining wet season water quality target concentrations for ecosystem conservation using empirical light attenuation models: A case study in the Great Barrier Reef (Australia).

Science.gov (United States)

Petus, Caroline; Devlin, Michelle; Teixera da Silva, Eduardo; Lewis, Stephen; Waterhouse, Jane; Wenger, Amelia; Bainbridge, Zoe; Tracey, Dieter

2018-05-01

Optically active water quality components (OAC) transported by flood plumes to nearshore marine environments affect light levels. The definition of minimum OAC concentrations that must be maintained to sustain sufficient light levels for conservation of light-dependent coastal ecosystems exposed to flood waters is necessary to guide management actions in adjacent catchments. In this study, a framework for defining OAC target concentrations using empirical light attenuation models is proposed and applied to the Wet Tropics region of the Great Barrier Reef (GBR) (Queensland, Australia). This framework comprises several steps: (i) light attenuation (Kd(PAR)) profiles and OAC measurements, including coloured dissolved organic matter (CDOM), chlorophyll-a (Chl-a) and suspended particulate matter (SPM) concentrations, collected in flood waters; (ii) empirical light attenuation models used to define the contribution of CDOM, Chl-a and SPM to the light attenuation; and (iii) translation of empirical models into manageable OAC target concentrations specific for wet season conditions. Results showed that (i) Kd(PAR) variability in the Wet Tropics flood waters is driven primarily by SPM and CDOM, with a lower contribution from Chl-a (r2 = 0.5, p reefs and seagrass ecosystems exposed to 'brownish' flood waters. Additional data will be collected to validate the light attenuation models and the wet season target concentrations which in future will be incorporated into wider catchment modelling efforts to improve coastal water quality in the Wet Tropics and the GBR. Copyright © 2018 Elsevier Ltd. All rights reserved.
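The step from an empirical light-attenuation model to a manageable target concentration can be sketched as follows. This illustrates the framework's logic on simulated data, not the study's calibration: the specific attenuation coefficients, the Kd(PAR) threshold, and the choice to hold CDOM and Chl-a at their median values are all assumptions.

```python
import numpy as np

rng = np.random.default_rng(7)
n = 200

# Hypothetical flood-plume samples: SPM (mg/L), CDOM (1/m), Chl-a (ug/L)
spm = rng.uniform(1, 50, n)
cdom = rng.uniform(0.1, 2.0, n)
chla = rng.uniform(0.1, 5.0, n)

# Assumed 'true' specific attenuation coefficients plus clear-water baseline
kd = 0.04 + 0.05 * spm + 0.3 * cdom + 0.02 * chla + rng.normal(0, 0.05, n)

# Step (ii): empirical light-attenuation model via multiple linear regression
D = np.column_stack([np.ones(n), spm, cdom, chla])
coef, *_ = np.linalg.lstsq(D, kd, rcond=None)
k0, k_spm, k_cdom, k_chla = coef

# Step (iii): invert the model into a target, here the maximum SPM that keeps
# Kd(PAR) below an assumed light threshold, with CDOM and Chl-a at medians
kd_max = 0.8
spm_target = (kd_max - k0 - k_cdom * np.median(cdom)
              - k_chla * np.median(chla)) / k_spm
```

The same inversion could be repeated for CDOM or Chl-a; the key design choice is which components are held fixed when a single-component target is quoted.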

2. Emerging technologies to create inducible and genetically defined porcine cancer models

Directory of Open Access Journals (Sweden)

Lawrence B Schook

2016-02-01

Full Text Available There is an emerging need for new animal models that address unmet translational cancer research requirements. Transgenic porcine models provide an exceptional opportunity due to their genetic, anatomic and physiological similarities with humans. Due to recent advances in the sequencing of domestic animal genomes and the development of new organism cloning technologies, it is now very feasible to utilize pigs as a malleable species, with anatomic and physiological features similar to those of humans, in which to develop cancer models. In this review, we discuss genetic modification technologies successfully used to produce porcine biomedical models, in particular the Cre-loxP System, as well as major advances in and perspectives on the CRISPR/Cas9 System. Recent advancements in porcine tumor modeling and genome editing will bring porcine models to the forefront of translational cancer research.

3. Transient modelling of a natural circulation loop under variable pressure

Energy Technology Data Exchange (ETDEWEB)

Vianna, Andre L.B.; Faccini, Jose L.H.; Su, Jian, E-mail: avianna@nuclear.ufrj.br, E-mail: sujian@nuclear.ufrj.br, E-mail: faccini@ien.gov.br [Coordenacao de Pos-Graduacao e Pesquisa de Engenharia (COPPE/UFRJ), Rio de Janeiro, RJ (Brazil). Programa de Engenharia Nuclear; Instituto de Engenharia Nuclear (IEN/CNEN-RJ), Rio de Janeiro, RJ (Brazil). Lab. de Termo-Hidraulica Experimental

2017-07-01

The objective of the present work is to model the transient operation of a natural circulation loop, which is one-tenth scale in height to a typical Passive Residual Heat Removal system (PRHR) of an Advanced Pressurized Water Nuclear Reactor and was designed to meet the single- and two-phase flow similarity criteria with respect to it. The loop consists of a core barrel with electrically heated rods, upper and lower plena interconnected by hot and cold pipe legs to a seven-tube shell heat exchanger of countercurrent design, and an expansion tank with a descending tube. A long transient characterized the loop operation, during which a phenomenon of self-pressurization, without self-regulation of the pressure, was experimentally observed. This represented a unique situation, named natural circulation under variable pressure (NCVP). The self-pressurization originated in the air trapped in the expansion tank and compressed by the loop water dilatation as it heated up during each experiment. The mathematical model, initially oriented to single-phase flow, included the heat capacity of the structure and employed a cubic polynomial approximation for the density in the buoyancy term calculation. The heater was modelled taking into account the different heat capacities of the heating elements and the heater walls. The heat exchanger was modelled considering the coolant heating during the heat exchanging process. The self-pressurization was modelled as an isentropic compression of a perfect gas. The whole model was computationally implemented via a set of finite difference equations. The corresponding solution algorithm was explicit and of the marching type in the time discretization, with an upwind scheme for the space discretization. The computational program was implemented in MATLAB. Several experiments were carried out in the natural circulation loop, having the coolant flow rate and the heating power as control parameters. The variables used in the
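The self-pressurization term, modelled in the abstract as isentropic compression of a perfect gas, can be sketched directly: as the loop water dilates, the trapped-air volume shrinks and p V^gamma stays constant. The initial pressure and trapped-air volume below are assumed values, not loop data.

```python
# Hedged sketch: self-pressurization of air trapped in the expansion tank,
# treated as isentropic compression of a perfect gas (p * V**gamma = const).
gamma = 1.4            # ratio of specific heats for air
p0 = 101.325e3         # initial pressure (Pa), assumed atmospheric
V0 = 0.010             # initial trapped-air volume (m^3), assumed

def pressure(dV_water):
    """Air pressure after the loop water expands by dV_water into the air space."""
    V = V0 - dV_water
    return p0 * (V0 / V) ** gamma

p_hot = pressure(0.002)   # e.g. water dilates by 2 liters during heat-up
```

Because nothing regulates the gas pressure, it rises monotonically with water dilatation, which is the "self-pressurization without self-regulation" behaviour described above.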

4. Evidence evaluation in fingerprint comparison and automated fingerprint identification systems--Modeling between finger variability.

Science.gov (United States)

Egli Anthonioz, N M; Champod, C

2014-02-01

In the context of the investigation of the use of automated fingerprint identification systems (AFIS) for the evaluation of fingerprint evidence, the current study presents investigations into the variability of scores from an AFIS system when fingermarks from a known donor are compared to fingerprints that are not from the same source. The ultimate goal is to propose a model, based on likelihood ratios, which allows the evaluation of mark-to-print comparisons. In particular, this model, through its use of AFIS technology, benefits from the possibility of using a large amount of data, as well as from an already built-in proximity measure, the AFIS score. More precisely, the numerator of the LR is obtained from scores issued from comparisons between impressions from the same source and showing the same minutia configuration. The denominator of the LR is obtained by extracting scores from comparisons of the questioned mark with a database of non-matching sources. This paper focuses solely on the assignment of the denominator of the LR. We refer to it by the generic term of between-finger variability. The issues addressed in this paper in relation to between-finger variability are the required sample size, the influence of the finger number and general pattern, as well as that of the number of minutiae included and their configuration on a given finger. Results show that reliable estimation of between-finger variability is feasible with 10,000 scores. These scores should come from the appropriate finger number/general pattern combination as defined by the mark. Furthermore, strategies of obtaining between-finger variability when these elements cannot be conclusively seen on the mark (and its position with respect to other marks for finger number) have been presented. These results immediately allow case-by-case estimation of the between-finger variability in an operational setting. Copyright © 2014 Elsevier Ireland Ltd. All rights reserved.
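The assignment of the LR denominator from a large sample of non-match scores can be sketched with a kernel density estimate. Everything below is hypothetical: the score distributions are invented stand-ins for AFIS output, and a same-source sample is included only so a complete LR can be formed.

```python
import numpy as np
from scipy.stats import gaussian_kde

rng = np.random.default_rng(42)

# Hypothetical AFIS scores from comparing a mark against 10,000 non-matching
# fingerprints: the 'between-finger variability' sample (denominator)
between_finger = rng.gamma(shape=2.0, scale=50.0, size=10_000)

# Hypothetical same-source scores for the numerator
same_source = rng.normal(loc=800.0, scale=100.0, size=2_000)

num_density = gaussian_kde(same_source)
den_density = gaussian_kde(between_finger)

def likelihood_ratio(score):
    """LR = density under same-source / density under between-finger scores."""
    return num_density(score)[0] / den_density(score)[0]

lr_high = likelihood_ratio(600.0)   # score typical of same-source comparisons
lr_low = likelihood_ratio(200.0)    # score typical of non-matches
```

In practice the denominator sample would be drawn, as the abstract describes, from the appropriate finger number and general pattern combination for the mark at hand, rather than from a single pooled distribution.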

5. Defining assessment projects and scenarios for policy support: Use of ontology in Integrated Assessment Modelling

NARCIS (Netherlands)

Janssen, S.; Ewert, F.; Hongtao, Li; Anthanasiadis, I.N.; Wien, J.J.F.; Therond, O.; Knapen, M.J.R.; Bezlepkina, I.; Alkan-Olsson, J.; Rizzoli, A.E.; Belhouchette, H.; Svensson, M.; Ittersum, van M.K.

2009-01-01

Integrated Assessment and Modelling (IAM) provides an interdisciplinary approach to support ex-ante decision-making by combining quantitative models representing different systems and scales into a framework for integrated assessment. Scenarios in IAM are developed in the interaction between

6. Models for Defining LCM, Monitoring LCM Practice and Assessing its Feasibility

DEFF Research Database (Denmark)

Sanchez, Inés Garcia; Wenzel, Henrik; Jørgensen, Michael Søgaard

2005-01-01

A theoretical approach to life-cycle management (LCM) is used to develop an operational model helping companies to understand their present level of LCM and implement LCM as management strategy. As part of the development the model has been tested and evaluated by a number of Danish companies with...

7. Communicative Syllabus Design: A Sociolinguistic Model for Defining the Content of Purpose-Specific Language Programmes.

Science.gov (United States)

Munby, John

The design of a dynamic processing model for teaching English for Specific Purposes (ESP) is discussed in this book. The model starts with the learner and ends with the learner's target communicative competence in the particular area needed. In this communicative approach to syllabus design, the first chapter is a discussion of theories of…

8. Defining the role of polyamines in colon carcinogenesis using mouse models

Directory of Open Access Journals (Sweden)

Natalia A Ignatenko

2011-01-01

Full Text Available Genetics and diet are both considered important risk determinants for colorectal cancer, a leading cause of death in the US and worldwide. Genetically engineered mouse (GEM) models have made a significant contribution to the characterization of colorectal cancer risk factors. Reliable, reproducible, and clinically relevant animal models help in the identification of the molecular events associated with disease progression and in the development of effective treatment strategies. This review is focused on the use of mouse models for studying the role of polyamines in colon carcinogenesis. We describe how the available mouse models of colon cancer such as the multiple intestinal neoplasia (Min) mice and knockout genetic models facilitate understanding of the role of polyamines in colon carcinogenesis and help in the development of a rational strategy for colon cancer chemoprevention.

9. Resolving structural variability in network models and the brain.

Directory of Open Access Journals (Sweden)

Florian Klimm

2014-03-01

Full Text Available Large-scale white matter pathways crisscrossing the cortex create a complex pattern of connectivity that underlies human cognitive function. Generative mechanisms for this architecture have been difficult to identify in part because little is known in general about mechanistic drivers of structured networks. Here we contrast network properties derived from diffusion spectrum imaging data of the human brain with 13 synthetic network models chosen to probe the roles of physical network embedding and temporal network growth. We characterize both the empirical and synthetic networks using familiar graph metrics, but presented here in a more complete statistical form, as scatter plots and distributions, to reveal the full range of variability of each measure across scales in the network. We focus specifically on the degree distribution, degree assortativity, hierarchy, topological Rentian scaling, and topological fractal scaling--in addition to several summary statistics, including the mean clustering coefficient, the shortest path length, and the network diameter. The models are investigated in a progressive, branching sequence, aimed at capturing different elements thought to be important in the brain, and range from simple random and regular networks, to models that incorporate specific growth rules and constraints. We find that synthetic models that constrain the network nodes to be physically embedded in anatomical brain regions tend to produce distributions that are most similar to the corresponding measurements for the brain. We also find that network models hardcoded to display one network property (e.g., assortativity) do not in general simultaneously display a second (e.g., hierarchy). This relative independence of network properties suggests that multiple neurobiological mechanisms might be at play in the development of human brain network architecture. Together, the network models that we develop and employ provide a potentially useful

10. Quantitative Analysis of Memristance Defined Exponential Model for Multi-bits Titanium Dioxide Memristor Memory Cell

Directory of Open Access Journals (Sweden)

DAOUD, A. A. D.

2016-05-01

Full Text Available The ability to store multiple bits in a single memristor-based memory cell is a key feature for high-capacity memory packages. Studying multi-bit memristor circuits requires high accuracy in modelling the memristance change. A memristor model based on a novel definition of memristance is proposed. A design of a single-memristor memory cell using the proposed model for the platinum-electrode titanium dioxide memristor is illustrated. A specific voltage pulse is applied, varying its parameters (amplitude or pulse width) to store different numbers of states in a single memristor. New state-variation parameters associated with the utilized model are provided, and their effects on the write and read processes of memristive multi-states are analysed. PSPICE simulations are also performed, and they show good agreement with the data obtained from the analysis.

11. Enhancing metaproteomics-The value of models and defined environmental microbial systems

Energy Technology Data Exchange (ETDEWEB)

Herbst, Florian-Alexander [Department of Chemistry and Bioscience, Center for Microbial Communities, Aalborg University, Aalborg Denmark; Lünsmann, Vanessa [Department of Proteomics, Helmholtz Centre for Environmental Research-UFZ, Leipzig Germany; Department of Environmental Biotechnology, Helmholtz Centre for Environmental Research-UFZ, Leipzig Germany; Kjeldal, Henrik [Department of Chemistry and Bioscience, Center for Microbial Communities, Aalborg University, Aalborg Denmark; Jehmlich, Nico [Department of Proteomics, Helmholtz Centre for Environmental Research-UFZ, Leipzig Germany; Tholey, Andreas [Systematic Proteome Research and Bioanalytics, Institute for Experimental Medicine, Christian-Albrechts-Universität zu Kiel, Kiel Germany; von Bergen, Martin [Department of Chemistry and Bioscience, Center for Microbial Communities, Aalborg University, Aalborg Denmark; Department of Proteomics, Helmholtz Centre for Environmental Research-UFZ, Leipzig Germany; Nielsen, Jeppe Lund [Department of Chemistry and Bioscience, Center for Microbial Communities, Aalborg University, Aalborg Denmark; Hettich, Robert L. [Chemical Sciences Division, Oak Ridge National Lab, Oak Ridge TN USA; Seifert, Jana [Institute of Animal Science, University of Hohenheim, Stuttgart Germany; Nielsen, Per Halkjaer [Department of Chemistry and Bioscience, Center for Microbial Communities, Aalborg University, Aalborg Denmark

2016-01-21

Metaproteomics, the large-scale characterization of the entire protein complement of environmental microbiota at a given point in time, has added unique features and possibilities to the study of environmental microbial communities and the unravelling of these “black boxes”. New technical challenges arose that were not an issue for classical proteome analytics, and choosing a model system appropriate to the research question can be difficult. Here, we review different model systems for metaproteome analysis. Following a short introduction to microbial communities and systems, we discuss the most used systems, ranging from technical systems and rhizosphere models to systems for the medical field. This includes acid mine drainage, anaerobic digesters, activated sludge, planted fixed-bed reactors, gastrointestinal simulators and in vivo models. Model systems are useful to evaluate the challenges encountered within (but not limited to) metaproteomics, including species complexity and coverage, biomass availability and reliable protein extraction. The implementation of model systems can be considered a step forward to better understand microbial responses and the ecological distribution of member organisms. In the future, novel improvements will be necessary to fully engage complex environmental systems.

12. Bayesian analysis of growth curves using mixed models defined by stochastic differential equations.

Science.gov (United States)

Donnet, Sophie; Foulley, Jean-Louis; Samson, Adeline

2010-09-01

Growth curve data consist of repeated measurements of a continuous growth process over time in a population of individuals. These data are classically analyzed by nonlinear mixed models. However, the standard growth functions used in this context prescribe monotone increasing growth and can fail to model unexpected changes in growth rates. We propose to model these variations using stochastic differential equations (SDEs) that are deduced from the standard deterministic growth function by adding random variations to the growth dynamics. A Bayesian inference of the parameters of these SDE mixed models is developed. In the case when the SDE has an explicit solution, we describe an easily implemented Gibbs algorithm. When the conditional distribution of the diffusion process has no explicit form, we propose to approximate it using the Euler-Maruyama scheme. Finally, we suggest validating the SDE approach via criteria based on the predictive posterior distribution. We illustrate the efficiency of our method using the Gompertz function to model data on chicken growth, the modeling being improved by the SDE approach. © 2009 INRA, Government of France.
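The Euler-Maruyama scheme mentioned above is straightforward to implement. The sketch below simulates one path of a stochastic Gompertz growth model with multiplicative noise; this is an illustrative parameterization with hypothetical coefficients, not the model or data fitted in the paper:

```python
import numpy as np

def euler_maruyama_gompertz(x0, a, k, sigma, t_end, n_steps, rng):
    """Simulate one path of dX = a*X*log(K/X) dt + sigma*X dW
    (a stochastic Gompertz growth model, illustrative parameterization)."""
    dt = t_end / n_steps
    x = np.empty(n_steps + 1)
    x[0] = x0
    for i in range(n_steps):
        drift = a * x[i] * np.log(k / x[i])
        noise = sigma * x[i] * np.sqrt(dt) * rng.standard_normal()
        x[i + 1] = max(x[i] + drift * dt + noise, 1e-9)  # keep state positive
    return x

rng = np.random.default_rng(0)
path = euler_maruyama_gompertz(x0=50.0, a=0.04, k=2500.0, sigma=0.02,
                               t_end=100.0, n_steps=1000, rng=rng)
```

The drift term is the standard deterministic Gompertz growth rate; the diffusion term adds the random variation in growth dynamics that the SDE mixed-model approach exploits.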

13. On joint deterministic grid modeling and sub-grid variability conceptual framework for model evaluation

Science.gov (United States)

Ching, Jason; Herwehe, Jerold; Swall, Jenise

The general situation, exemplified in urban areas, in which a significant degree of sub-grid variability (SGV) exists in grid models poses problems when comparing grid-based air-quality modeling results with observations. Typically, grid models ignore or parameterize processes and features that are at their sub-grid scale. Also, observations may be obtained in an area where significant spatial variability in the concentration fields exists. Consequently, model results and observations cannot be expected to be equal. To address this issue, we suggest a framework that can provide for qualitative judgments on model performance based on comparing observations to the grid predictions and their SGV distributions. Further, we (a) explore some characteristics of SGV, (b) comment on the contributions to SGV and (c) examine the implications for modeling results at coarse grid resolution, using examples from fine-scale grid modeling with the Community Multi-scale Air Quality (CMAQ) modeling system.
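The suggested comparison of an observation against a grid prediction and its SGV distribution can be sketched numerically. Everything here is a hypothetical placeholder (the lognormal sub-grid concentrations, the cell counts, the observation value), intended only to show the idea of judging model-observation agreement via the sub-grid distribution rather than the coarse mean alone:

```python
import numpy as np

rng = np.random.default_rng(1)
# Hypothetical fine-scale concentrations (e.g. 144 fine cells inside one
# coarse grid cell); the coarse model effectively reports only their mean.
fine = rng.lognormal(mean=3.0, sigma=0.4, size=144)
coarse_prediction = fine.mean()

def sgv_percentile(observation, fine_cells):
    """Percentile (0-100) of an observation within the sub-grid distribution."""
    return 100.0 * np.mean(fine_cells <= observation)

obs = 25.0
p = sgv_percentile(obs, fine)
# An observation far in the tails suggests disagreement beyond what SGV can
# explain; a mid-range percentile means coarse mean and observation may still
# be consistent even though they are not equal.
```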

14. Regional Climate Variability Under Model Simulations of Solar Geoengineering

Science.gov (United States)

Dagon, Katherine; Schrag, Daniel P.

2017-11-01

Solar geoengineering has been shown in modeling studies to successfully mitigate global mean surface temperature changes from greenhouse warming. Changes in land surface hydrology are complicated by the direct effect of carbon dioxide (CO2) on vegetation, which alters the flux of water from the land surface to the atmosphere. Here we investigate changes in boreal summer climate variability under solar geoengineering using multiple ensembles of model simulations. We find that spatially uniform solar geoengineering creates a strong meridional gradient in the Northern Hemisphere temperature response, with less consistent patterns in precipitation, evapotranspiration, and soil moisture. Using regional summertime temperature and precipitation results across 31-member ensembles, we show a decrease in the frequency of heat waves and consecutive dry days under solar geoengineering relative to a high-CO2 world. However in some regions solar geoengineering of this amount does not completely reduce summer heat extremes relative to present day climate. In western Russia and Siberia, an increase in heat waves is connected to a decrease in surface soil moisture that favors persistent high temperatures. Heat waves decrease in the central United States and the Sahel, while the hydrologic response increases terrestrial water storage. Regional changes in soil moisture exhibit trends over time as the model adjusts to solar geoengineering, particularly in Siberia and the Sahel, leading to robust shifts in climate variance. These results suggest potential benefits and complications of large-scale uniform climate intervention schemes.

15. Development of a plug-in for Variability Modeling in Software Product Lines

Directory of Open Access Journals (Sweden)

María Lucía López-Araujo

2012-03-01

16. STUDENT-DEFINED QUALITY BY KANO MODEL: A CASE STUDY OF ENGINEERING STUDENTS IN INDIA

Directory of Open Access Journals (Sweden)

Ismail Wilson Taifa

2016-09-01

Full Text Available Engineering students in India, like those elsewhere worldwide, need well-designed classroom furniture that enables them to attend lectures without long-term negative effects. Engineering students in India have not previously been asked for their requirements for improving the mostly outdated furniture at their colleges. Among the available techniques, the Kano Model is one of the most effective improvement approaches. The main objective of the study was to identify and categorise the main attributes of classroom furniture for the purpose of increasing student satisfaction in the long run. The Kano Model was applied to compile an exhaustive list of requirements for redesigning classroom furniture. Cronbach's alpha was computed with the help of SPSS 16.0 for validation purposes; it ranged between 0.8 and 0.9, indicating good internal consistency. Further research could integrate the Kano Model with Quality Function Deployment.
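Cronbach's alpha, used above for validation, is simple to compute directly from the item-score matrix. The sketch below uses hypothetical Likert-scale responses, not the study's survey data:

```python
import numpy as np

def cronbach_alpha(items):
    """Cronbach's alpha for an (n_respondents x k_items) score matrix:
    alpha = k/(k-1) * (1 - sum of item variances / variance of total score)."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)
    total_var = items.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1.0 - item_vars.sum() / total_var)

# Hypothetical 5-point Likert responses from 6 students on 4 furniture items
scores = np.array([[4, 5, 4, 5],
                   [3, 4, 3, 4],
                   [5, 5, 4, 5],
                   [2, 3, 2, 3],
                   [4, 4, 5, 4],
                   [3, 3, 3, 4]])
alpha = cronbach_alpha(scores)
```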

17. Innovation and dynamic capabilities of the firm: Defining an assessment model

Directory of Open Access Journals (Sweden)

André Cherubini Alves

2017-05-01

Full Text Available Innovation and dynamic capabilities have gained considerable attention in both academia and practice. While one of the oldest inquiries in the economics and strategy literature involves understanding the features that drive business success and a firm's perpetuity, the literature still lacks a comprehensive model of innovation and dynamic capabilities. This study presents a model that assesses firms' innovation and dynamic capabilities based on four essential capabilities: development, operations, management, and transaction capabilities. Data from a survey of 1,107 Brazilian manufacturing firms were used for empirical testing and discussion of the dynamic capabilities framework. Regression and factor analyses validated the model; we discuss the results, contrasting them with the dynamic capabilities framework. Operations capability is the least dynamic of all the capabilities, with the least influence on innovation. This reinforces the notion of operations capabilities as “ordinary capabilities,” whereas management, development, and transaction capabilities better explain firms' dynamics and innovation.

18. CRT--Cascade Routing Tool to define and visualize flow paths for grid-based watershed models

Science.gov (United States)

Henson, Wesley R.; Medina, Rose L.; Mayers, C. Justin; Niswonger, Richard G.; Regan, R.S.

2013-01-01

The U.S. Geological Survey Cascade Routing Tool (CRT) is a computer application for watershed models that include the coupled Groundwater and Surface-water FLOW model, GSFLOW, and the Precipitation-Runoff Modeling System (PRMS). CRT generates output to define cascading surface and shallow subsurface flow paths for grid-based model domains. CRT requires a land-surface elevation for each hydrologic response unit (HRU) of the model grid; these elevations can be derived from a Digital Elevation Model raster data set of the area containing the model domain. Additionally, a list is required of the HRUs containing streams, swales, lakes, and other cascade termination features along with indices that uniquely define these features. Cascade flow paths are determined from the altitudes of each HRU. Cascade paths can cross any of the four faces of an HRU to a stream or to a lake within or adjacent to an HRU. Cascades can terminate at a stream, lake, or HRU that has been designated as a watershed outflow location.
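The cascade-path idea described above (each HRU routes across one of its four faces to a lower neighbor, and cascades terminate at streams, lakes, or designated outflow cells) can be sketched as a steepest-descent search over a small elevation grid. This is an illustrative simplification, not CRT's actual algorithm:

```python
import numpy as np

def d4_cascades(elev, terminators):
    """For each cell, link to its lowest strictly-lower neighbor across one
    of the four cell faces. `terminators` marks stream/lake/outflow cells,
    where cascades end and no outgoing link is created."""
    nrow, ncol = elev.shape
    links = {}
    for r in range(nrow):
        for c in range(ncol):
            if terminators[r, c]:
                continue  # cascade terminates here
            best = None
            for dr, dc in ((-1, 0), (1, 0), (0, -1), (0, 1)):
                rr, cc = r + dr, c + dc
                if 0 <= rr < nrow and 0 <= cc < ncol and elev[rr, cc] < elev[r, c]:
                    if best is None or elev[rr, cc] < elev[best]:
                        best = (rr, cc)
            if best is not None:
                links[(r, c)] = best
    return links

elev = np.array([[9., 8., 7.],
                 [8., 6., 5.],
                 [7., 5., 3.]])
term = np.zeros_like(elev, dtype=bool)
term[2, 2] = True  # designated watershed outflow cell
links = d4_cascades(elev, term)
```

Following the links from any cell eventually reaches the outflow, mirroring how CRT-defined cascades deliver surface and shallow subsurface flow to streams, lakes, or watershed outlets.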

19. Straight line fitting and predictions: On a marginal likelihood approach to linear regression and errors-in-variables models

Science.gov (United States)

Christiansen, Bo

2015-04-01

Linear regression methods are without doubt the most used approaches to describe and predict data in the physical sciences. They are often good first-order approximations and they are in general easier to apply and interpret than more advanced methods. However, even the properties of univariate regression can lead to debate over the appropriateness of various models, as witnessed by the recent discussion about climate reconstruction methods. Before linear regression is applied, important choices have to be made regarding the origins of the noise terms and regarding which of the two variables under consideration should be treated as the independent variable. These decisions are often not easy to make, but they may have a considerable impact on the results. We seek to give a unified probabilistic (Bayesian with flat priors) treatment of univariate linear regression and prediction by taking, as a starting point, the general errors-in-variables model (Christiansen, J. Clim., 27, 2014-2031, 2014). Other versions of linear regression can be obtained as limits of this model. We derive the likelihood of the model parameters and predictands of the general errors-in-variables model by marginalizing over the nuisance parameters. The resulting likelihood is relatively simple and easy to analyze and calculate. The well-known unidentifiability of the errors-in-variables model is manifested as the absence of a well-defined maximum in the likelihood. However, this does not mean that probabilistic inference cannot be made; the marginal likelihoods of model parameters and the predictands have, in general, well-defined maxima. We also include a probabilistic version of classical calibration and show how it is related to the errors-in-variables model. The results are illustrated by an example from the coupling between the lower stratosphere and the troposphere in the Northern Hemisphere winter.
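A minimal numerical illustration of why these choices matter: when the predictor itself carries noise, ordinary least squares attenuates the slope, while an errors-in-variables estimator recovers it. Here we use Deming regression with a known error-variance ratio, a simple special case rather than the paper's marginal-likelihood treatment; all data are synthetic:

```python
import numpy as np

def deming_slope(x, y, delta=1.0):
    """Deming (errors-in-variables) slope estimate for a known ratio
    delta = var(noise in y) / var(noise in x)."""
    sxx = np.var(x, ddof=1)
    syy = np.var(y, ddof=1)
    sxy = np.cov(x, y, ddof=1)[0, 1]
    d = syy - delta * sxx
    return (d + np.sqrt(d * d + 4.0 * delta * sxy * sxy)) / (2.0 * sxy)

rng = np.random.default_rng(2)
true_x = rng.normal(0.0, 2.0, size=2000)
x = true_x + rng.normal(0.0, 1.0, size=2000)        # observed, noisy predictor
y = 1.5 * true_x + rng.normal(0.0, 1.0, size=2000)  # true slope is 1.5
ols = np.cov(x, y, ddof=1)[0, 1] / np.var(x, ddof=1)  # attenuated toward 0
eiv = deming_slope(x, y, delta=1.0)                   # close to 1.5
```

With these noise levels, OLS is biased toward roughly 0.8 times the true slope (the classical attenuation factor), whereas the errors-in-variables estimate is approximately unbiased.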

20. Phosphoproteomics-based modeling defines the regulatory mechanism underlying aberrant EGFR signaling.

Directory of Open Access Journals (Sweden)

Shinya Tasaki

Full Text Available BACKGROUND: Mutation of the epidermal growth factor receptor (EGFR) results in discordant cell signaling, leading to the development of various diseases. However, the mechanism underlying the alteration of downstream signaling due to such mutation has not yet been completely understood at the system level. Here, we report a phosphoproteomics-based methodology for characterizing the regulatory mechanism underlying aberrant EGFR signaling using computational network modeling. METHODOLOGY/PRINCIPAL FINDINGS: Our phosphoproteomic analysis of the mutation at tyrosine 992 (Y992), one of the multifunctional docking sites of EGFR, revealed network-wide effects of the mutation on EGF signaling in a time-resolved manner. Computational modeling based on the temporal activation profiles enabled us not only to rediscover already-known protein interactions with Y992 and the internalization property of mutated EGFR but also to gain model-driven insights into the effect of cellular context and the regulation of EGFR degradation. Our kinetic model also suggested critical reactions facilitating the reconstruction of the diverse effects of the mutation on phosphoproteome dynamics. CONCLUSIONS/SIGNIFICANCE: Our integrative approach provided a mechanistic description of the disorders of mutated EGFR signaling networks, which could facilitate the development of a systematic strategy toward controlling disease-related cell signaling.

1. Examples of EOS Variables as compared to the UMM-Var Data Model

Science.gov (United States)

Cantrell, Simon; Lynnes, Chris

2016-01-01

In effort to provide EOSDIS clients a way to discover and use variable data from different providers, a Unified Metadata Model for Variables is being created. This presentation gives an overview of the model and use cases we are handling.

2. Modelling carbon and nitrogen turnover in variably saturated soils

Science.gov (United States)

Batlle-Aguilar, J.; Brovelli, A.; Porporato, A.; Barry, D. A.

2009-04-01

Natural ecosystems provide services such as ameliorating the impacts of deleterious human activities on both surface and groundwater. For example, several studies have shown that a healthy riparian ecosystem can reduce the nutrient loading of agricultural wastewater, thus protecting the receiving surface water body. As a result, in order to develop better protection strategies and/or restore natural conditions, there is a growing interest in understanding ecosystem functioning, including feedbacks and nonlinearities. Biogeochemical transformations in soils are heavily influenced by microbial decomposition of soil organic matter. Carbon and nutrient cycles are in turn strongly sensitive to environmental conditions, primarily to soil moisture and temperature. These two physical variables affect the reaction rates of almost all soil biogeochemical transformations, including microbial and fungal activity, nutrient uptake and release from plants, etc. Soil water saturation and temperature are not constant, but vary both in space and time, further complicating the picture. In order to interpret field experiments and elucidate the different mechanisms taking place, numerical tools are beneficial. In this work we developed a 3D numerical reactive-transport model as an aid in investigating the complex physical, chemical and biological interactions occurring in soils. The new code couples the USGS models (MODFLOW 2000-VSF, MT3DMS and PHREEQC) using an operator-splitting algorithm, and is a further development of the existing reactive/density-dependent flow model PHWAT. The model was tested using simplified test cases. Following verification, a process-based biogeochemical reaction network describing the turnover of carbon and nitrogen in soils was implemented. Using this tool, we investigated the coupled effect of moisture content and temperature fluctuations on nitrogen and organic matter cycling in the riparian zone, in order to help understand the relative
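The operator-splitting idea used to couple the flow, transport and geochemistry codes can be illustrated in one dimension: each time step applies a transport operator and then a reaction operator in sequence. Below is a minimal sketch with explicit upwind advection and first-order decay; all parameters are hypothetical and unrelated to the PHWAT coupling itself:

```python
import numpy as np

def advect_react_split(c, velocity, dx, dt, k_decay, n_steps):
    """Sequential (non-iterative) operator splitting: an explicit upwind
    advection step followed by an exact first-order decay step each dt."""
    c = c.copy()
    cfl = velocity * dt / dx
    assert cfl <= 1.0, "CFL condition for the explicit upwind scheme"
    for _ in range(n_steps):
        # transport operator (upwind, zero inflow at the left boundary)
        c[1:] = c[1:] - cfl * (c[1:] - c[:-1])
        c[0] = c[0] - cfl * c[0]
        # reaction operator: exact solution of dc/dt = -k*c over dt
        c *= np.exp(-k_decay * dt)
    return c

c0 = np.zeros(100)
c0[10:20] = 1.0  # initial solute pulse
c = advect_react_split(c0, velocity=1.0, dx=1.0, dt=0.5, k_decay=0.01, n_steps=80)
```

The pulse translates downstream while its total mass decays by exp(-k*t), showing how the split transport and reaction operators each act independently within a time step.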

3. Defining New Therapeutics Using a More Immunocompetent Mouse Model of Antibody-Enhanced Dengue Virus Infection.

Science.gov (United States)

Pinto, Amelia K; Brien, James D; Lam, Chia-Ying Kao; Johnson, Syd; Chiang, Cindy; Hiscott, John; Sarathy, Vanessa V; Barrett, Alan D; Shresta, Sujan; Diamond, Michael S

2015-09-15

With over 3.5 billion people at risk and approximately 390 million human infections per year, dengue virus (DENV) disease strains health care resources worldwide. Previously, we and others established models for DENV pathogenesis in mice that completely lack subunits of the receptors (Ifnar and Ifngr) for type I and type II interferon (IFN) signaling; however, the utility of these models is limited by the pleiotropic effect of these cytokines on innate and adaptive immune system development and function. Here, we demonstrate that the specific deletion of Ifnar expression on subsets of murine myeloid cells (LysM Cre(+) Ifnar(flox/flox) [denoted as Ifnar(f/f) herein]) resulted in enhanced DENV replication in vivo. The administration of subneutralizing amounts of cross-reactive anti-DENV monoclonal antibodies to LysM Cre(+) Ifnar(f/f) mice prior to infection with DENV serotype 2 or 3 resulted in antibody-dependent enhancement (ADE) of infection with many of the characteristics associated with severe DENV disease in humans, including plasma leakage, hypercytokinemia, liver injury, hemoconcentration, and thrombocytopenia. Notably, the pathogenesis of severe DENV-2 or DENV-3 infection in LysM Cre(+) Ifnar(f/f) mice was blocked by pre- or postexposure administration of a bispecific dual-affinity retargeting molecule (DART) or an optimized RIG-I receptor agonist that stimulates innate immune responses. Our findings establish a more immunocompetent animal model of ADE of infection with multiple DENV serotypes in which disease is inhibited by treatment with broad-spectrum antibody derivatives or innate immune stimulatory agents. Although dengue virus (DENV) infects hundreds of millions of people annually and results in morbidity and mortality on a global scale, there are no approved antiviral treatments or vaccines. Part of the difficulty in evaluating therapeutic candidates is the lack of small animal models that are permissive to DENV and recapitulate the clinical features

4. Regression mixture models : Does modeling the covariance between independent variables and latent classes improve the results?

NARCIS (Netherlands)

Lamont, A.E.; Vermunt, J.K.; Van Horn, M.L.

2016-01-01

Regression mixture models are increasingly used as an exploratory approach to identify heterogeneity in the effects of a predictor on an outcome. In this simulation study, we tested the effects of violating an implicit assumption often made in these models; that is, independent variables in the

5. To define climate politics: role of uncertainties and lessons of economic modelling

International Nuclear Information System (INIS)

Fortin, E.

2004-12-01

After an overview of the state of the art of scientific knowledge on the climate change phenomenon, considered according to its three components (climate, damages and socio-economy), and a focus on the nature and extent of scientific uncertainties (a typology of these is presented), this research presents and analyses the results of techno-economic models dealing with the Kyoto Protocol's implementation costs. It aims at determining the economic stakes of action and identifying the most efficient means of intervention. It analyses the results of bottom-up and top-down models and tries to identify robustness and uncertainties using the previously introduced uncertainty typology. It presents and analyses long-term scenarios, and highlights the role of energy systems in determining emissions. Finally, the author presents the various categories of instruments that policy makers can use to implement a mitigation policy

6. Field and Model Study to Define Baseline Conditions of Beached Oil Tar Balls along Florida’s First Coast

OpenAIRE

Peter Bacopoulos; James David Lambert; Mary Hertz; Luis Montoya; Terry Smith

2014-01-01

Anecdotal data are currently the best data available to describe baseline conditions of beached oil tar balls on Florida’s First Coast beaches. This study combines field methods and numerical modeling to define a data-driven knowledge base of oil tar ball baseline conditions. Outcomes from the field study include an established methodology for field data collection and laboratory testing of beached oil tar balls, spatial maps of collected samples and analysis of the data as to transport/wash-...

7. Defining the Meaning of a Major Modeling and Simulation Change as Applied to Accreditation

Science.gov (United States)

2012-12-12

intelligent life) × (fraction of intelligent civilizations that conduct extraterrestrial communication). Similarly, the QQR formula is a product of...requires) V&V over the system life-cycle. Unlike many other V&V methodologies, this one has a particular focus on the user (of the software and/or...modeling effort up to a fully integrated systems engineering activity that is conducted throughout the full life-cycle of any system (software or hardware

8. Improved variable reduction in partial least squares modelling by Global-Minimum Error Uninformative-Variable Elimination.

Science.gov (United States)

Andries, Jan P M; Vander Heyden, Yvan; Buydens, Lutgarde M C

2017-08-22

The calibration performance of Partial Least Squares regression (PLS) can be improved by eliminating uninformative variables. For PLS, many variable elimination methods have been developed. One is Uninformative-Variable Elimination for PLS (UVE-PLS). However, the number of variables retained by UVE-PLS is usually still large. In UVE-PLS, variable elimination is repeated as long as the root mean squared error of cross validation (RMSECV) is decreasing. The set of variables at this first local minimum is retained. In this paper, a modification of UVE-PLS is proposed and investigated, in which UVE is repeated until no further reduction in variables is possible, followed by a search for the global RMSECV minimum. The method is called Global-Minimum Error Uninformative-Variable Elimination for PLS, denoted as GME-UVE-PLS or simply GME-UVE. After each iteration, the predictive ability of the PLS model, built with the remaining variable set, is assessed by RMSECV. The variable set with the global RMSECV minimum is then finally selected. The goal is to obtain smaller sets of variables with similar or improved predictability compared with those from the classical UVE-PLS method. The performance of the GME-UVE-PLS method is investigated using four data sets, i.e. a simulated set, NIR and NMR spectra, and a theoretical molecular descriptors set, resulting in twelve profile-response (X-y) calibrations. The selective and predictive performances of the models resulting from GME-UVE-PLS are statistically compared to those from UVE-PLS and 1-step UVE using one-sided paired t-tests. The results demonstrate that variable reduction with the proposed GME-UVE-PLS method usually eliminates significantly more variables than classical UVE-PLS, while the predictive abilities of the resulting models are better. With GME-UVE-PLS, fewer uninformative variables, i.e. variables without a chemical meaning for the response, may be retained than with UVE-PLS. The selectivity of the classical UVE method
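The core loop, continuing elimination past the first local RMSECV minimum and keeping the variable subset at the global minimum, can be sketched as follows. For self-containedness this sketch substitutes ordinary least squares for PLS and a crude coefficient-magnitude importance measure for UVE's reliability criterion; it is an illustration of the global-minimum search, not the authors' implementation:

```python
import numpy as np

def rmsecv(X, y, n_folds=5):
    """Root mean squared error of k-fold cross-validation, least squares fit."""
    idx = np.arange(len(y))
    err = np.empty(len(y))
    for f in range(n_folds):
        test = idx[f::n_folds]
        train = np.setdiff1d(idx, test)
        coef, *_ = np.linalg.lstsq(X[train], y[train], rcond=None)
        err[test] = y[test] - X[test] @ coef
    return np.sqrt(np.mean(err ** 2))

def global_min_elimination(X, y):
    """Eliminate the weakest variable until one remains; return the subset
    at the global RMSECV minimum (GME-UVE-like search)."""
    active = list(range(X.shape[1]))
    best = (rmsecv(X, y), list(active))
    while len(active) > 1:
        coef, *_ = np.linalg.lstsq(X[:, active], y, rcond=None)
        active.pop(int(np.argmin(np.abs(coef))))  # drop least informative
        score = rmsecv(X[:, active], y)
        if score < best[0]:
            best = (score, list(active))
    return best

rng = np.random.default_rng(3)
X = rng.normal(size=(120, 10))
y = 2.0 * X[:, 0] + 1.0 * X[:, 3] + rng.normal(scale=0.3, size=120)
score, selected = global_min_elimination(X, y)  # should keep variables 0 and 3
```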

9. a modified intervention model for gross domestic product variable

African Journals Online (AJOL)

the economy. He continued that the excess money made from other sectors can be invested in agriculture so as to obtain a diversified economy. ... The effects of these exogenous variables showed that if the exogenous and intervention variables are brought under control, the inflationary process is brought under control as well.

10. White dwarf models of supernovae and cataclysmic variables

International Nuclear Information System (INIS)

Nomoto, K.; Hashimoto, M.

1986-01-01

If the accreting white dwarf increases its mass to the Chandrasekhar mass, it will either explode as a Type I supernova or collapse to form a neutron star. In fact, there is good agreement between the exploding white dwarf model for Type I supernovae and observations. We describe various types of evolution of accreting white dwarfs as a function of binary parameters (i.e., composition, mass, and age of the white dwarf, its companion star, and mass accretion rate), and discuss the conditions for the precursors of exploding or collapsing white dwarfs and their relevance to cataclysmic variables. Particular attention is given to helium star cataclysmics, which might be the precursors of some Type I supernovae or ultrashort-period X-ray binaries. Finally we present new evolutionary calculations using the updated nuclear reaction rates for the formation of O+Ne+Mg white dwarfs, and discuss the composition structure and its relevance to the model for neon novae. 61 refs., 14 figs

11. Learning atomic human actions using variable-length Markov models.

Science.gov (United States)

Liang, Yu-Ming; Shih, Sheng-Wen; Shih, Arthur Chun-Chieh; Liao, Hong-Yuan Mark; Lin, Cheng-Chung

2009-02-01

Visual analysis of human behavior has generated considerable interest in the field of computer vision because of its wide spectrum of potential applications. Human behavior can be segmented into atomic actions, each of which indicates a basic and complete movement. Learning and recognizing atomic human actions are essential to human behavior analysis. In this paper, we propose a framework for handling this task using variable-length Markov models (VLMMs). The framework is comprised of the following two modules: a posture labeling module and a VLMM atomic action learning and recognition module. First, a posture template selection algorithm, based on a modified shape context matching technique, is developed. The selected posture templates form a codebook that is used to convert input posture sequences into discrete symbol sequences for subsequent processing. Then, the VLMM technique is applied to learn the training symbol sequences of atomic actions. Finally, the constructed VLMMs are transformed into hidden Markov models (HMMs) for recognizing input atomic actions. This approach combines the advantages of the excellent learning function of a VLMM and the fault-tolerant recognition ability of an HMM. Experiments on realistic data demonstrate the efficacy of the proposed system.
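The defining idea of a VLMM, conditioning on the longest context actually observed in training rather than on a fixed order, can be sketched with simple count tables. This is an illustrative toy, not the paper's posture-symbol pipeline or its VLMM-to-HMM transformation:

```python
from collections import defaultdict

class VLMM:
    """Minimal variable-length Markov model: count next-symbol frequencies
    for every context up to max_depth, then predict from the longest
    context seen in training."""
    def __init__(self, max_depth=3):
        self.max_depth = max_depth
        self.counts = defaultdict(lambda: defaultdict(int))

    def fit(self, sequence):
        for i, sym in enumerate(sequence):
            for d in range(self.max_depth + 1):
                if i - d < 0:
                    break
                self.counts[tuple(sequence[i - d:i])][sym] += 1
        return self

    def predict(self, history):
        # back off from the longest usable context to the empty context
        for d in range(min(self.max_depth, len(history)), -1, -1):
            context = tuple(history[len(history) - d:])
            if context in self.counts:
                nxt = self.counts[context]
                return max(nxt, key=nxt.get)
        return None

model = VLMM(max_depth=3).fit(list("abcabcabd"))
```

For example, after the context "ab" the model has seen "c" twice and "d" once, so it predicts "c"; an unseen context backs off to shorter ones, down to the unconditional symbol frequencies.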

12. A Correlation-Based Transition Model using Local Variables. Part 1; Model Formation

Science.gov (United States)

Menter, F. R.; Langtry, R. B.; Likki, S. R.; Suzen, Y. B.; Huang, P. G.; Volker, S.

2006-01-01

A new correlation-based transition model has been developed, which is based strictly on local variables. As a result, the transition model is compatible with modern computational fluid dynamics (CFD) approaches, such as unstructured grids and massively parallel execution. The model is based on two transport equations, one for intermittency and one for the transition onset criterion in terms of momentum-thickness Reynolds number. The proposed transport equations do not attempt to model the physics of the transition process (unlike, e.g., turbulence models) but rather form a framework for the implementation of correlation-based models into general-purpose CFD methods.

13. Mutations and modeling of the chromatin remodeler CHD8 define an emerging autism etiology

Directory of Open Access Journals (Sweden)

Rebecca A Barnard

2015-12-01

Full Text Available Autism Spectrum Disorder (ASD) is a common neurodevelopmental disorder with a strong but complex genetic component. Recent family-based exome-sequencing strategies have identified recurrent de novo mutations at specific genes, providing strong evidence for ASD risk, but also highlighting the extreme genetic heterogeneity of the disorder. However, disruptions in these genes converge on key molecular pathways early in development. In particular, functional enrichment analyses have found a bias towards genes involved in transcriptional regulation, such as chromatin regulators. Here we review recent genetic, animal model, co-expression network, and functional genomics studies relating to the high-confidence ASD risk gene CHD8. CHD8, a chromatin remodeling factor, may serve as a master regulator of a common ASD etiology. Individuals with a CHD8 mutation show an ASD subtype that includes similar physical characteristics, such as macrocephaly, and prolonged GI problems including recurrent constipation. Similarly, animal models of CHD8 disruption exhibit enlarged head circumference and reduced gut motility phenotypes. Systems biology approaches suggest CHD8 and other candidate ASD risk genes are enriched during mid-fetal development, which may represent a critical time window in ASD etiology. Transcription profiles from cell and primary tissue models of early development indicate that CHD8 may also positively regulate other candidate ASD risk genes through both direct and indirect means. However, continued study is needed to elucidate the mechanism of regulation as well as to identify which CHD8 targets are most relevant to ASD risk. Overall, these initial studies suggest the potential for common ASD etiologies and the development of personalized treatments in the future.

14. Defining the next generation modeling of coastal ecotone dynamics in response to global change

Science.gov (United States)

Jiang, Jiang; DeAngelis, Donald L.; Teh, Su-Y; Krauss, Ken W.; Wang, Hongqing; Haidong, Li; Smith, Thomas; Koh, Hock L.

2016-01-01

Coastal ecosystems are especially vulnerable to global change, e.g., sea level rise (SLR) and extreme events. Over the past century, global change has resulted in salt-tolerant (halophytic) plant species migrating into upland salt-intolerant (glycophytic) dominated habitats along major rivers and large wetland expanses along the coast. While habitat transitions can be abrupt, modeling the specific drivers of abrupt change between halophytic and glycophytic vegetation is not a simple task. Correlative studies, which dominate the literature, are unlikely to establish ultimate causation for habitat shifts and do not generate strong predictive capacity for coastal land managers and climate change adaptation exercises. In this paper, we first review possible drivers of ecotone shifts for coastal wetlands, our understanding of which has expanded rapidly in recent years. Any exogenous factor that increases growth or establishment of halophytic species will favor the ecotone boundary moving upslope. However, internal feedbacks between vegetation and the environment, through which vegetation modifies the local microhabitat (e.g., by changing salinity or surface elevation), can either help the system become resilient to future changes or strengthen ecotone migration. Following this idea, we review a succession of models that have provided progressively better insight into the relative importance of internal positive feedbacks versus external environmental factors. We end by developing a theoretical model to show that both abrupt environmental gradients and internal positive feedbacks can generate the sharp ecotonal boundaries that we commonly see, and we demonstrate that the responses to gradual global change (e.g., SLR) can be quite diverse.

15. Defining New Therapeutics Using a More Immunocompetent Mouse Model of Antibody-Enhanced Dengue Virus Infection

Science.gov (United States)

Pinto, Amelia K.; Brien, James D.; Lam, Chia-Ying Kao; Johnson, Syd; Chiang, Cindy; Hiscott, John; Sarathy, Vanessa V.; Barrett, Alan D.; Shresta, Sujan

2015-01-01

With over 3.5 billion people at risk and approximately 390 million human infections per year, dengue virus (DENV) disease strains health care resources worldwide. Previously, we and others established models for DENV pathogenesis in mice that completely lack subunits of the receptors (Ifnar and Ifngr) for type I and type II interferon (IFN) signaling; however, the utility of these models is limited by the pleiotropic effects of these cytokines on innate and adaptive immune system development and function. Here, we demonstrate that the specific deletion of Ifnar expression on subsets of murine myeloid cells (LysM Cre+ Ifnar^flox/flox [denoted as Ifnar^f/f herein]) resulted in enhanced DENV replication in vivo. The administration of subneutralizing amounts of cross-reactive anti-DENV monoclonal antibodies to LysM Cre+ Ifnar^f/f mice prior to infection with DENV serotype 2 or 3 resulted in antibody-dependent enhancement (ADE) of infection with many of the characteristics associated with severe DENV disease in humans, including plasma leakage, hypercytokinemia, liver injury, hemoconcentration, and thrombocytopenia. Notably, the pathogenesis of severe DENV-2 or DENV-3 infection in LysM Cre+ Ifnar^f/f mice was blocked by pre- or postexposure administration of a bispecific dual-affinity retargeting molecule (DART) or an optimized RIG-I receptor agonist that stimulates innate immune responses. Our findings establish a more immunocompetent animal model of ADE of infection with multiple DENV serotypes in which disease is inhibited by treatment with broad-spectrum antibody derivatives or innate immune stimulatory agents. PMID:26374123

16. Ecosystem Services Modeling as a Tool for Defining Priority Areas for Conservation.

Science.gov (United States)

Duarte, Gabriela Teixeira; Ribeiro, Milton Cezar; Paglia, Adriano Pereira

2016-01-01

Conservationists often have difficulty obtaining financial and social support for protected areas that do not demonstrate their benefits for society. Therefore, ecosystem services have gained importance in conservation science in the last decade, as these services provide further justification for appropriate management and conservation of natural systems. We used InVEST software and a set of GIS procedures to quantify, spatialize, and evaluate the overlap between ecosystem services (carbon stock and sediment retention) and a biodiversity proxy (habitat quality). In addition, we propose a method that serves as an initial approach to a priority-area selection process. The method considers the synergism between ecosystem services and biodiversity conservation. Our study region is the Iron Quadrangle, an important Brazilian mining province and a conservation priority area located at the interface of two biodiversity hotspots, the Cerrado and Atlantic Forest biomes. The resultant priority area for the maintenance of the highest values of ecosystem services and habitat quality was about 13% of the study area. Among those priority areas, 30% are already within established strictly protected areas, and 12% are in sustainable-use protected areas. Following the transparent and highly replicable method we propose in this study, conservation planners can better determine which areas fulfill multiple goals and can locate the trade-offs in the landscape. We also took a step towards improving the habitat quality model with a topography parameter: in areas of very rugged topography, geomorphometric barriers to anthropogenic impacts and to species movement must be considered, and we must think beyond linear distances. Moreover, we used a model that considers the tree mortality caused by edge effects in the estimation of carbon stock. We found low spatial congruence among the modeled services, mostly because of the pattern of sediment retention distribution.

18. Defining dimensions of research readiness: a conceptual model for primary care research networks.

Science.gov (United States)

Carr, Helen; de Lusignan, Simon; Liyanage, Harshana; Liaw, Siaw-Teng; Terry, Amanda; Rafi, Imran

2014-11-26

19. Does internal variability change in response to global warming? A large ensemble modelling study of tropical rainfall

Science.gov (United States)

Milinski, S.; Bader, J.; Jungclaus, J. H.; Marotzke, J.

2017-12-01

There is some consensus on mean-state changes of rainfall under global warming; changes in internal variability, on the other hand, are more difficult to analyse and have not been discussed as much, despite their importance for understanding changes in extreme events such as droughts or floods. We analyse changes in rainfall variability in the tropical Atlantic region. We use a 100-member ensemble of historical (1850-2005) model simulations with the Max Planck Institute for Meteorology Earth System Model (MPI-ESM1) to identify changes in internal rainfall variability. To investigate the effects of global warming on internal variability, we employ an additional ensemble of model simulations with stronger external forcing (1% CO2 increase per year, same integration length as the historical simulations) with 68 ensemble members. The focus of our study is the oceanic Atlantic ITCZ. We find that the internal variability of rainfall over the tropical Atlantic does change due to global warming and that these changes in variability are larger than changes in the mean state in some regions. By splitting the total variance into patterns of variability, we see that the variability on the southern flank of the ITCZ becomes more dominant, i.e., it explains a larger fraction of the total variance in a warmer climate. In agreement with previous studies, we find that changes in the mean state show an increase and narrowing of the ITCZ. The large ensembles allow a statistically robust differentiation between the changes in variability that can be explained by internal variability and those that can be attributed to the external forcing. Furthermore, we argue that internal variability in a transient climate is only well defined in the ensemble domain and not in the temporal domain, which requires the use of a large ensemble.
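
The point that internal variability in a transient climate is well defined only in the ensemble domain can be sketched numerically: with many members sharing the same external forcing, the ensemble mean at each time step estimates the forced response, and the spread across members estimates the internal variability. The synthetic "rainfall" data below are invented purely for illustration, not MPI-ESM1 output.

```python
import numpy as np

rng = np.random.default_rng(0)
n_members, n_years = 100, 156  # e.g. an 1850-2005 historical large ensemble

# Synthetic anomalies: a common forced trend plus member-specific noise.
forced = np.linspace(0.0, 1.0, n_years)              # shared forced signal
internal = rng.normal(0.0, 0.5, (n_members, n_years))  # internal variability
ensemble = forced + internal

# Forced response: averaging across members cancels internal variability.
forced_estimate = ensemble.mean(axis=0)

# Internal variability: spread across members at each time step,
# i.e. variance taken in the ensemble domain, not the temporal domain.
internal_std = ensemble.std(axis=0, ddof=1)

print(forced_estimate[-1], internal_std.mean())
```

A single realization would confound the trend and the noise; the ensemble dimension is what makes the separation statistically robust, which is the methodological argument of the abstract.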

20. A Gross-Margin Model for Defining Technoeconomic Benchmarks in the Electroreduction of CO2.

Science.gov (United States)

Verma, Sumit; Kim, Byoungsu; Jhong, Huei-Ru Molly; Ma, Sichao; Kenis, Paul J A

2016-08-09

We introduce a gross-margin model to evaluate the technoeconomic feasibility of producing different C1-C2 chemicals such as carbon monoxide, formic acid, methanol, methane, ethanol, and ethylene through the electroreduction of CO2. Key performance benchmarks, including the maximum operating cell potential (Vmax), minimum operating current density (jmin), Faradaic efficiency (FE), and catalyst durability (tcatdur), are derived. The Vmax values obtained for the different chemicals indicate that CO and HCOOH are the most economically viable products. Selectivity requirements suggest that the coproduction of an economically less feasible chemical (CH3OH, CH4, C2H5OH, C2H4) with a more feasible chemical (CO, HCOOH) can be a strategy to offset the Vmax requirements for individual products. Other performance requirements such as jmin and tcatdur are also derived, and the feasibility of alternative process designs and operating conditions is evaluated. © 2016 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
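
A benchmark of this general kind can be sketched from Faraday's law: the electrical energy per kilogram of product scales with electrons transferred and cell potential, and a maximum cell potential falls out of capping electricity's share of the product's market price. This is a simplified stand-in for the paper's model; the prices, cost-share threshold, and function names below are hypothetical placeholders, not values from the paper.

```python
F = 96485.0  # Faraday constant, C per mol of electrons

def electricity_cost_per_kg(n_electrons, cell_v, faradaic_eff,
                            molar_mass_kg, elec_price_per_kwh):
    """Electrical energy cost ($) to make 1 kg of product."""
    joules_per_mol = n_electrons * F * cell_v / faradaic_eff
    joules_per_kg = joules_per_mol / molar_mass_kg
    return joules_per_kg / 3.6e6 * elec_price_per_kwh  # 3.6e6 J per kWh

def v_max(n_electrons, faradaic_eff, molar_mass_kg, market_price_per_kg,
          elec_price_per_kwh, elec_cost_share=0.5):
    """Largest cell potential at which electricity stays below a chosen
    share of the product's market price (a gross-margin constraint)."""
    budget = market_price_per_kg * elec_cost_share          # $/kg for power
    joules_per_kg_budget = budget / elec_price_per_kwh * 3.6e6
    return joules_per_kg_budget * molar_mass_kg * faradaic_eff / (n_electrons * F)

# CO from CO2 (2 electrons per CO); all numbers illustrative only.
vmax_co = v_max(n_electrons=2, faradaic_eff=0.9, molar_mass_kg=0.028,
                market_price_per_kg=0.6, elec_price_per_kwh=0.03)
print(round(vmax_co, 2))  # about 4.7 V with these placeholder inputs
```

Tightening the cost share, raising the electricity price, or lowering the Faradaic efficiency all push Vmax down, which mirrors how the abstract's benchmarks trade off against one another.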

1. Simulation of dynamic response of nuclear power plant based on user-defined model in PSASP

International Nuclear Information System (INIS)

Zhao Jie; Liu Dichen; Xiong Li; Chen Qi; Du Zhi; Lei Qingsheng

2010-01-01

Based on the energy transformation regularities in the physical processes of pressurized water reactors (PWRs), PWR NPP models are established in PSASP (Power System Analysis Software Package), which are applicable to calculating the dynamic process of a PWR NPP and power system transient stability. The power dynamic characteristics of the PWR NPP are simulated and analyzed, including the PWR self-stability, self-regulation, and power step responses under the power regulation system. The results indicate that the PWR NPP can withstand certain external disturbances and a 10% Pn step under negative temperature feedback. The regulation speed of PWR power can reach 5% Pn/min under the power regulation system, which meets the requirement of peak regulation in the power grid. (authors)

2. Defining immunological impact and therapeutic benefit of mild heating in a murine model of arthritis.

Directory of Open Access Journals (Sweden)

Chen-Ting Lee

Full Text Available Traditional treatments, including a variety of thermal therapies, have been known since ancient times to provide relief from rheumatoid arthritis (RA) symptoms. However, a general absence of information on how heating affects molecular or immunological targets relevant to RA has limited heat treatment (HT) to the category of treatments known as "alternative therapies". In this study, we evaluated the effectiveness of mild HT in a collagen-induced arthritis (CIA) model, which has been used in many previous studies to evaluate newer pharmacological approaches for the treatment of RA, and tested whether inflammatory immune activity was altered. We also compared the effect of HT to methotrexate, a well-characterized pharmacological treatment for RA. CIA mice were treated with either a single HT for several hours or daily 30-minute HT. Disease progression and macrophage infiltration were evaluated. We found that both HT regimens significantly reduced arthritis disease severity and macrophage infiltration into inflamed joints. Surprisingly, HT was as efficient as methotrexate in controlling disease progression. At the molecular level, HT suppressed TNF-α while increasing production of IL-10. We also observed an induction of HSP70 and a reduction in both NF-κB and HIF-1α in inflamed tissues. Additionally, using activated macrophages in vitro, we found that HT reduced production of pro-inflammatory cytokines, an effect correlated with induction of HSF-1 and HSP70 and inhibition of NF-κB and STAT activation. Our findings demonstrate a significant therapeutic benefit of HT in controlling arthritis progression in a clinically relevant mouse model, with an efficacy similar to methotrexate. Mechanistically, HT targets highly relevant anti-inflammatory pathways, which strongly supports its further study for use in clinical trials for RA.

3. High-frequency variability of extragalactic radio sources. II: A statistical multi-frequency model of variability

Science.gov (United States)

Magdziarz, P.; Machalski, J.

1993-08-01

The numerical model of extragalactic variability proposed by Rys & Machalski (1990) is extended to multi-epoch and multi-frequency sampling of an imaginary population of variable sources. Variability observations gathered in Paper I of this series (cf. Introduction) are used to constrain free parameters of the model. The fits to the observations are satisfactory if the distributions of burst amplitude, duration, and recurrence time between consecutive bursts of radiation are frequency-dependent. The model shows how the characteristics of variability depend on the time filter applied in observations. In particular we found that (1) the intrinsic amplitude A of the flux-density fluctuations varies with frequency as ν^(0.41±0.14); (2) the mean timescale of variability, characterizing the total population of variables, varies as ν^(-0.9±0.1), and should increase from about 10-15 yr at 10.8 GHz to about 80-120 yr at 1.4 GHz; this behavior is explained by a loss of identity and dissolution of a burst within slowly decaying previous bursts; (3) the "intrinsic" fraction of variables (i.e., a fraction independent of the time filter applied) should increase with frequency from 1.4 to 10.8 GHz (e.g., about five times for sources with apparent fluctuation > 0.3); (4) (i) the mean timescale observed in a sample of variables, ω̄, is shorter than that in the total population, and (ii) the observed fraction of variables is lower than the intrinsic one, if the time base of observations is shorter than (0.4-1.0) (depending on the number of sampling epochs).
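
The quoted timescale endpoints can be checked against the quoted frequency scaling with two lines of arithmetic: fit a power law through the two points and read off the exponent. The midpoint values below are read off the ranges given in the abstract (12.5 yr for 10-15 yr, 100 yr for 80-120 yr); the comparison is illustrative, not a re-derivation of the paper's fit.

```python
import math

# Mean variability timescale at two observing frequencies (GHz, yr),
# taking the midpoints of the ranges quoted in the abstract.
nu1, tau1 = 10.8, 12.5
nu2, tau2 = 1.4, 100.0

# Power-law exponent through the two points: tau ∝ nu^exponent.
exponent = math.log(tau2 / tau1) / math.log(nu2 / nu1)
print(round(exponent, 2))  # about -1.02, close to the quoted nu^(-0.9 +/- 0.1)
```

The negative exponent confirms that a timescale which grows toward lower frequency is the behavior being described.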

4. Modeling and performance analysis for composite network–compute service provisioning in software-defined cloud environments

Directory of Open Access Journals (Sweden)

Qiang Duan

2015-08-01

Full Text Available The crucial role of networking in Cloud computing calls for a holistic vision of both networking and computing systems that leads to composite network–compute service provisioning. Software-Defined Networking (SDN) is a fundamental advancement in networking that enables network programmability. SDN and software-defined compute/storage systems form a Software-Defined Cloud Environment (SDCE) that may greatly facilitate composite network–compute service provisioning to Cloud users. Therefore, networking and computing systems need to be modeled and analyzed as composite service provisioning systems in order to obtain a thorough understanding of service performance in SDCEs. In this paper, a novel approach for modeling composite network–compute service capabilities and a technique for evaluating composite network–compute service performance are developed. The analytic method proposed in this paper is general and agnostic to service implementation technologies, and is thus applicable to a wide variety of network–compute services in SDCEs. The results obtained in this paper provide useful guidelines for federated control and management of networking and computing resources to achieve Cloud service performance guarantees.

5. Theoretical Investigations of Well-Defined Graphene Nanostructures: Catalysis, Spectroscopy, and Development of Novel Fragment-Based Models

Science.gov (United States)

Noffke, Benjamin W.

Carbon materials have the potential to replace some precious metals in renewable energy applications. These materials are particularly attractive because of the elemental abundance and relatively low nuclear mass of carbon, implying economically feasible and lightweight materials. Targeted design of carbon materials is hindered by the lack of the fundamental understanding required to tailor their properties for the desired application. Moreover, most available synthetic methods for creating carbon materials involve harsh conditions that limit control of the resulting structure; without a well-defined structure, the system is too complex for fundamental studies to be definitive. This work seeks to gain fundamental understanding through the development and application of efficient computational models for these systems, in conjunction with experiments performed on soluble, well-defined graphene nanostructures prepared by our group using a bottom-up synthetic approach. Theory is used to determine mechanistic details for well-defined carbon systems in applications of catalysis and electrochemical transformations. The resulting computational models explain previous observations of carbon materials well and provide suggestions for future directions. However, as the nanostructures get larger, the computational cost can become prohibitive. To reduce the computational scaling of quantum chemical calculations, a new fragmentation scheme has been developed that addresses the challenges of fragmenting conjugated molecules. By selecting fragments that retain important structural characteristics of graphene, a more efficient method is achieved. The new method paves the way for an automated, systematic fragmentation scheme for graphene molecules.

6. A molecular modeling approach defines a new group of Nodulin 26-like aquaporins in plants

International Nuclear Information System (INIS)

Rouge, Pierre; Barre, Annick

2008-01-01

The three-dimensional models built for the Nod26-like aquaporins all exhibit the typical α-helical fold of other aquaporins, containing the two ar/R and NPA constriction filters along the central water channel. Besides these structural homologies, they readily differ with respect to the amino acid residues forming the ar/R selective filter. According to these discrepancies in both the hydrophilicity and pore size of the ar/R filter, Nod26-like aquaporins can be distributed into three subgroups corresponding to NIP-I, NIP-II and a third subgroup of Nod26-like aquaporins exhibiting a highly hydrophilic and widely open filter. However, all Nod26-like aquaporins display a bipartite distribution of electrostatic charges along the water channel, with an electropositive extracellular vestibular portion followed by an electronegative cytosolic vestibular portion. The specific transport of water, non-ionic solutes (glycerol, urea, ammonia), ions (NH4+) and gas (NH3) across the Nod26-like aquaporins obviously depends on the electrostatic and conformational properties of their central water channel.

7. Defining Our Clinical Practice: The Identification of Genetic Counseling Outcomes Utilizing the Reciprocal Engagement Model.

Science.gov (United States)

Redlinger-Grosse, Krista; Veach, Patricia McCarthy; Cohen, Stephanie; LeRoy, Bonnie S; MacFarlane, Ian M; Zierhut, Heather

2016-04-01

The need for evidence-based medicine, including comparative effectiveness studies and patient-centered outcomes research, has become a major healthcare focus. To date, a comprehensive list of genetic counseling outcomes, as espoused by genetic counselors, has not been established and thus, identification of outcomes unique to genetic counseling services has become a priority for the National Society of Genetic Counselors (NSGC). The purpose of this study was to take a critical first step at identifying a more comprehensive list of genetic counseling outcomes. This paper describes the results of a focus group study using the Reciprocal-Engagement Model (REM) as a framework to characterize patient-centered outcomes of genetic counseling clinical practice. Five focus groups were conducted with 27 peer nominated participants who were clinical genetic counselors, genetic counseling program directors, and/or outcomes researchers in genetic counseling. Members of each focus group were asked to identify genetic counseling outcomes for four to five of the 17 goals of the REM. A theory-driven, thematic analysis of focus group data yielded 194 genetic counseling outcomes across the 17 goals. Participants noted some concerns about how genetic counseling outcomes will be measured and evaluated given varying stakeholders and the long-term nature of genetic concerns. The present results provide a list of outcomes for use in future genetic counseling outcomes research and for empirically-supported clinical interventions.

8. A molecular modeling approach defines a new group of Nodulin 26-like aquaporins in plants.

Science.gov (United States)

Rougé, Pierre; Barre, Annick

2008-02-29

The three-dimensional models built for the Nod26-like aquaporins all exhibit the typical alpha-helical fold of other aquaporins, containing the two ar/R and NPA constriction filters along the central water channel. Besides these structural homologies, they readily differ with respect to the amino acid residues forming the ar/R selective filter. According to these discrepancies in both the hydrophilicity and pore size of the ar/R filter, Nod26-like aquaporins can be distributed into three subgroups corresponding to NIP-I, NIP-II and a third subgroup of Nod26-like aquaporins exhibiting a highly hydrophilic and widely open filter. However, all Nod26-like aquaporins display a bipartite distribution of electrostatic charges along the water channel, with an electropositive extracellular vestibular portion followed by an electronegative cytosolic vestibular portion. The specific transport of water, non-ionic solutes (glycerol, urea, ammonia), ions (NH4+) and gas (NH3) across the Nod26-like aquaporins obviously depends on the electrostatic and conformational properties of their central water channel.

9. A well-defined model system for the chromium-catalyzed selective oligomerization of ethylene.

Science.gov (United States)

Monillas, Wesley H; Young, John F; Yap, Glenn P A; Theopold, Klaus H

2013-07-07

The chromium(I) dinitrogen complex [(i-Pr2Ph)2nacnacCr]2(μ-η²:η²-N2) catalyzes the selective trimerization of ethylene to 1-hexene at ambient pressure and temperature, and in the absence of any cocatalyst. After the conversion of the substrate, the catalyst cleanly converts to another chromium(I) species, namely [(i-Pr2Ph)2nacnacCr]2(μ-η²:η²-C2H4), which is not catalytically active. Binuclear metallacycles containing Cr(II) have been prepared as candidates for catalytically active intermediates; however, they are not kinetically competent to explain the catalysis. Turning thus to mononuclear metallacycles featuring Cr(III), a chromacyclopentane, a chromacyclopentene and a chromacyclopentadiene have been prepared as models of catalytic intermediates. Of these, the latter also catalyzes the trimerization of ethylene. These results support the proposal that selective ethylene oligomerization catalysis involves an interplay between Cr(I) ethylene complexes and mononuclear Cr(III) metallacycles.

10. Chemical Atmosphere-Snow-Sea Ice Interactions: defining future research in the field, lab and modeling

Science.gov (United States)

Frey, Markus

2015-04-01

The air-snow-sea ice system plays an important role in the global cycling of nitrogen, halogens, trace metals and carbon, including greenhouse gases (e.g., the CO2 air-sea flux), and therefore also influences climate. Its impact on atmospheric composition is illustrated, for example, by dramatic ozone and mercury depletion events, which occur within or close to the sea ice zone (SIZ) mostly during polar spring and are catalysed by halogens released from SIZ ice, snow or aerosol. Recent field campaigns in the high Arctic (e.g. BROMEX, OASIS) and Antarctic (Weddell Sea cruises) highlight the importance of snow on sea ice as a chemical reservoir and reactor, even during polar night. However, many processes, participating chemical species and their interactions are still poorly understood and/or lack any representation in current models. Furthermore, recent lab studies provide a lot of detail on the chemical environment and processes but need to be integrated much better to improve our understanding of a rapidly changing natural environment. During a 3-day workshop held in Cambridge, UK, in October 2013, more than 60 scientists from 15 countries who work on the physics, chemistry or biology of the atmosphere-snow-sea ice system discussed the research status and the challenges that need to be addressed in the near future. In this presentation I will give a summary of the main research questions identified during this workshop, as well as ways forward to answer them through a community-based interdisciplinary approach.

11. PHT3D-UZF: A Reactive Transport Model for Variably-Saturated Porous Media.

Science.gov (United States)

Wu, Ming Zhi; Post, Vincent E A; Salmon, S Ursula; Morway, Eric D; Prommer, Henning

2016-01-01

A modified version of the MODFLOW/MT3DMS-based reactive transport model PHT3D was developed to extend current reactive transport capabilities to the variably-saturated component of the subsurface system and incorporate diffusive reactive transport of gaseous species. Referred to as PHT3D-UZF, this code incorporates flux terms calculated by MODFLOW's unsaturated-zone flow (UZF1) package. A volume-averaged approach similar to the method used in UZF-MT3DMS was adopted. The PHREEQC-based computation of chemical processes within PHT3D-UZF in combination with the analytical solution method of UZF1 allows for comprehensive reactive transport investigations (i.e., biogeochemical transformations) that jointly involve saturated and unsaturated zone processes. Intended for regional-scale applications, UZF1 simulates downward-only flux within the unsaturated zone. The model was tested by comparing simulation results with those of existing numerical models. The comparison was performed for several benchmark problems that cover a range of important hydrological and reactive transport processes. A 2D simulation scenario was defined to illustrate the geochemical evolution following dewatering in a sandy acid sulfate soil environment. Other potential applications include the simulation of biogeochemical processes in variably-saturated systems that track the transport and fate of agricultural pollutants, nutrients, natural and xenobiotic organic compounds and micropollutants such as pharmaceuticals, as well as the evolution of isotope patterns. © 2015, National Ground Water Association.

12. Defining ATM-Independent Functions of the Mre11 Complex with a Novel Mouse Model.

Science.gov (United States)

Balestrini, Alessia; Nicolas, Laura; Yang-Lott, Katherine; Guryanova, Olga A; Levine, Ross L; Bassing, Craig H; Chaudhuri, Jayanta; Petrini, John H J

2016-02-01

The Mre11 complex (Mre11, Rad50, and Nbs1) occupies a central node of the DNA damage response (DDR) network and is required for ATM activation in response to DNA damage. Hypomorphic alleles of MRE11 and NBS1 confer embryonic lethality in ATM-deficient mice, indicating that the complex exerts ATM-independent functions that are essential when ATM is absent. To delineate those functions, a conditional ATM allele (ATM(flox)) was crossed to hypomorphic NBS1 mutants (Nbs1(ΔB/ΔB) mice). Nbs1(ΔB/ΔB) Atm(-/-) hematopoietic cells derived by crossing to vav(cre) were viable in vivo. Nbs1(ΔB/ΔB) Atm(-/-) (VAV) mice exhibited a pronounced defect in double-strand break repair and completely penetrant early onset lymphomagenesis. In addition to repair defects observed, fragile site instability was noted, indicating that the Mre11 complex promotes genome stability upon replication stress in vivo. The data suggest combined influences of the Mre11 complex on DNA repair, as well as the responses to DNA damage and DNA replication stress. A novel mouse model was developed, by combining a vav(cre)-inducible ATM knockout mouse with an NBS1 hypomorphic mutation, to analyze ATM-independent functions of the Mre11 complex in vivo. These data show that the DNA repair, rather than DDR signaling functions of the complex, is acutely required in the context of ATM deficiency to suppress genome instability and lymphomagenesis. ©2015 American Association for Cancer Research.

13. Transient Kinetics Define a Complete Kinetic Model for Protein Arginine Methyltransferase 1*

Science.gov (United States)

Hu, Hao; Luo, Cheng; Zheng, Y. George

2016-01-01

Protein arginine methyltransferases (PRMTs) are the enzymes responsible for posttranslational methylation of protein arginine residues in eukaryotic cells, particularly within the histone tails. A detailed mechanistic model of PRMT-catalyzed methylation is currently lacking, but it is essential for understanding the functions of PRMTs in various cellular pathways and for efficient design of PRMT inhibitors as potential treatments for a range of human diseases. In this work, we used stopped-flow fluorescence in combination with global kinetic simulation to dissect the transient kinetics of PRMT1, the predominant type I arginine methyltransferase. Several important mechanistic insights were revealed. The cofactor and the peptide substrate bound to PRMT1 in a random manner and then followed a kinetically preferred pathway to generate the catalytic enzyme-cofactor-substrate ternary complex. Product release proceeded in an ordered fashion, with peptide dissociation followed by release of the byproduct S-adenosylhomocysteine. Importantly, the dissociation rate of the monomethylated intermediate from the ternary complex was much faster than the methyl transfer. Such a result provided direct evidence for distributive arginine dimethylation, which means the monomethylated substrate has to be released to solution and rebind with PRMT1 before it undergoes further methylation. In addition, cofactor binding involved a conformational transition, likely an open-to-closed conversion of the active site pocket. Further, the histone H4 peptide bound to the two active sites of the PRMT1 homodimer with differential affinities, suggesting a negative cooperativity mechanism of substrate binding. These findings provide a new mechanistic understanding of how PRMTs interact with their substrates and transfer methyl groups. PMID:27834681
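The distributive mechanism described above (the monomethylated intermediate must dissociate and rebind before the second methylation) lends itself to a small mass-action simulation. The sketch below encodes a minimal two-step scheme; all rate constants, concentrations, and the forward Euler integrator are illustrative assumptions, not values or methods from the study:

```python
def simulate(k_on=10.0, k_off=1.0, k_cat=0.5, t_end=50.0, dt=1e-3,
             E0=0.1, S0=1.0):
    """Minimal distributive methylation scheme (all parameters hypothetical):
    E + S <-> ES -> E + M   (monomethyl M released to solution)
    E + M <-> EM -> E + D   (rebinding required before dimethylation)"""
    E, S, ES, M, EM, D = E0, S0, 0.0, 0.0, 0.0, 0.0
    for _ in range(int(t_end / dt)):
        # mass-action rates for both binding/catalysis steps
        v1f, v1b, v1c = k_on * E * S, k_off * ES, k_cat * ES
        v2f, v2b, v2c = k_on * E * M, k_off * EM, k_cat * EM
        E += (-v1f + v1b + v1c - v2f + v2b + v2c) * dt
        S += (-v1f + v1b) * dt
        ES += (v1f - v1b - v1c) * dt
        M += (v1c - v2f + v2b) * dt
        EM += (v2f - v2b - v2c) * dt
        D += v2c * dt
    return {"E": E, "S": S, "ES": ES, "M": M, "EM": EM, "D": D}
```

With dissociation (k_off) faster than methyl transfer (k_cat), as the stopped-flow data indicate, free monomethyl intermediate accumulates in solution before dimethylation, which is the kinetic signature of a distributive mechanism.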

14. Modeling Variable Phanerozoic Oxygen Effects on Physiology and Evolution.

Science.gov (United States)

Graham, Jeffrey B; Jew, Corey J; Wegner, Nicholas C

2016-01-01

Geochemical approximation of Earth's atmospheric O2 level over geologic time prompts hypotheses linking hyper- and hypoxic atmospheres to transformative events in the evolutionary history of the biosphere. Such correlations, however, remain problematic due to the relative imprecision of the timing and scope of oxygen change and the looseness of its overlay on the chronology of key biotic events such as radiations, evolutionary innovation, and extinctions. There are nevertheless general attributions of atmospheric oxygen concentration to key evolutionary changes among groups having a primary dependence upon oxygen diffusion for respiration. These include the occurrence of Devonian hypoxia and the accentuation of air-breathing dependence leading to the origin of vertebrate terrestriality, the occurrence of Carboniferous-Permian hyperoxia and the major radiation of early tetrapods and the origins of insect flight and gigantism, and the Mid-Late Permian oxygen decline accompanying the Permian extinction. However, because of variability between and error within different atmospheric models, there is little basis for postulating correlations outside the Late Paleozoic. Other problems arising in the correlation of paleo-oxygen with significant biological events include tendencies to ignore the role of blood pigment affinity modulation in maintaining homeostasis, the slow rates of O2 change that would have allowed for adaptation, and significant respiratory and circulatory modifications that can and do occur without changes in atmospheric oxygen. The purpose of this paper is thus to refocus thinking about basic questions central to the biological and physiological implications of O2 change over geological time.

15. Variable Width Riparian Model Enhances Landscape and Watershed Condition

Science.gov (United States)

Abood, S. A.; Spencer, L.

2017-12-01

Riparian areas are ecotones that represent about 1% of the USFS-administered landscape and contribute to numerous valuable ecosystem functions such as wildlife habitat, stream water quality and flows, bank stability and protection against erosion, and values related to diversity, aesthetics and recreation. Riparian zones capture the transitional area between terrestrial and aquatic ecosystems, with specific vegetation and soil characteristics that provide critical values/functions and are very responsive to changes in land management activities and uses. Two staff areas at the US Forest Service have coordinated on a two-phase project to support the National Forests in their planning revision efforts and to address rangeland riparian business needs at the Forest Plan and Allotment Management Plan levels. The first part of the project will include a national fine-scale (USGS HUC-12 watersheds) inventory of riparian areas on National Forest Service lands in the western United States with riparian land cover, utilizing GIS capabilities and open-source geospatial data. The second part of the project will apply riparian land cover change and assessment based on selected indicators to assess and monitor riparian areas on an annual/5-year cycle. This approach recognizes the dynamic and transitional nature of riparian areas by accounting for hydrologic, geomorphic and vegetation data as inputs into the delineation process. The results suggest that incorporating functional variable-width riparian mapping within watershed management planning can improve riparian protection and restoration. The application of the Riparian Buffer Delineation Model (RBDM) approach can provide the agency Watershed Condition Framework (WCF) with observed riparian area condition on an annual basis and on multiple scales. The use of this model to map moderate to low gradient systems of sufficient width in conjunction with an understanding of the influence of distinctive landscape

16. Variable Renewable Energy in Long-Term Planning Models: A Multi-Model Perspective

Energy Technology Data Exchange (ETDEWEB)

Cole, Wesley [National Renewable Energy Lab. (NREL), Golden, CO (United States); Frew, Bethany [National Renewable Energy Lab. (NREL), Golden, CO (United States); Mai, Trieu [National Renewable Energy Lab. (NREL), Golden, CO (United States); Sun, Yinong [National Renewable Energy Lab. (NREL), Golden, CO (United States); Bistline, John [Electric Power Research Inst. (EPRI), Knoxville, TN (United States); Blanford, Geoffrey [Electric Power Research Inst. (EPRI), Knoxville, TN (United States); Young, David [Electric Power Research Inst. (EPRI), Knoxville, TN (United States); Marcy, Cara [U.S. Energy Information Administration, Washington, DC (United States); Namovicz, Chris [U.S. Energy Information Administration, Washington, DC (United States); Edelman, Risa [US Environmental Protection Agency (EPA), Washington, DC (United States); Meroney, Bill [US Environmental Protection Agency (EPA), Washington, DC (United States); Sims, Ryan [US Environmental Protection Agency (EPA), Washington, DC (United States); Stenhouse, Jeb [US Environmental Protection Agency (EPA), Washington, DC (United States); Donohoo-Vallett, Paul [Dept. of Energy (DOE), Washington DC (United States)

2017-11-01

Long-term capacity expansion models of the U.S. electricity sector have long been used to inform electric sector stakeholders and decision-makers. With the recent surge in variable renewable energy (VRE) generators — primarily wind and solar photovoltaics — the need to appropriately represent VRE generators in these long-term models has increased. VRE generators are especially difficult to represent for a variety of reasons, including their variability, uncertainty, and spatial diversity. This report summarizes the analyses and model experiments that were conducted as part of two workshops on modeling VRE for national-scale capacity expansion models. It discusses the various methods for treating VRE among four modeling teams from the Electric Power Research Institute (EPRI), the U.S. Energy Information Administration (EIA), the U.S. Environmental Protection Agency (EPA), and the National Renewable Energy Laboratory (NREL). The report reviews the findings from the two workshops and emphasizes the areas where there is still need for additional research and development on analysis tools to incorporate VRE into long-term planning and decision-making. This research is intended to inform the energy modeling community on the modeling of variable renewable resources, and is not intended to advocate for or against any particular energy technologies, resources, or policies.

17. Define Project

DEFF Research Database (Denmark)

2005-01-01

"Project" is a key concept in IS management. The word is frequently used in textbooks and standards. Yet we seldom find a precise definition of the concept. This paper discusses how to define the concept of a project. The proposed definition covers both heavily formalized projects and informally...

18. Stratified flows with variable density: mathematical modelling and numerical challenges.

Science.gov (United States)

2017-04-01

Stratified flows appear in a wide variety of fundamental problems in the hydrological and geophysical sciences. They range from hyperconcentrated floods carrying sediment that causes collapse, landslides and debris flows, to suspended material in turbidity currents where turbulence is a key process. Stratified flows also exhibit variable horizontal density: depending on the case, density varies according to the volumetric concentration of the different components or species, which can represent transported or suspended materials or soluble substances. Multilayer approaches based on the shallow water equations provide suitable models but are not free from difficulties when moving to the numerical resolution of the governing equations. Considering the variety of temporal and spatial scales, transfer of mass and energy among layers may strongly differ from one case to another. As a consequence, in order to provide accurate solutions, very high order methods of proven quality are demanded. Under these complex scenarios it is necessary to verify not only that the numerical solution provides the expected order of accuracy but also that it converges to the physically based solution, which is not an easy task. To this purpose, this work focuses on the use of energy-balanced augmented solvers, in particular the Augmented Roe Flux ADER scheme. References: J. Murillo, P. García-Navarro, Wave Riemann description of friction terms in unsteady shallow flows: Application to water and mud/debris floods. J. Comput. Phys. 231 (2012) 1963-2001. J. Murillo, B. Latorre, P. García-Navarro, A Riemann solver for unsteady computation of 2D shallow flows with variable density. J. Comput. Phys. 231 (2012) 4775-4807. A. Navas-Montilla, J. Murillo, Energy balanced numerical schemes with very high order. The Augmented Roe Flux ADER scheme. Application to the shallow water equations, J. Comput. Phys. 290 (2015) 188-218. A. Navas-Montilla, J. Murillo, Asymptotically and exactly energy balanced augmented flux

19. Application of soft computing based hybrid models in hydrological variables modeling: a comprehensive review

Science.gov (United States)

Fahimi, Farzad; Yaseen, Zaher Mundher; El-shafie, Ahmed

2017-05-01

Since the middle of the twentieth century, artificial intelligence (AI) models have been used widely in engineering and science problems. Water resource variable modeling and prediction are among the most challenging issues in water engineering. The artificial neural network (ANN) is a common approach used to tackle this problem with viable and efficient models. Numerous ANN models have been successfully developed to achieve more accurate results. In the current review, different ANN models in water resource applications and hydrological variable predictions are reviewed and outlined. In addition, recent hybrid models and their structures, input preprocessing, and optimization techniques are discussed and the results are compared with similar previous studies. Moreover, to achieve a comprehensive view of the literature, many articles that applied ANN models together with other techniques are included. Consequently, the coupling procedure, model evaluation, and performance of hybrid models relative to conventional ANN models are assessed, as well as the taxonomy and structures of hybrid ANN models. Finally, current challenges and recommendations for future research are indicated and new hybrid approaches are proposed.

20. Estimating net present value variability for deterministic models

NARCIS (Netherlands)

van Groenendaal, W.J.H.

1995-01-01

For decision makers the variability in the net present value (NPV) of an investment project is an indication of the project's risk. So-called risk analysis is one way to estimate this variability. However, risk analysis requires knowledge about the stochastic character of the inputs. For large,
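The risk-analysis approach the abstract refers to can be illustrated with a plain Monte Carlo experiment: draw the uncertain cash flows from an assumed distribution, push each draw through the NPV formula, and summarize the spread. All figures below (initial outlay, cash-flow distribution, discount rate) are invented for illustration:

```python
import random

def npv(rate, cashflows):
    # NPV = sum of CF_t / (1 + r)^t, with t = 0 for the initial outlay
    return sum(cf / (1 + rate) ** t for t, cf in enumerate(cashflows))

def simulate_npv(n=10_000, rate=0.08, invest=-1_000.0, seed=1):
    """Monte Carlo estimate of NPV mean and standard deviation for a
    hypothetical project: one outlay, five noisy annual cash flows."""
    rng = random.Random(seed)
    results = []
    for _ in range(n):
        flows = [invest] + [rng.gauss(300.0, 60.0) for _ in range(5)]
        results.append(npv(rate, flows))
    mean = sum(results) / n
    sd = (sum((x - mean) ** 2 for x in results) / (n - 1)) ** 0.5
    return mean, sd

mean_npv, sd_npv = simulate_npv()
```

The standard deviation of the simulated NPVs is the variability a decision maker reads as project risk; the point of the paper is to estimate that spread for deterministic models without requiring full stochastic knowledge of the inputs.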

1. Surgical outcomes of laparoscopic hysterectomy with concomitant endometriosis without bowel or bladder dissection : A cohort analysis to define a case-mix variable

NARCIS (Netherlands)

Sandberg, Evelien M.; Driessen, Sara R C; Bak, Evelien A.T.; van Geloven, Nan; Berger, Judith P.; Smeets, Mathilde J.G.H.; Rhemrev, Johann P T; Jansen, F.W.

2018-01-01

Background: Pelvic endometriosis is often mentioned as one of the variables influencing surgical outcomes of laparoscopic hysterectomy (LH). However, its additional surgical risks have not been well established. The aim of this study was to analyze to what extent concomitant endometriosis

2. A landscape model for predicting potential natural vegetation of the Olympic Peninsula USA using boundary equations and newly developed environmental variables.

Science.gov (United States)

Jan A. Henderson; Robin D. Lesher; David H. Peter; Chris D. Ringo

2011-01-01

A gradient-analysis-based model and grid-based map are presented that use the potential vegetation zone as the object of the model. Several new variables are presented that describe the environmental gradients of the landscape at different scales. Boundary algorithms are conceptualized, and then defined, that describe the environmental boundaries between vegetation...

3. Psychology defined.

Science.gov (United States)

Henriques, Gregg R

2004-12-01

A new form of knowledge technology is used to diagnose psychology's epistemological woes and provide a solution to the difficulties. The argument presented is that psychology has traditionally spanned two separate but intimately related problems: (a) the problem of animal behavior and (b) the problem of human behavior. Accordingly, the solution offered divides the field into two broad, logically consistent domains. The first domain is psychological formalism, which is defined as the science of mind, corresponds to animal behavior, and consists of the basic psychological sciences. The second domain is human psychology, which is defined as the science of human behavior at the individual level and is proposed as a hybrid that exists between psychological formalism and the social sciences. 2004 Wiley Periodicals, Inc.

4. Virtual optical network provisioning with unified service logic processing model for software-defined multidomain optical networks

Science.gov (United States)

Zhao, Yongli; Li, Shikun; Song, Yinan; Sun, Ji; Zhang, Jie

2015-12-01

Hierarchical control architecture is designed for software-defined multidomain optical networks (SD-MDONs), and a unified service logic processing model (USLPM) is first proposed for various applications. USLPM-based virtual optical network (VON) provisioning process is designed, and two VON mapping algorithms are proposed: random node selection and per controller computation (RNS&PCC) and balanced node selection and hierarchical controller computation (BNS&HCC). Then an SD-MDON testbed is built with OpenFlow extension in order to support optical transport equipment. Finally, VON provisioning service is experimentally demonstrated on the testbed along with performance verification.

5. Models that predict standing crop of stream fish from habitat variables: 1950-85.

Science.gov (United States)

K.D. Fausch; C.L. Hawkes; M.G. Parsons

1988-01-01

We reviewed mathematical models that predict standing crop of stream fish (number or biomass per unit area or length of stream) from measurable habitat variables and classified them by the types of independent habitat variables found significant, by mathematical structure, and by model quality. Habitat variables were of three types and were measured on different scales...
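Models of this kind typically regress standing crop (or its logarithm) on measured habitat variables. As a minimal illustration, the sketch below fits a one-predictor least-squares line to synthetic data; the habitat relation, coefficients, and noise level are invented, not taken from the reviewed studies:

```python
import random

def fit_simple_ols(xs, ys):
    """Ordinary least squares for y = a + b*x."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sxx = sum((x - mx) ** 2 for x in xs)
    b = sxy / sxx
    a = my - b * mx
    return a, b

# Hypothetical habitat relation: log(standing crop) vs. a habitat variable
rng = random.Random(7)
true_a, true_b = 1.0, 0.8
xs = [rng.uniform(0.0, 2.0) for _ in range(200)]
ys = [true_a + true_b * x + rng.gauss(0.0, 0.1) for x in xs]
a_hat, b_hat = fit_simple_ols(xs, ys)
```

With several habitat variables the same normal-equations idea extends to the multiple regressions the review classifies by structure and quality.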

6. On the "early-time" evolution of variables relevant to turbulence models for Rayleigh-Taylor instability

Energy Technology Data Exchange (ETDEWEB)

Rollin, Bertrand [Los Alamos National Laboratory; Andrews, Malcolm J [Los Alamos National Laboratory

2010-01-01

We present our progress toward setting initial conditions in variable density turbulence models. In particular, we concentrate our efforts on the BHR turbulence model for turbulent Rayleigh-Taylor instability. Our approach is to predict profiles of relevant parameters before the fully turbulent regime and use them as initial conditions for the turbulence model. We use an idealized model of the mixing between two interpenetrating fluids to define the initial profiles for the turbulence model parameters. Velocities and volume fractions used in the idealized mixing model are obtained respectively from a set of ordinary differential equations modeling the growth of the Rayleigh-Taylor instability and from an idealization of the density profile in the mixing layer. A comparison between predicted initial profiles for the turbulence model parameters and initial profiles of the parameters obtained from low Atwood number three dimensional simulations show reasonable agreement.
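The abstract does not reproduce the set of ordinary differential equations used for the Rayleigh-Taylor growth, but the general idea can be sketched with a standard buoyancy-drag model for the mixing-layer height. The Atwood number, drag coefficient, and integrator below are illustrative assumptions, not the authors' equations:

```python
def rt_mixing_height(atwood=0.5, g=9.81, drag=2.0, h0=1e-3, dt=1e-4, t_end=1.0):
    """Buoyancy-drag sketch: dh/dt = v, dv/dt = A*g - drag*v^2/h.
    At late times the solution approaches the self-similar h ~ alpha*A*g*t^2."""
    h, v, t = h0, 0.0, 0.0
    while t < t_end:
        a = atwood * g - drag * v * v / h   # buoyancy minus drag
        v += a * dt
        h += v * dt
        t += dt
    return h, v
```

Profiles of turbulence-model variables (e.g., the BHR parameters) would then be initialized from such a pre-turbulent growth history rather than from arbitrary values, which is the spirit of the approach described above.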

7. Approaches for modeling within subject variability in pharmacometric count data analysis: dynamic inter-occasion variability and stochastic differential equations.

Science.gov (United States)

Deng, Chenhui; Plan, Elodie L; Karlsson, Mats O

2016-06-01

Parameter variation in pharmacometric analysis studies can be characterized as within subject parameter variability (WSV) in pharmacometric models. WSV has previously been successfully modeled using inter-occasion variability (IOV), but also stochastic differential equations (SDEs). In this study, two approaches, dynamic inter-occasion variability (dIOV) and adapted stochastic differential equations, were proposed to investigate WSV in pharmacometric count data analysis. These approaches were applied to published count models for seizure counts and Likert pain scores. Both approaches improved the model fits significantly. In addition, stochastic simulation and estimation were used to explore further the capability of the two approaches to diagnose and improve models where existing WSV is not recognized. The results of simulations confirmed the gain in introducing WSV as dIOV and SDEs when parameters vary randomly over time. Further, the approaches were also informative as diagnostics of model misspecification, when parameters changed systematically over time but this was not recognized in the structural model. The proposed approaches in this study offer strategies to characterize WSV and are not restricted to count data.
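As a generative sketch of the IOV idea: each subject's count rate carries a subject-level random effect plus an occasion-level perturbation, and observations are drawn from a Poisson model. All distributions and magnitudes are illustrative assumptions, not the published seizure-count or Likert-score models:

```python
import math
import random

def simulate_counts(n_subjects=100, n_occasions=4, base_rate=5.0,
                    sd_bsv=0.3, sd_iov=0.2, seed=42):
    """Poisson counts whose rate carries between-subject variability (eta)
    and inter-occasion variability (kappa)."""
    rng = random.Random(seed)
    data = []
    for i in range(n_subjects):
        eta = rng.gauss(0.0, sd_bsv)            # subject-level random effect
        for j in range(n_occasions):
            kappa = rng.gauss(0.0, sd_iov)      # occasion-level effect (IOV)
            lam = base_rate * math.exp(eta + kappa)
            # Poisson draw via Knuth's inversion (stdlib has no Poisson sampler)
            L, k, p = math.exp(-lam), 0, 1.0
            while True:
                p *= rng.random()
                if p <= L:
                    break
                k += 1
            data.append((i, j, k))
    return data
```

Letting kappa vary within an occasion (the dynamic-IOV idea) or replacing it with a continuously evolving stochastic process (the SDE idea) are the two extensions the paper compares.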

8. Next-Generation Model-based Variability Management: Languages and Tools

OpenAIRE

Acher , Mathieu; Heymans , Patrick; Collet , Philippe; Lahire , Philippe

2012-01-01

Variability modelling and management is a key activity in a growing number of software engineering contexts, from software product lines to dynamic adaptive systems. Feature models are the de facto standard to formally represent and reason about commonality and variability of a software system. This tutorial aims at presenting the next generation of feature modelling languages and tools, directly applicable to a wide range of model-based variability problems and application...

9. Exact solutions to a nonlinear dispersive model with variable coefficients

International Nuclear Information System (INIS)

Yin Jun; Lai Shaoyong; Qing Yin

2009-01-01

A mathematical technique based on an auxiliary differential equation and the symbolic computation system Maple is employed to investigate a prototypical and nonlinear K(n, n) equation with variable coefficients. The exact solutions to the equation are constructed analytically under various circumstances. It is shown that the variable coefficients and the exponent appearing in the equation determine the quantitative change in the physical structures of the solutions.

10. Modelling for Fuel Optimal Control of a Variable Compression Engine

OpenAIRE

Nilsson, Ylva

2007-01-01

Variable compression engines are a means to meet the demand for lower fuel consumption. A high compression ratio results in high engine efficiency, but also increases the knock tendency. On conventional engines with a fixed compression ratio, knock is avoided by retarding the ignition angle. The variable compression engine offers an extra dimension in knock control, since both the ignition angle and the compression ratio can be adjusted. The central question is thus for what combination of compression ra...

11. Modeling and designing of variable-period and variable-pole-number undulator

Directory of Open Access Journals (Sweden)

I. Davidyuk

2016-02-01

The concept of a permanent-magnet variable-period undulator (VPU) was proposed several years ago and has found few implementations so far. VPUs have some advantages over conventional undulators, e.g., a wider range of radiation wavelength tuning and the option to increase the number of poles for shorter periods. Both these advantages will be realized in the VPU now under development at Budker INP. In this paper, we present the results of 2D and 3D magnetic field simulations and discuss some design features of this VPU.

12. Functional physiology of the human terminal antrum defined by high-resolution electrical mapping and computational modeling.

Science.gov (United States)

Berry, Rachel; Miyagawa, Taimei; Paskaranandavadivel, Niranchan; Du, Peng; Angeli, Timothy R; Trew, Mark L; Windsor, John A; Imai, Yohsuke; O'Grady, Gregory; Cheng, Leo K

2016-11-01

High-resolution (HR) mapping has been used to study gastric slow-wave activation; however, the specific characteristics of antral electrophysiology remain poorly defined. This study applied HR mapping and computational modeling to define functional human antral physiology. HR mapping was performed in 10 subjects using flexible electrode arrays (128-192 electrodes; 16-24 cm²) arranged from the pylorus to mid-corpus. Anatomical registration was by photographs and anatomical landmarks. Slow-wave parameters were computed, and resultant data were incorporated into a computational fluid dynamics (CFD) model of gastric flow to calculate impact on gastric mixing. In all subjects, extracellular mapping demonstrated normal aboral slow-wave propagation and a region of increased amplitude and velocity in the prepyloric antrum. On average, the high-velocity region commenced 28 mm proximal to the pylorus, and activation ceased 6 mm from the pylorus. Within this region, velocity increased by 0.2 mm/s per mm of tissue, from the mean 3.3 ± 0.1 mm/s to 7.5 ± 0.6 mm/s. These results indicate that human terminal antral contraction is controlled by a short region of rapid, high-amplitude slow-wave activity. Distal antral wave acceleration plays a major role in antral flow and mixing, increasing particle strain and trituration. Copyright © 2016 the American Physiological Society.
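The reported figures are internally consistent, as a quick arithmetic check shows: the high-velocity region spans 28 - 6 = 22 mm of tissue, so a gradient of 0.2 mm/s per mm on top of the 3.3 mm/s mean predicts a terminal velocity close to the measured 7.5 ± 0.6 mm/s:

```python
# Numbers taken from the abstract above
region_start_mm = 28.0   # where the high-velocity region commences (proximal to pylorus)
region_end_mm = 6.0      # where activation ceases
gradient = 0.2           # velocity increase, mm/s per mm of tissue
baseline = 3.3           # mean slow-wave velocity, mm/s

span_mm = region_start_mm - region_end_mm        # 22 mm of tissue
predicted_peak = baseline + gradient * span_mm   # 3.3 + 0.2 * 22 = 7.7 mm/s
```

The predicted 7.7 mm/s sits within the reported 7.5 ± 0.6 mm/s band.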

13. Defining social inclusion of people with intellectual and developmental disabilities: an ecological model of social networks and community participation.

Science.gov (United States)

Simplican, Stacy Clifford; Leader, Geraldine; Kosciulek, John; Leahy, Michael

2015-03-01

Social inclusion is an important goal for people with intellectual and developmental disabilities, families, service providers, and policymakers; however, the concept of social inclusion remains unclear, largely due to multiple and conflicting definitions in research and policy. We define social inclusion as the interaction between two major life domains: interpersonal relationships and community participation. We then propose an ecological model of social inclusion that includes individual, interpersonal, organizational, community, and socio-political factors. We identify four areas of research that our ecological model of social inclusion can move forward: (1) organizational implementation of social inclusion; (2) social inclusion of people with intellectual and developmental disabilities living with their families, (3) social inclusion of people along a broader spectrum of disability, and (4) the potential role of self-advocacy organizations in promoting social inclusion. Copyright © 2014. Published by Elsevier Ltd.

14. Intercomparison of model response and internal variability across climate model ensembles

Science.gov (United States)

Kumar, Devashish; Ganguly, Auroop R.

2017-10-01

Characterization of climate uncertainty at regional scales over near-term planning horizons (0-30 years) is crucial for climate adaptation. Climate internal variability (CIV) dominates climate uncertainty over decadal prediction horizons at stakeholders' scales (regional to local). In the literature, CIV has been characterized indirectly using projections of climate change from multi-model ensembles (MME) instead of directly using projections from multiple initial condition ensembles (MICE), primarily because an adequate number of initial condition (IC) runs was not available for any climate model. Nevertheless, the recent availability of a significant number of IC runs from one climate model makes it possible for the first time to characterize CIV directly from climate model projections and to perform a sensitivity analysis of the dominance of CIV relative to model response variability (MRV). Here, we measure relative agreement (a dimensionless number with values ranging between 0 and 1, inclusive; a high value indicates less variability and vice versa) among MME and MICE and find that CIV is lower than MRV for all projection time horizons and spatial resolutions for precipitation and temperature. However, CIV exhibits greater dominance over MRV for seasonal and annual mean precipitation at higher latitudes, where signals of climate change are expected to emerge sooner. Furthermore, precipitation exhibits large uncertainties and a rapid decline in relative agreement from global to continental, regional, or local scales for MICE compared to MME. The fractional contribution of uncertainty due to CIV is invariant for precipitation and decreases for temperature as lead time progresses towards the end of the century.

15. Defining Cyberbullying.

Science.gov (United States)

Englander, Elizabeth; Donnerstein, Edward; Kowalski, Robin; Lin, Carolyn A; Parti, Katalin

2017-11-01

Is cyberbullying essentially the same as bullying, or is it a qualitatively different activity? The lack of a consensual, nuanced definition has limited the field's ability to examine these issues. Evidence suggests that being a perpetrator of one is related to being a perpetrator of the other; furthermore, strong relationships can also be noted between being a victim of either type of attack. It also seems that both types of social cruelty have a psychological impact, although the effects of being cyberbullied may be worse than those of being bullied in a traditional sense (evidence here is by no means definitive). A complicating factor is that the 3 characteristics that define bullying (intent, repetition, and power imbalance) do not always translate well into digital behaviors. Qualities specific to digital environments often render cyberbullying and bullying different in circumstances, motivations, and outcomes. To make significant progress in addressing cyberbullying, certain key research questions need to be addressed. These are as follows: How can we define, distinguish between, and understand the nature of cyberbullying and other forms of digital conflict and cruelty, including online harassment and sexual harassment? Once we have a functional taxonomy of the different types of digital cruelty, what are the short- and long-term effects of exposure to or participation in these social behaviors? What are the idiosyncratic characteristics of digital communication that users can be taught? Finally, how can we apply this information to develop and evaluate effective prevention programs? Copyright © 2017 by the American Academy of Pediatrics.

16. A Logistic Based Mathematical Model to Optimize Duplicate Elimination Ratio in Content Defined Chunking Based Big Data Storage System

Directory of Open Access Journals (Sweden)

Longxiang Wang

2016-07-01

Deduplication is an efficient data reduction technique, and it is used to mitigate the problem of huge data volume in big data storage systems. Content defined chunking (CDC) is the most widely used algorithm in deduplication systems. The expected chunk size is an important parameter of CDC, and it influences the duplicate elimination ratio (DER) significantly. We collected two realistic datasets to perform an experiment. The experimental results showed that the current approach of empirically setting the expected chunk size to 4 KB or 8 KB cannot optimize DER. Therefore, we present a logistic-based mathematical model to reveal the hidden relationship between the expected chunk size and the DER. This model provides a theoretical basis for optimizing DER by setting the expected chunk size reasonably. We used the collected datasets to verify this model. The experimental results showed that the R2 values, which describe the goodness of fit, are above 0.9, validating the correctness of this mathematical model. Based on the DER model, we discussed how to make DER close to the optimum by setting the expected chunk size reasonably.
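Content-defined chunking places chunk boundaries where a rolling hash of the recent bytes matches a bit pattern, so the expected chunk size is set by the number of mask bits rather than by fixed offsets. The sketch below uses a simplistic shift-add hash rather than the Rabin fingerprints typical of production deduplicators; the sizes and the hash are illustrative assumptions:

```python
import random

def cdc_chunks(data, expected=4096, min_size=1024, max_size=16384):
    """Content-defined chunking sketch. A boundary is declared whenever the
    low bits of a rolling hash match a fixed pattern; with `expected` a power
    of two, the match probability is roughly 1/expected per byte."""
    mask = expected - 1              # requires expected to be a power of two
    chunks, start, h = [], 0, 0
    for i, b in enumerate(data):
        h = ((h << 1) + b) & 0xFFFFFFFF
        size = i - start + 1
        if size >= max_size or (size >= min_size and (h & mask) == mask):
            chunks.append(data[start:i + 1])
            start, h = i + 1, 0
    if start < len(data):
        chunks.append(data[start:])  # final partial chunk
    return chunks

rng = random.Random(0)
data = bytes(rng.randrange(256) for _ in range(100_000))
chunks = cdc_chunks(data)
```

Raising `expected` (more mask bits) yields larger chunks and tends to lower the DER, which is the trade-off the logistic model above is meant to capture.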

17. A Model of the Dynamic Error as a Measurement Result of Instruments Defining the Parameters of Moving Objects

Directory of Open Access Journals (Sweden)

Dichev D.

2014-08-01

Full Text Available The present paper considers a new model for the formation of the dynamic error inertial component. It is very effective in the analysis and synthesis of measuring instruments positioned on moving objects and measuring their movement parameters. The block diagram developed within this paper is used as a basis for defining the mathematical model. The block diagram is based on the set-theoretic description of the measuring system, its input and output quantities and the process of dynamic error formation. The model reflects the specific nature of the formation of the dynamic error inertial component. In addition, the model submits to the logical interrelation and sequence of the physical processes that form it. The effectiveness, usefulness and advantages of the model proposed are rooted in the wide range of possibilities it provides in relation to the analysis and synthesis of those measuring instruments, the formulation of algorithms and optimization criteria, as well as the development of new intelligent measuring systems with improved accuracy characteristics in dynamic mode.

18. Forecasting Macroeconomic Variables using Neural Network Models and Three Automated Model Selection Techniques

DEFF Research Database (Denmark)

Kock, Anders Bredahl; Teräsvirta, Timo

In this paper we consider the forecasting performance of a well-defined class of flexible models, the so-called single hidden-layer feedforward neural network models. A major aim of our study is to find out whether they, due to their flexibility, are as useful tools in economic forecasting as some...... previous studies have indicated. When forecasting with neural network models one faces several problems, all of which influence the accuracy of the forecasts. First, neural networks are often hard to estimate due to their highly nonlinear structure. In fact, their parameters are not even globally...... on the linearisation idea: the Marginal Bridge Estimator and Autometrics. Second, one must decide whether forecasting should be carried out recursively or directly. Comparisons of these two methods exist for linear models and here these comparisons are extended to neural networks. Finally, a nonlinear model...
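The model class in question computes a linear combination of sigmoidal hidden units. A forward-pass sketch follows, with random parameters standing in for the estimation step, which is exactly the hard, highly nonlinear part the paper discusses; all names and values are illustrative:

```python
import math
import random

def nn_forecast(x, params):
    """Single hidden-layer feedforward net:
    y = b0 + sum_j w_j * tanh(c_j + a_j . x)"""
    b0, units = params
    return b0 + sum(
        w * math.tanh(c + sum(a_k * x_k for a_k, x_k in zip(a, x)))
        for w, c, a in units)

def random_params(n_inputs, n_hidden, seed=0):
    # each hidden unit: output weight w, bias c, input weight vector a
    rng = random.Random(seed)
    units = [(rng.gauss(0, 1), rng.gauss(0, 1),
              [rng.gauss(0, 1) for _ in range(n_inputs)])
             for _ in range(n_hidden)]
    return rng.gauss(0, 1), units

params = random_params(n_inputs=3, n_hidden=5)
y = nn_forecast([0.2, -1.0, 0.5], params)
```

Recursive multi-step forecasting feeds the output back in as a lagged input, while direct forecasting fits a separate model per horizon; that comparison is the one the paper extends from linear models to neural networks.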

19. An individual-based model simulating goat response variability and long-term herd performance.

Science.gov (United States)

Puillet, L; Martin, O; Sauvant, D; Tichit, M

2010-12-01

Finding ways of increasing the efficiency of production systems is a key issue of sustainability. System efficiency is based on long-term individual efficiency, which is highly variable and management driven. To study the effects of management on herd and individual efficiency, we developed the model simulation of goat herd management (SIGHMA). This dynamic model is individual-based and represents the interactions between technical operations (relative to replacement, reproduction and feeding) and individual biological processes (performance dynamics based on energy partitioning and production potential). It simulates outputs at both herd and goat levels over 20 years. A farmer's production project (i.e. a targeted milk production pattern) is represented by configuring the herd into female groups reflecting the organisation of kidding periods. Each group is managed by discrete events applying decision rules to simulate the carrying out of technical operations. The animal level is represented by a set of individual goat models. Each model simulates a goat's biological dynamics through its productive life. It integrates the variability of biological responses driven by genetic scaling parameters (milk production potential and mature body weight), by the regulations of energy partitioning among physiological functions and by responses to diet energy defined by the feeding strategy. A sensitivity analysis shows that herd efficiency was mainly affected by feeding management and to a lesser extent by the herd production potential. The same effects were observed on herd milk feed costs with an even lower difference between production potential and feeding management. SIGHMA was used in a virtual experiment to observe the effects of feeding strategies on herd and individual performances. We found that overfeeding led to a herd production increase and a feed cost decrease. However, this apparent increase in efficiency at the herd level (as feed cost decreased) was related
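The individual-based structure described above — goats with genetic scaling parameters, group-level management rules, and herd-level aggregation over 20 years — can be illustrated with a minimal sketch. Everything below (the class names, the 800 kg production potential, the age-6 culling rule, the feed-response scaling) is a hypothetical simplification for illustration, not SIGHMA's actual parameterisation:

```python
import random

class Goat:
    """Minimal individual goat model; parameters are illustrative only."""
    def __init__(self, rng):
        self.potential = rng.gauss(800, 100)   # kg milk/year production potential
        self.age = 0

    def lactate(self, feed_level):
        # Biological response scaled by individual potential and feed supply
        return max(0.0, self.potential * min(1.0, feed_level))

def simulate_herd(n=50, years=20, feed_level=0.9, seed=1):
    """Aggregate yearly herd milk yield from individual goat responses."""
    rng = random.Random(seed)
    herd = [Goat(rng) for _ in range(n)]
    annual_yields = []
    for _ in range(years):
        annual_yields.append(sum(g.lactate(feed_level) for g in herd))
        for g in herd:
            g.age += 1
        # Hypothetical replacement rule: cull goats older than 6, recruit young ones
        herd = [g for g in herd if g.age <= 6] + \
               [Goat(rng) for g in herd if g.age > 6]
    return annual_yields

yields = simulate_herd()
```

Comparing runs with different `feed_level` values would mimic the virtual experiment on feeding strategies reported in the abstract.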

20. Modelling and Multi-Variable Control of Refrigeration Systems

DEFF Research Database (Denmark)

Larsen, Lars Finn Slot; Holm, J. R.

2003-01-01

In this paper a dynamic model of a 1:1 refrigeration system is presented. The main modelling effort has been concentrated on a lumped parameter model of a shell and tube condenser. The model has shown good resemblance with experimental data from a test rig, regarding as well the static as the dyn......

1. A comparison of elastic-plastic and variable modulus-cracking constitutive models for prestressed concrete reactor vessels

International Nuclear Information System (INIS)

Anderson, C.A.; Smith, P.D.

1979-01-01

Numerical prediction of the behavior of prestressed concrete reactor vessels (PCRVs) under static, dynamic and long-term loadings is complicated by the currently ill-defined behavior of concrete under stress and the three-dimensional nature of PCRVs. Which constitutive model most closely approximates the behavior of concrete in PCRVs under load has not yet been decided. Many equations for accurately modeling the three-dimensional behavior of PCRVs tax the capability of even the most up-to-date computing systems. The main purpose of this paper is to compare the characteristics of two constitutive models that have been proposed for concrete: the variable modulus cracking model and the elastic-plastic model. Moreover, the behavior of typical concrete structures whose materials obey these constitutive laws is compared. The response to internal pressure of a PCRV structure, the constitutive models for concrete, the test problems using a thick-walled concrete ring and a rectangular concrete plate, and the analysis of an axisymmetric concrete pressure vessel PV-26 using the variable modulus cracking model of the ADINA code are explained. The variable modulus cracking model can predict the behavior of reinforced concrete structures well into the range of nonlinear behavior. (Kako, I.)

2. Variable Renewable Energy in Long-Term Planning Models: A Multi-Model Perspective

Energy Technology Data Exchange (ETDEWEB)

Cole, Wesley J. [National Renewable Energy Laboratory (NREL), Golden, CO (United States)]; Frew, Bethany A. [National Renewable Energy Laboratory (NREL), Golden, CO (United States)]; Mai, Trieu T. [National Renewable Energy Laboratory (NREL), Golden, CO (United States)]; Sun, Yinong [National Renewable Energy Laboratory (NREL), Golden, CO (United States)]; Bistline, John [Electric Power Research Inst., Palo Alto, CA (United States)]; Blanford, Geoffrey [Electric Power Research Inst., Palo Alto, CA (United States)]; Young, David [Electric Power Research Inst., Palo Alto, CA (United States)]; Marcy, Cara [Energy Information Administration, Washington, DC (United States)]; Namovicz, Chris [Energy Information Administration, Washington, DC (United States)]; Edelman, Risa [Environmental Protection Agency, Washington, DC (United States)]; Meroney, Bill [Environmental Protection Agency]; Sims, Ryan [Environmental Protection Agency]; Stenhouse, Jeb [Environmental Protection Agency]; Donohoo-Vallett, Paul [U.S. Department of Energy]

2017-11-03

Long-term capacity expansion models of the U.S. electricity sector have long been used to inform electric sector stakeholders and decision makers. With the recent surge in variable renewable energy (VRE) generators - primarily wind and solar photovoltaics - the need to appropriately represent VRE generators in these long-term models has increased. VRE generators are especially difficult to represent for a variety of reasons, including their variability, uncertainty, and spatial diversity. To assess current best practices, share methods and data, and identify future research needs for VRE representation in capacity expansion models, four capacity expansion modeling teams from the Electric Power Research Institute, the U.S. Energy Information Administration, the U.S. Environmental Protection Agency, and the National Renewable Energy Laboratory conducted two workshops of VRE modeling for national-scale capacity expansion models. The workshops covered a wide range of VRE topics, including transmission and VRE resource data, VRE capacity value, dispatch and operational modeling, distributed generation, and temporal and spatial resolution. The objectives of the workshops were both to better understand these topics and to improve the representation of VRE across the suite of models. Given these goals, each team incorporated model updates and performed additional analyses between the first and second workshops. This report summarizes the analyses and model 'experiments' that were conducted as part of these workshops as well as the various methods for treating VRE among the four modeling teams. The report also reviews the findings and learnings from the two workshops. We emphasize the areas where there is still need for additional research and development on analysis tools to incorporate VRE into long-term planning and decision-making.

3. The numerical model of multi-layer insulation with a defined wrapping pattern immersed in superfluid helium

Science.gov (United States)

Malecha, Ziemowit; Lubryka, Eliza

2017-11-01

The numerical model of thin layers characterized by a defined wrapping pattern can be a crucial element of many computational problems related to engineering and science. A motivating example is found in multilayer electrical insulation, which is an important component of superconducting magnets and other cryogenic installations. The wrapping pattern of the insulation can significantly affect heat transport and the performance of the considered instruments. The major objective of this study is to develop the numerical boundary conditions (BC) needed to model the wrapping pattern of thin insulation. An example of the practical application of the proposed BC is the heat transfer of Rutherford NbTi cables immersed in superfluid helium (He II) across thin layers of electrical insulation. The proposed BC and a mathematical model of heat transfer in He II are implemented in the open source CFD toolbox OpenFOAM. The implemented mathematical model and the BC are compared with experimental results. The study confirms that the thermal resistance of electrical insulation can be lowered by implementing the proper wrapping pattern. The proposed BC can be useful in the study of new wrapping patterns. The work has been supported by statutory funds from the Polish Ministry for Science and Higher Education for the year 2017.

4. Modeling Extreme Precipitation over East China with a Global Variable-Resolution Modeling Framework (MPAS)

Science.gov (United States)

Zhao, C.; Xu, M.; Wang, Y.; Guo, J.; Hu, Z.; Ruby, L.; Duda, M.; Skamarock, W. C.

2017-12-01

Modeling extreme precipitation requires high spatial resolution. Traditional regional downscaling frameworks have issues such as ill-posed boundary conditions, mismatches between the driving global and regional dynamics and physics, and the lack of regional feedback to global scales. The non-hydrostatic Model for Prediction Across Scales (MPAS), a global variable-resolution modeling framework, offers an opportunity to obtain regional features at high resolution using regional mesh refinement without lateral boundaries. In this study, the MPAS model is applied for the first time with refined meshes over East China at high resolutions (16 km and 4 km) to simulate an extreme precipitation event during 26-27 June 2012. The simulations are evaluated against ground observations from the Chinese Meteorological Administration (CMA) network and reanalysis data. Sensitivity experiments with different physics and forecast lead times are conducted to understand the uncertainties in simulating the spatial and temporal variation of precipitation. The variable-resolution simulations are also compared with traditional global uniform-resolution simulations at a relatively low resolution (~30 km) and a relatively high resolution (~16 km). The analysis shows that the variable-resolution simulation can capture the fine-scale features of precipitation over East China as well as the uniform-resolution simulation at a relatively high resolution. It also indicates that high resolution significantly improves the capability of simulating extreme precipitation. The MPAS simulations are also compared with traditional limited-area simulations at similar resolutions using the Weather Research and Forecasting (WRF) model. The differences between the simulations using these two modeling frameworks are also discussed.

5. Nanostructured model implants for in vivo studies: influence of well-defined nanotopography on de novo bone formation on titanium implants

Science.gov (United States)

Ballo, Ahmed; Agheli, Hossein; Lausmaa, Jukka; Thomsen, Peter; Petronis, Sarunas

2011-01-01

An implantable model system was developed to investigate the effects of nanoscale surface properties on the osseointegration of titanium implants in rat tibia. Topographical nanostructures with a well-defined shape (semispherical protrusions) and variable size (60 nm, 120 nm and 220 nm) were produced by colloidal lithography on the machined implants. Furthermore, the implants were sputter-coated with titanium to ensure a uniform surface chemical composition. The histological evaluation of bone around the implants at 7 days and 28 days after implantation was performed on the ground sections using optical and scanning electron microscopy. Differences between groups were found mainly in the new bone formation process in the endosteal and marrow bone compartments after 28 days of implantation. Implant surfaces with 60 nm features demonstrated significantly higher bone-implant contact (BIC, 76%) compared with the 120 nm (45%) and control (57%) surfaces. This effect was correlated to the higher density and curvature of the 60 nm protrusions. Within the developed model system, nanoscale protrusions could be applied and systematically varied in size in the presence of microscale background roughness on complex screw-shaped implants. Moreover, the model can be adapted for the systematic variation of surface nanofeature density and chemistry, which opens up new possibilities for in vivo studies of various nanoscale surface-bone interactions. PMID:22267926

6. Multi-variable port Hamiltonian model of piezoelectric material

NARCIS (Netherlands)

Macchelli, A.; Macchelli, Alessandro; van der Schaft, Arjan; Melchiorri, Claudio

2004-01-01

In this paper, the dynamics of a piezoelectric material is presented within the new framework of multi-variable distributed port Hamiltonian systems. This class of infinite dimensional system is quite general, thus allowing the description of several physical phenomena, such as heat conduction,

7. Multi-variable Port Hamiltonian Model of Piezoelectric Material

NARCIS (Netherlands)

Macchelli, Alessandro; Schaft, Arjan J. van der; Melchiorri, Claudio

2004-01-01

In this paper, the dynamics of a piezoelectric material is presented within the new framework of multi-variable distributed port Hamiltonian systems. This class of infinite dimensional system is quite general, thus allowing the description of several physical phenomena, such as heat conduction,

8. Variability of four-dimensional computed tomography patient models

NARCIS (Netherlands)

Sonke, Jan-Jakob; Lebesque, Joos; van Herk, Marcel

2008-01-01

PURPOSE: To quantify the interfractional variability in lung tumor trajectory and mean position during the course of radiation therapy. METHODS AND MATERIALS: Repeat four-dimensional (4D) cone-beam computed tomography (CBCT) scans (median, nine scans/patient) routinely acquired during the course of

9. Modeling HIV/AIDS Variables, A Case Of Contingency Analysis

African Journals Online (AJOL)

PROF. OLIVER OSUAGWA

2015-06-01

Jun 1, 2015 ... of affection. Also the conditional independence of the pair-wise variables of interest existed. Age is conditionally independent of both gender of the affected patient and the year of the affection. ...

10. Modelling of Hydropower Reservoir Variables for Energy Generation ...

African Journals Online (AJOL)

Efficient management of a hydropower reservoir can only be realized when there is sufficient understanding of the interactions existing between reservoir variables and energy generation. Reservoir inflow, storage, reservoir elevation, turbine release, net generating head, plant use coefficient, tail race level and evaporation losses ...

11. Model Criticism of Bayesian Networks with Latent Variables.

Science.gov (United States)

Williamson, David M.; Mislevy, Robert J.; Almond, Russell G.

This study investigated statistical methods for identifying errors in Bayesian networks (BN) with latent variables, as found in intelligent cognitive assessments. BN, commonly used in artificial intelligence systems, are promising mechanisms for scoring constructed-response examinations. The success of an intelligent assessment or tutoring system…

12. What is culture in «cultural economy»? Defining culture to create measurable models in cultural economy

Directory of Open Access Journals (Sweden)

Aníbal Monasterio Astobiza

2017-07-01

Full Text Available The idea of culture is somewhat vague and ambiguous for the formal goals of economics. The aim of this paper is to define the notion of culture better, so as to help build economic explanations based on culture and thereby measure its impact on every activity or belief associated with culture. According to the canonical evolutionary definition, culture is any kind of ritualised behaviour that becomes meaningful for a group, remains more or less constant and is transmitted down through the generations. Economic institutions are founded, implicitly or explicitly, on a worldview of how humans function; culture is an essential part of understanding us as humans, making it necessary to describe correctly what we understand by culture. In this paper we review the literature on evolutionary anthropology and psychology dealing with the concept of culture, and warn that economic modelling ignores intangible benefits of culture, rendering economics unable to measure certain cultural items in the digital consumer society.

13. Development of a minimal chemically defined medium for Ketogulonicigenium vulgare WSH001 based on its genome-scale metabolic model.

Science.gov (United States)

Fan, Shicun; Zhang, Zhenyu; Zou, Wei; Huang, Zheng; Liu, Jie; Liu, Liming

2014-01-01

Commercial production of 2-keto-l-gulonic acid (2-KLG), the immediate precursor of l-ascorbic acid, is carried out by Ketogulonicigenium vulgare in co-culture with Bacillus megaterium. We used flux balance analysis (FBA) to study a genome-scale metabolic model (GSMM) of K. vulgare, iWZ663, and found that K. vulgare is deficient in nutrient biosynthetic pathways. Individually omitting l-glycine, l-cysteine, l-methionine, l-tryptophan, adenine, thymine, thiamine and pantothenate from complete chemically defined medium (CDM) caused biomass formation of K. vulgare to decrease to 1%, 21%, 16%, 1%, 26%, 57%, 73% and 24%, respectively. Based on these results and FBA, a minimal chemically defined medium (MCDM) was developed that supported monoculture of K. vulgare (0.28 OD600) and 2-KLG production (3.59 g/L), similar to those in complete CDM or corn steep liquor powder (CSLP) medium. This study demonstrated the potential of using GSMM and FBA to characterize nutrient requirements, optimize CDM, and study interactions in co-culture. Copyright © 2013. Published by Elsevier B.V.
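Flux balance analysis of a genome-scale model, as used in the study above to predict growth under nutrient omissions, amounts to a linear program: maximise a biomass flux subject to steady-state mass balance S·v = 0 and flux bounds. A toy sketch follows; the stoichiometric matrix and reaction set are invented for illustration, not taken from iWZ663:

```python
import numpy as np
from scipy.optimize import linprog

# Toy network: uptake -> A, A -> B, B -> biomass, B -> byproduct.
# Rows of S are metabolites (A, B); columns are the four reactions.
S = np.array([
    [1.0, -1.0,  0.0,  0.0],   # metabolite A balance
    [0.0,  1.0, -1.0, -1.0],   # metabolite B balance
])
c = np.array([0.0, 0.0, -1.0, 0.0])   # maximise biomass flux v3 (linprog minimises)
bounds = [(0, 10)] * 4                # uptake capped at 10, all fluxes non-negative

res = linprog(c, A_eq=S, b_eq=np.zeros(2), bounds=bounds)
biomass_flux = -res.fun
```

Omitting a nutrient corresponds to forcing its uptake bound to zero and re-solving; the resulting drop in the biomass objective mirrors the growth decreases reported in the abstract.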

14. A quantitative method for defining high-arched palate using the Tcof1+/− mutant mouse as a model

Science.gov (United States)

Conley, Zachary R.; Hague, Molly; Kurosaka, Hiroshi; Dixon, Jill; Dixon, Michael J.; Trainor, Paul A.

2016-01-01

The palate functions as the roof of the mouth in mammals, separating the oral and nasal cavities. Its complex embryonic development and assembly pose unique susceptibilities to intrinsic and extrinsic disruptions. Such disruptions may cause failure of the developing palatal shelves to fuse along the midline resulting in a cleft. In other cases the palate may fuse at an arch, resulting in a vaulted oral cavity, termed high-arched palate. There are many models available for studying the pathogenesis of cleft palate but a relative paucity for high-arched palate. One condition exhibiting either cleft palate or high-arched palate is Treacher Collins syndrome, a congenital disorder characterized by numerous craniofacial anomalies. We quantitatively analyzed palatal perturbations in the Tcof1+/− mouse model of Treacher Collins syndrome, which phenocopies the condition in humans. We discovered that 46% of Tcof1+/− mutant embryos and newborn pups exhibit either soft clefts or full clefts. In addition, 17% of Tcof1+/− mutants were found to exhibit high-arched palate, defined as two sigma above the corresponding wild-type population mean for height and angular-based arch measurements. Furthermore, palatal shelf length and shelf width were decreased in all Tcof1+/− mutant embryos and pups compared to controls. Interestingly, these phenotypes were subsequently ameliorated through genetic inhibition of p53. The results of our study therefore provide a simple, reproducible and quantitative method for investigating models of high-arched palate. PMID:26772999

15. A quantitative method for defining high-arched palate using the Tcof1(+/-) mutant mouse as a model.

Science.gov (United States)

Conley, Zachary R; Hague, Molly; Kurosaka, Hiroshi; Dixon, Jill; Dixon, Michael J; Trainor, Paul A

2016-07-15

The palate functions as the roof of the mouth in mammals, separating the oral and nasal cavities. Its complex embryonic development and assembly pose unique susceptibilities to intrinsic and extrinsic disruptions. Such disruptions may cause failure of the developing palatal shelves to fuse along the midline resulting in a cleft. In other cases the palate may fuse at an arch, resulting in a vaulted oral cavity, termed high-arched palate. There are many models available for studying the pathogenesis of cleft palate but a relative paucity for high-arched palate. One condition exhibiting either cleft palate or high-arched palate is Treacher Collins syndrome, a congenital disorder characterized by numerous craniofacial anomalies. We quantitatively analyzed palatal perturbations in the Tcof1(+/-) mouse model of Treacher Collins syndrome, which phenocopies the condition in humans. We discovered that 46% of Tcof1(+/-) mutant embryos and newborn pups exhibit either soft clefts or full clefts. In addition, 17% of Tcof1(+/-) mutants were found to exhibit high-arched palate, defined as two sigma above the corresponding wild-type population mean for height and angular-based arch measurements. Furthermore, palatal shelf length and shelf width were decreased in all Tcof1(+/-) mutant embryos and pups compared to controls. Interestingly, these phenotypes were subsequently ameliorated through genetic inhibition of p53. The results of our study therefore provide a simple, reproducible and quantitative method for investigating models of high-arched palate. Copyright © 2015 Elsevier Inc. All rights reserved.
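The study's operational definition — a palate is high-arched when a measurement lies more than two standard deviations above the wild-type mean — is easy to state in code. The measurement values below are made-up illustrative numbers, not the study's data:

```python
import statistics

def is_high_arched(value, wild_type_values, sigmas=2.0):
    """Flag a palate measurement as high-arched when it exceeds the
    wild-type mean by `sigmas` standard deviations (two sigma in the study)."""
    mu = statistics.mean(wild_type_values)
    sd = statistics.stdev(wild_type_values)
    return value > mu + sigmas * sd

# Hypothetical wild-type arch-height measurements (arbitrary units)
wt = [3.1, 3.0, 2.9, 3.2, 3.0, 3.1, 2.8, 3.0]
print(is_high_arched(3.9, wt))
```

In the paper the same cutoff is applied to both height and angular arch measurements against the corresponding wild-type population.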

16. Productively infected murine Kaposi's sarcoma-like tumors define new animal models for studying and targeting KSHV oncogenesis and replication.

Directory of Open Access Journals (Sweden)

Brittany M Ashlock

Full Text Available Kaposi's sarcoma (KS) is an AIDS-defining cancer caused by the KS-associated herpesvirus (KSHV). KS tumors are composed of KSHV-infected spindle cells of vascular origin with aberrant neovascularization and erythrocyte extravasation. KSHV genes expressed during both latent and lytic replicative cycles play important roles in viral oncogenesis. Animal models able to recapitulate both viral and host biological characteristics of KS are needed to elucidate oncogenic mechanisms, for developing targeted therapies, and to trace cellular components of KS ontogeny. Herein, we describe two new murine models of Kaposi's sarcoma. We found that murine bone marrow-derived cells, whether established in culture or isolated from fresh murine bone marrow, were infectable with rKSHV.219, formed KS-like tumors in immunocompromised mice and produced mature herpesvirus-like virions in vivo. Further, we show in vivo that the histone deacetylase (HDAC) inhibitor suberoylanilide hydroxamic acid (SAHA/Vorinostat) enhanced viral lytic reactivation. We propose that these novel models are ideal for studying both viral and host contributions to KSHV-induced oncogenesis as well as for testing virally-targeted antitumor strategies for the treatment of Kaposi's sarcoma. Furthermore, our isolation of bone marrow-derived cell populations containing a cell type that, when infected with KSHV, renders a tumorigenic KS-like spindle cell, should facilitate systematic identification of KS progenitor cells.

17. Field and Model Study to Define Baseline Conditions of Beached Oil Tar Balls along Florida’s First Coast

Directory of Open Access Journals (Sweden)

Peter Bacopoulos

2014-03-01

Full Text Available Anecdotal data are currently the best data available to describe baseline conditions of beached oil tar balls on Florida’s First Coast beaches. This study combines field methods and numerical modeling to define a data-driven knowledge base of oil tar ball baseline conditions. Outcomes from the field study include an established methodology for field data collection and laboratory testing of beached oil tar balls, spatial maps of collected samples and analysis of the data as to transport/wash-up trends. Archives of the electronic data, including GPS locations and other informational tags, and collected samples are presented, as are the physical and chemical analyses of the collected samples. The thrust of the physical and chemical analyses is to differentiate the collected samples into highly suspect oil tar balls versus false/non-oil tar ball samples. The numerical modeling involves two-dimensional hydrodynamic simulations of astronomic tides. Results from the numerical modeling include velocity residuals that show ebb-dominated residual currents exiting the inlet via an offshore, counter-rotating dual-eddy system. The tidally derived residual currents are used as one explanation for the observed transport trends. The study concludes that the port activity in the St. Johns River is not majorly contributing to the baseline conditions of oil tar ball wash-up on Florida’s First Coast beaches.

18. Interdecadal variability in a hybrid coupled ocean-atmosphere-sea ice model

OpenAIRE

Kravtsov, S; Ghil, M

2004-01-01

Interdecadal climate variability in an idealized coupled ocean-atmosphere-sea-ice model is studied. The ocean component is a fully three-dimensional primitive equation model and the atmospheric component is a two-dimensional (2D) energy balance model of Budyko-Sellers-North type, while sea ice is represented by a 2D thermodynamic model. In a wide range of parameters the model climatology resembles certain aspects of observed climate. Two types of interdecadal variability are found. The first ...
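The Budyko-Sellers-North energy balance family named above reduces, in its zero-dimensional limit, to a balance between absorbed shortwave radiation and a linearised outgoing longwave term. A toy sketch with standard textbook parameter values (not the paper's 2D atmospheric component):

```python
def ebm_equilibrium(S0=1361.0, albedo=0.3, A=203.3, B=2.09):
    """Solve A + B*T = (S0/4) * (1 - albedo) for surface temperature T (deg C),
    using the classic linearised outgoing-longwave parameterisation
    OLR = A + B*T (A in W/m^2, B in W/m^2 per deg C)."""
    absorbed = S0 / 4 * (1 - albedo)   # global-mean absorbed solar flux
    return (absorbed - A) / B

temperature = ebm_equilibrium()
print(round(temperature, 1))
```

Latitude-resolved (1D/2D) versions add a diffusive heat-transport term and an ice-albedo feedback, which is where interdecadal variability of the kind studied here can emerge in coupled configurations.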

19. Sparse modeling of spatial environmental variables associated with asthma

OpenAIRE

Chang, Timothy S.; Gangnon, Ronald E.; Page, C. David; Buckingham, William R.; Tandias, Aman; Cowan, Kelly J.; Tomasallo, Carrie D.; Arndt, Brian G.; Hanrahan, Lawrence P.; Guilbert, Theresa W.

2014-01-01

Geographically distributed environmental factors influence the burden of diseases such as asthma. Our objective was to identify sparse environmental variables associated with asthma diagnosis gathered from a large electronic health record (EHR) dataset while controlling for spatial variation. An EHR dataset from the University of Wisconsin’s Family Medicine, Internal Medicine and Pediatrics Departments was obtained for 199,220 patients aged 5–50 years over a three-year period. Each patient’s ...
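Sparse variable selection of the kind described is commonly implemented with an L1-penalised (lasso) regression, which drives the coefficients of uninformative variables to exactly zero. A generic proximal-gradient (ISTA) sketch on synthetic data — a stand-in for the idea, not the authors' actual spatially-controlled model:

```python
import numpy as np

def lasso_ista(X, y, lam=0.1, iters=500):
    """L1-penalised least squares via ISTA (proximal gradient descent).
    Minimises (1/2n)||Xw - y||^2 + lam * ||w||_1."""
    n, p = X.shape
    L = np.linalg.norm(X, 2) ** 2 / n          # Lipschitz constant of the gradient
    step = 1.0 / L
    w = np.zeros(p)
    for _ in range(iters):
        grad = X.T @ (X @ w - y) / n
        w -= step * grad
        w = np.sign(w) * np.maximum(np.abs(w) - step * lam, 0.0)  # soft-threshold
    return w

# Synthetic demo: only the first two of five predictors are informative.
rng = np.random.default_rng(0)
X = rng.standard_normal((200, 5))
y = X[:, 0] * 2.0 - X[:, 1] * 3.0
w = lasso_ista(X, y)
```

The penalty weight `lam` controls sparsity: larger values zero out more environmental variables, at the cost of extra shrinkage on the retained coefficients.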

20. Derivation and application of mathematical model for well test analysis with variable skin factor in hydrocarbon reservoirs

Science.gov (United States)

Liu, Pengcheng; Li, Wenhui; Xia, Jing; Jiao, Yuwei; Bie, Aifang

2016-06-01

Skin factor is often regarded as a constant in most mathematical models for well test analysis in oilfields, but this is a simplification: the actual skin factor changes over time. This paper defines the average permeability of a damaged area as a function of time by using the definition of skin factor, thereby establishing a relationship between a variable skin factor and time. The variable skin factor derived was introduced into existing traditional models in place of a constant skin factor, and the newly derived mathematical model for well test analysis considering a variable skin factor was solved by Laplace transform. The dimensionless wellbore pressure and its derivative were plotted against dimensionless time on double-logarithmic axes, and these plots can be used for type-curve fitting. The effects of all the parameters in the expression of the variable skin factor were analyzed based on the dimensionless wellbore pressure and its derivative. Finally, actual well testing data from the Sheng-2 Block, Shengli Oilfield, China, were used to fit the type curves developed, which validates the applicability of the mathematical model.
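The classical (Hawkins) definition of skin relates the skin factor to the average permeability of the damaged zone; letting that permeability vary with time, as the abstract describes, yields a time-dependent skin. A sketch of the idea, with notation assumed rather than copied from the paper:

```latex
s(t) = \left(\frac{k}{k_s(t)} - 1\right)\ln\frac{r_s}{r_w}
```

where $k$ is the undamaged formation permeability, $k_s(t)$ the time-varying average permeability of the damaged zone, and $r_s$, $r_w$ the damaged-zone and wellbore radii. Substituting $s(t)$ for the constant skin in the wellbore boundary condition is what changes the shape of the pressure-derivative type curves.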

1. Evaluation of maillard reaction variables and their effect on heterocyclic amine formation in chemical model systems.

Science.gov (United States)

Dennis, Cara; Karim, Faris; Smith, J Scott

2015-02-01

Heterocyclic amines (HCAs), highly mutagenic and potentially carcinogenic by-products, form during Maillard browning reactions, specifically in muscle-rich foods. Chemical model systems allow examination of in vitro formation of HCAs while eliminating the complex matrices of meat. Limited research has evaluated the effects of Maillard reaction parameters on HCA formation. Therefore, 4 essential Maillard variables (precursor molar concentrations, water amount, sugar type, and sugar amount) were evaluated to optimize a model system for the study of 4 HCAs: 2-amino-3-methylimidazo-[4,5-f]quinoline, 2-amino-3-methylimidazo[4,5-f]quinoxaline, 2-amino-3,8-dimethylimidazo[4,5-f]quinoxaline, and 2-amino-3,4,8-trimethyl-imidazo[4,5-f]quinoxaline. Model systems were dissolved in diethylene glycol, heated at 175 °C for 40 min, and separated using reversed-phase liquid chromatography. To define the model system, precursor amounts (threonine and creatinine) were adjusted in molar increments (0.2/0.2, 0.4/0.4, 0.6/0.6, and 0.8/0.8 mmol) and water amounts by percentage (0%, 5%, 10%, and 15%). Sugars (lactose, glucose, galactose, and fructose) were evaluated in several molar amounts proportional to threonine and creatinine (quarter, half, equi, and double). The precursor levels and amounts of sugar were significantly different (P < 0.05) with regard to total HCA formation, with 0.6/0.6/1.2 mmol producing higher levels. Water concentration and sugar type also had a significant effect (P < 0.05), with 5% water and lactose producing higher total HCA amounts. A model system containing threonine (0.6 mmol), creatinine (0.6 mmol), and glucose (1.2 mmol), with 15% water was determined to be the optimal model system, with glucose and 15% water being a better representation of meat systems. © 2015 Institute of Food Technologists®
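The four Maillard variables and their levels listed above form a factorial screening grid. A sketch of its enumeration, with the level values transcribed from the abstract and a full cross of all four variables assumed (the paper may not have run every combination):

```python
from itertools import product

# Levels transcribed from the abstract; full crossing is an assumption.
precursors_mmol = [(0.2, 0.2), (0.4, 0.4), (0.6, 0.6), (0.8, 0.8)]  # threonine/creatinine
water_pct = [0, 5, 10, 15]
sugars = ["lactose", "glucose", "galactose", "fructose"]
sugar_ratio = ["quarter", "half", "equi", "double"]  # molar ratio to precursors

runs = list(product(precursors_mmol, water_pct, sugars, sugar_ratio))
print(len(runs))  # 4 * 4 * 4 * 4 = 256 candidate model systems
```

Enumerating the grid this way makes it straightforward to attach measured HCA responses to each factor combination for the significance testing described.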

2. The application of an internal state variable model to the viscoplastic behavior of irradiated ASTM 304L stainless steel

Energy Technology Data Exchange (ETDEWEB)

McAnulty, Michael J., E-mail: mcanulmj@id.doe.gov [Department of Energy, 1955 Fremont Avenue, Idaho Falls, ID 83402 (United States); Potirniche, Gabriel P. [Mechanical Engineering Department, University of Idaho, Moscow, ID 83844 (United States); Tokuhiro, Akira [Mechanical Engineering Department, University of Idaho, Idaho Falls, ID 83402 (United States)

2012-09-15

3. The application of an internal state variable model to the viscoplastic behavior of irradiated ASTM 304L stainless steel

International Nuclear Information System (INIS)

McAnulty, Michael J.; Potirniche, Gabriel P.; Tokuhiro, Akira

2012-01-01

4. Defining optimal DEM resolutions and point densities for modelling hydrologically sensitive areas in agricultural catchments dominated by microtopography

Science.gov (United States)

Thomas, I. A.; Jordan, P.; Shine, O.; Fenton, O.; Mellander, P.-E.; Dunlop, P.; Murphy, P. N. C.

2017-02-01

Defining critical source areas (CSAs) of diffuse pollution in agricultural catchments depends upon the accurate delineation of hydrologically sensitive areas (HSAs) at highest risk of generating surface runoff pathways. In topographically complex landscapes, this delineation is constrained by digital elevation model (DEM) resolution and the influence of microtopographic features. To address this, optimal DEM resolutions and point densities for spatially modelling HSAs were investigated, for onward use in delineating CSAs. The surface runoff framework was modelled using the Topographic Wetness Index (TWI) and maps were derived from 0.25 m LiDAR DEMs (40 bare-earth points m-2), resampled 1 m and 2 m LiDAR DEMs, and a radar generated 5 m DEM. Furthermore, the resampled 1 m and 2 m LiDAR DEMs were regenerated with reduced bare-earth point densities (5, 2, 1, 0.5, 0.25 and 0.125 points m-2) to analyse effects on elevation accuracy and important microtopographic features. Results were compared to surface runoff field observations in two 10 km2 agricultural catchments for evaluation. Analysis showed that the accuracy of modelled HSAs using different thresholds (5%, 10% and 15% of the catchment area with the highest TWI values) was much higher using LiDAR data compared to the 5 m DEM (70-100% and 10-84%, respectively). This was attributed to the DEM capturing microtopographic features such as hedgerow banks, roads, tramlines and open agricultural drains, which acted as topographic barriers or channels that diverted runoff away from the hillslope scale flow direction. Furthermore, the identification of 'breakthrough' and 'delivery' points along runoff pathways where runoff and mobilised pollutants could be potentially transported between fields or delivered to the drainage channel network was much higher using LiDAR data compared to the 5 m DEM (75-100% and 0-100%, respectively). Optimal DEM resolutions of 1-2 m were identified for modelling HSAs, which balanced the need
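The Topographic Wetness Index used above to delineate HSAs is defined as ln(a / tan β), with a the specific upslope contributing area and β the local slope angle. A minimal grid sketch of that calculation — flow accumulation is assumed precomputed here, whereas a real LiDAR workflow would derive it with D8 or D-infinity routing:

```python
import numpy as np

def twi(dem, acc, cell=1.0, eps=1e-6):
    """Topographic Wetness Index ln(a / tan(beta)).
    dem:  elevation grid (m); acc: upslope contributing cells (assumed given);
    cell: grid resolution (m); eps avoids division by zero on flat cells."""
    dzdy, dzdx = np.gradient(dem, cell)
    slope = np.arctan(np.hypot(dzdx, dzdy))      # slope angle beta
    a = (acc + 1) * cell                          # specific catchment area proxy
    return np.log(a / (np.tan(slope) + eps))

# Tiny synthetic DEM: a uniform slope descending row by row.
dem = np.outer(np.arange(5, 0, -1), np.ones(5)).astype(float)
acc = np.ones_like(dem)                           # placeholder accumulation
wetness = twi(dem, acc, cell=2.0)
```

HSA thresholds like those in the study (top 5-15% of TWI values) would then be applied with a simple percentile cut on `wetness`, which is where DEM resolution and microtopographic detail directly change the delineated areas.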

5. Tropospheric Ozone Assessment Report: Assessment of global-scale model performance for global and regional ozone distributions, variability, and trends

Directory of Open Access Journals (Sweden)

P. J. Young

2018-01-01

Full Text Available The goal of the Tropospheric Ozone Assessment Report (TOAR) is to provide the research community with an up-to-date scientific assessment of tropospheric ozone, from the surface to the tropopause. While a suite of observations provides significant information on the spatial and temporal distribution of tropospheric ozone, observational gaps make it necessary to use global atmospheric chemistry models to synthesize our understanding of the processes and variables that control tropospheric ozone abundance and its variability. Models facilitate the interpretation of the observations and allow us to make projections of future tropospheric ozone and trace gas distributions for different anthropogenic or natural perturbations. This paper assesses the skill of current-generation global atmospheric chemistry models in simulating the observed present-day tropospheric ozone distribution, variability, and trends. Drawing upon the results of recent international multi-model intercomparisons and using a range of model evaluation techniques, we demonstrate that global chemistry models are broadly skillful in capturing the spatio-temporal variations of tropospheric ozone over the seasonal cycle, for extreme pollution episodes, and changes over interannual to decadal periods. However, models are consistently biased high in the northern hemisphere and biased low in the southern hemisphere, throughout the depth of the troposphere, and are unable to replicate particular metrics that define the longer term trends in tropospheric ozone as derived from some background sites. When the models compare unfavorably against observations, we discuss the potential causes of model biases and propose directions for future developments, including improved evaluations that may be able to better diagnose the root cause of the model-observation disparity. Overall, model results should be approached critically, including determining whether the model performance is acceptable for
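The "biased high / biased low" finding above rests on simple model-observation bias metrics. A minimal sketch of two such metrics, mean bias and normalized mean bias; the function names and toy values are illustrative, not taken from TOAR:

```python
import numpy as np

def mean_bias(model, obs):
    """Mean bias of modelled vs observed values (same units as the data)."""
    return float(np.mean(np.asarray(model, float) - np.asarray(obs, float)))

def normalized_mean_bias(model, obs):
    """Normalized mean bias in percent, a common model-evaluation metric."""
    model, obs = np.asarray(model, float), np.asarray(obs, float)
    return float(100.0 * (model - obs).sum() / obs.sum())

obs = np.array([30.0, 40.0, 50.0, 45.0])  # e.g. observed surface ozone, ppb
mod = np.array([35.0, 44.0, 55.0, 50.0])  # a model biased high
print(mean_bias(mod, obs), normalized_mean_bias(mod, obs))
```

A positive value of either metric corresponds to the northern-hemisphere high bias described above; a negative value to the southern-hemisphere low bias.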

6. A Novel Information-Theoretic Approach for Variable Clustering and Predictive Modeling Using Dirichlet Process Mixtures

Science.gov (United States)

Chen, Yun; Yang, Hui

2016-12-01

In the era of big data, there is increasing interest in clustering variables to minimize data redundancy and maximize variable relevancy. Existing clustering methods, however, depend on nontrivial assumptions about the data structure. Note that nonlinear interdependence among variables poses significant challenges to the traditional framework of predictive modeling. In the present work, we reformulate the problem of variable clustering from an information theoretic perspective that does not require the assumption of data structure for the identification of nonlinear interdependence among variables. Specifically, we propose the use of mutual information to characterize and measure nonlinear correlation structures among variables. Further, we develop Dirichlet process (DP) models to cluster variables based on the mutual-information measures among variables. Finally, orthonormalized variables in each cluster are integrated with a group elastic-net model to improve the performance of predictive modeling. Both simulation and real-world case studies showed that the proposed methodology not only effectively reveals the nonlinear interdependence structures among variables but also outperforms traditional variable clustering algorithms such as hierarchical clustering.
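As a rough illustration of the mutual-information measure the approach builds on, here is a histogram-based MI estimate that detects the kind of nonlinear dependence a Pearson correlation misses. This is one common estimator chosen for the sketch; the paper's estimator and the Dirichlet-process clustering step are not reproduced.

```python
import numpy as np

def mutual_information(x, y, bins=16):
    """Histogram-based estimate of mutual information (in nats)."""
    joint, _, _ = np.histogram2d(x, y, bins=bins)
    pxy = joint / joint.sum()
    px = pxy.sum(axis=1, keepdims=True)   # marginal of x
    py = pxy.sum(axis=0, keepdims=True)   # marginal of y
    nz = pxy > 0                          # avoid log(0)
    return float((pxy[nz] * np.log(pxy[nz] / (px @ py)[nz])).sum())

rng = np.random.default_rng(1)
x = rng.normal(size=5000)
y_dep = np.sin(x) + 0.1 * rng.normal(size=5000)  # nonlinear dependence on x
y_ind = rng.normal(size=5000)                    # independent of x

print(mutual_information(x, y_dep) > mutual_information(x, y_ind))
```

A full variable-clustering run would compute this measure for all variable pairs and feed the resulting similarity matrix to the clustering model.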

7. Models for turbulent flows with variable density and combustion

International Nuclear Information System (INIS)

Jones, W.P.

1980-01-01

Models for transport processes and combustion in turbulent flows are outlined with emphasis on the situation where the fuel and air are injected separately. Attention is restricted to relatively simple flames. The flows investigated are high Reynolds number, single-phase, turbulent high-temperature flames in which radiative heat transfer can be considered negligible. Attention is given to the lower order closure models, algebraic stress and flux models, the k-epsilon turbulence model, the diffusion flame approximation, and finite rate reaction mechanisms

8. Torque Modeling and Control of a Variable Compression Engine

OpenAIRE

Bergström, Andreas

2003-01-01

The SAAB variable compression engine is a new engine concept that enables the fuel consumption to be radically cut by varying the compression ratio. A challenge with this new engine concept is that the compression ratio has a direct influence on the output torque, which means that a change in compression ratio also leads to a change in the torque. A torque change may be felt as a jerk in the movement of the car, and this is an undesirable effect since the driver has no control over the compre...

9. Composite Pressure Vessel Variability in Geometry and Filament Winding Model

Science.gov (United States)

Green, Steven J.; Greene, Nathanael J.

2012-01-01

Composite pressure vessels (CPVs) are used in a variety of applications ranging from carbon dioxide canisters for paintball guns to life support and pressurant storage on the International Space Station. With widespread use, it is important to be able to evaluate the effect of variability on structural performance. Data analysis was completed on CPVs to determine the amount of variation that occurs among the same type of CPV, and a filament winding routine was developed to facilitate study of the effect of manufacturing variation on structural response.

10. The validity of transtheoretical model through different psychological variables

OpenAIRE

Morales Domínguez, Zaira Esther; Pascual Orts, Luis Miguel; Carmona Márquez, José

2010-01-01

The Transtheoretical Model is a model widely used to explain intentional change, especially when the change involves addictive behaviors. Even so, it has also been a heavily criticized model, among other reasons for its lack of validity. In this study we set out to assess the validity of the model itself by evaluating several psychological variables distinct from the model's own constructs: somatosensory amplification, health habits, attitu...

11. Defining the effect of sweep tillage tool cutting edge geometry on tillage forces using 3D discrete element modelling

Directory of Open Access Journals (Sweden)

Mustafa Ucgul

2015-09-01

Full Text Available The energy required for tillage processes accounts for a significant proportion of total energy used in crop production. In many tillage processes, decreasing the draft and upward vertical forces is often desired for reduced fuel use and improved penetration, respectively. Recent studies have proved that discrete element modelling (DEM) can effectively be used to model the soil–tool interaction. In his study, Fielke (1994) [1] examined the effect of various tool cutting edge geometries, namely cutting edge height, length of underside rub, and angle of underside clearance, on draft and vertical forces. In this paper the experimental parameters of Fielke (1994) [1] were simulated using 3D discrete element modelling techniques. In the simulations, a hysteretic spring contact model integrated with a linear cohesion model was employed; this accounts for the plastic deformation behaviour of the soil and hence provides better vertical force prediction. DEM parameters were determined by comparing the experimental and simulation results of angle of repose and penetration tests. The results of the study showed that the simulated interactions between the soil and the various tool cutting edge geometries agreed well with the experimental results of Fielke (1994) [1]. The modelling was then used to simulate a further range of cutting edge geometries to better define the effect of sweep tool cutting edge geometry parameters on tillage forces. The extra simulations showed that a sharper cutting edge with zero vertical cutting edge height further reduced the draft and upward vertical force, indicating there is benefit from having a really sharp cutting edge. The extra simulations also confirmed that the interpolated trends for angle of underside clearance as suggested by Fielke (1994) [1] were correct, with a linear reduction in draft and upward vertical force for angles of underside clearance between the ranges of −25 and −5°, and between −5 and 0°. The
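The hysteretic spring contact model mentioned in the abstract uses a stiffer unloading than loading stiffness, so part of the particle overlap remains as plastic deformation. Below is a simplified single-contact sketch of that idea; the stiffness values and the reloading rule are illustrative assumptions, not the paper's calibrated DEM parameters.

```python
def hysteretic_spring_force(overlap, prev_overlap, prev_force, k_load, k_unload):
    """One step of a linear hysteretic normal contact (Walton-Braun style).

    Loading follows the softer stiffness k_load; unloading follows the
    stiffer k_unload, so the force reaches zero while overlap remains,
    which represents plastic deformation of the soil. Returns force >= 0.
    """
    if overlap >= prev_overlap:  # loading / reloading, capped by virgin curve
        force = min(prev_force + k_load * (overlap - prev_overlap),
                    k_load * overlap)
    else:                        # unloading at the stiffer slope
        force = prev_force + k_unload * (overlap - prev_overlap)
    return max(force, 0.0)

# load up, then unload: the force vanishes before the overlap does
k_l, k_u = 1000.0, 3000.0                     # N/m, illustrative
path = [0.0, 0.001, 0.002, 0.003, 0.002, 0.0015, 0.001]  # overlap history, m
forces, f, prev = [], 0.0, 0.0
for o in path[1:]:
    f = hysteretic_spring_force(o, prev, f, k_l, k_u)
    forces.append(f)
    prev = o
print(forces)  # rises on loading, drops to ~0 on unloading despite residual overlap
```

The residual overlap at zero force is what lets this contact model capture the plastic soil behaviour, and hence the improved vertical force prediction noted in the abstract.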

12. Assessing the accuracy and stability of variable selection methods for random forest modeling in ecology.

Science.gov (United States)

Fox, Eric W; Hill, Ryan A; Leibowitz, Scott G; Olsen, Anthony R; Thornbrugh, Darren J; Weber, Marc H

2017-07-01

Random forest (RF) modeling has emerged as an important statistical learning method in ecology due to its exceptional predictive performance. However, for large and complex ecological data sets, there is limited guidance on variable selection methods for RF modeling. Typically, either a preselected set of predictor variables are used or stepwise procedures are employed which iteratively remove variables according to their importance measures. This paper investigates the application of variable selection methods to RF models for predicting probable biological stream condition. Our motivating data set consists of the good/poor condition of n = 1365 stream survey sites from the 2008/2009 National Rivers and Stream Assessment, and a large set (p = 212) of landscape features from the StreamCat data set as potential predictors. We compare two types of RF models: a full variable set model with all 212 predictors and a reduced variable set model selected using a backward elimination approach. We assess model accuracy using RF's internal out-of-bag estimate, and a cross-validation procedure with validation folds external to the variable selection process. We also assess the stability of the spatial predictions generated by the RF models to changes in the number of predictors and argue that model selection needs to consider both accuracy and stability. The results suggest that RF modeling is robust to the inclusion of many variables of moderate to low importance. We found no substantial improvement in cross-validated accuracy as a result of variable reduction. Moreover, the backward elimination procedure tended to select too few variables and exhibited numerous issues such as upwardly biased out-of-bag accuracy estimates and instabilities in the spatial predictions. We use simulations to further support and generalize results from the analysis of real data. A main purpose of this work is to elucidate issues of model selection bias and instability to ecologists interested in
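The backward elimination procedure investigated above can be sketched as a rank-and-drop loop. In the sketch below, the random forest importance measure and out-of-bag scoring are replaced by a simple |correlation| stand-in so the example stays dependency-free; this illustrates the structure of the loop, not the authors' implementation.

```python
import numpy as np

def backward_eliminate(X, y, importance_fn, min_features=2):
    """Repeatedly drop the least-important feature.

    A full backward elimination would also re-score each subset (e.g. with
    out-of-bag or cross-validated accuracy) to pick the best stopping point;
    that scoring step is omitted here. Returns the surviving feature indices
    and the elimination order (least important first).
    """
    keep = list(range(X.shape[1]))
    order = []
    while len(keep) > min_features:
        imp = importance_fn(X[:, keep], y)
        order.append(keep.pop(int(np.argmin(imp))))
    return keep, order

# stand-in importance: |Pearson correlation| with the response
def importance(X, y):
    return np.abs([np.corrcoef(X[:, j], y)[0, 1] for j in range(X.shape[1])])

rng = np.random.default_rng(2)
X = rng.normal(size=(300, 10))
y = 2.0 * X[:, 0] - 1.0 * X[:, 3] + 0.1 * rng.normal(size=300)  # only 0 and 3 matter
kept, order = backward_eliminate(X, y, importance, min_features=2)
print(sorted(kept))  # the two informative predictors survive
```

The paper's finding that elimination "tended to select too few variables" corresponds to stopping this loop too late, after informative but lower-ranked predictors have already been dropped.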

13. From spatially variable streamflow to distributed hydrological models: Analysis of key modeling decisions

Science.gov (United States)

Fenicia, Fabrizio; Kavetski, Dmitri; Savenije, Hubert H. G.; Pfister, Laurent

2016-02-01

This paper explores the development and application of distributed hydrological models, focusing on the key decisions of how to discretize the landscape, which model structures to use in each landscape element, and how to link model parameters across multiple landscape elements. The case study considers the Attert catchment in Luxembourg—a 300 km2 mesoscale catchment with 10 nested subcatchments that exhibit clearly different streamflow dynamics. The research questions are investigated using conceptual models applied at hydrologic response unit (HRU) scales (1-4 HRUs) on 6 hourly time steps. Multiple model structures are hypothesized and implemented using the SUPERFLEX framework. Following calibration, space/time model transferability is tested using a split-sample approach, with evaluation criteria including streamflow prediction error metrics and hydrological signatures. Our results suggest that: (1) models using geology-based HRUs are more robust and capture the spatial variability of streamflow time series and signatures better than models using topography-based HRUs; this finding supports the hypothesis that, in the Attert, geology exerts a stronger control than topography on streamflow generation, (2) streamflow dynamics of different HRUs can be represented using distinct and remarkably simple model structures, which can be interpreted in terms of the perceived dominant hydrologic processes in each geology type, and (3) the same maximum root zone storage can be used across the three dominant geological units with no loss in model transferability; this finding suggests that the partitioning of water between streamflow and evaporation in the study area is largely independent of geology and can be used to improve model parsimony. The modeling methodology introduced in this study is general and can be used to advance our broader understanding and prediction of hydrological behavior, including the landscape characteristics that control hydrologic response, the

14. The demand-control model for job strain: a commentary on different ways to operationalize the exposure variable

Directory of Open Access Journals (Sweden)

Márcia Guimarães de Mello Alves

2015-01-01

Full Text Available Demand-control has been the most widely used model to study job strain in various countries. However, researchers have used the model differently, thus hindering the comparison of results. Such heterogeneity appears in both the study instrument used and in the definition of the main exposure variable - high strain. This cross-sectional study aimed to assess differences between various ways of operationalizing job strain through association with prevalent hypertension in a cohort of workers (Pro-Health Study. No difference in the association between high job strain and hypertension was found according to the different ways of operationalizing exposure, even though prevalence varied widely, according to the adopted form, from 19.6% for quadrants to 42% for subtraction tertile. The authors recommend further studies to define the cutoff for exposure variables using combined subjective and objective data.
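The two operationalizations compared above ("quadrants" versus "subtraction tertile") can be made concrete as follows; the cutoffs used here (sample medians for the quadrants, top tertile of the difference score) are illustrative assumptions about how each definition is typically applied.

```python
import numpy as np

def strain_quadrant(demand, control):
    """Quadrant definition: high strain = demand above the sample median
    AND control below the sample median."""
    return (demand > np.median(demand)) & (control < np.median(control))

def strain_subtraction_tertile(demand, control):
    """Subtraction definition: high strain = top tertile of demand - control."""
    score = demand - control
    return score >= np.quantile(score, 2 / 3)

# toy survey: 1000 workers with 1-5 Likert-style demand and control scores
rng = np.random.default_rng(3)
demand = rng.integers(1, 6, size=1000).astype(float)
control = rng.integers(1, 6, size=1000).astype(float)
q = strain_quadrant(demand, control)
t = strain_subtraction_tertile(demand, control)
print(q.mean(), t.mean())  # the two definitions flag very different fractions
```

The gap between the two flagged fractions in this toy run mirrors the abstract's observation that exposure prevalence ranged from 19.6% (quadrants) to 42% (subtraction tertile) depending on the definition adopted.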

15. Latent variable models an introduction to factor, path, and structural equation analysis

CERN Document Server

Loehlin, John C

2004-01-01

This fourth edition introduces multiple-latent variable models by utilizing path diagrams to explain the underlying relationships in the models. The book is intended for advanced students and researchers in the areas of social, educational, clinical, ind

16. Assessing the accuracy and stability of variable selection methods for random forest modeling in ecology

Science.gov (United States)

Random forest (RF) modeling has emerged as an important statistical learning method in ecology due to its exceptional predictive performance. However, for large and complex ecological datasets there is limited guidance on variable selection methods for RF modeling. Typically, e...

17. Hardening Software Defined Networks

Science.gov (United States)

2014-07-01

the layers to act upon each other in very distinct ways. Examining the literature, we selected bipartite and tripartite network models as those...identify characteristics of multilayered networks. Bipartite and tripartite models are potentially most promising (and somewhat underutilized) in the... tripartite models are particularly well-suited to a confluence of traditional networks and software defined networks where SDN components are

18. Separation of uncertainty and interindividual variability in human exposure modeling.

NARCIS (Netherlands)

Ragas, A.M.J.; Brouwer, F.P.E.; Buchner, F.L.; Hendriks, H.W.; Huijbregts, M.A.J.

2009-01-01

The NORMTOX model predicts the lifetime-averaged exposure to contaminants through multiple environmental media, that is, food, air, soil, drinking and surface water. The model was developed to test the coherence of Dutch environmental quality objectives (EQOs). A set of EQOs is called coherent if

19. Thermodynamic consistency of viscoplastic material models involving external variable rates in the evolution equations for the internal variables

International Nuclear Information System (INIS)

Malmberg, T.

1993-09-01

The objective of this study is to derive and investigate thermodynamic restrictions for a particular class of internal variable models. Their evolution equations consist of two contributions: the usual irreversible part, depending only on the present state, and a reversible but path dependent part, linear in the rates of the external variables (evolution equations of ''mixed type''). In the first instance the thermodynamic analysis is based on the classical Clausius-Duhem entropy inequality and the Coleman-Noll argument. The analysis is restricted to infinitesimal strains and rotations. The results are specialized and transferred to a general class of elastic-viscoplastic material models. Subsequently, they are applied to several viscoplastic models of ''mixed type'', proposed or discussed in the literature (Robinson et al., Krempl et al., Freed et al.), and it is shown that some of these models are thermodynamically inconsistent. The study is closed with the evaluation of the extended Clausius-Duhem entropy inequality (concept of Mueller) where the entropy flux is governed by an assumed constitutive equation in its own right; also the constraining balance equations are explicitly accounted for by the method of Lagrange multipliers (Liu's approach). This analysis is done for a viscoplastic material model with evolution equations of the ''mixed type''. It is shown that this approach is much more involved than the evaluation of the classical Clausius-Duhem entropy inequality with the Coleman-Noll argument. (orig.) [de]

20. Functionally relevant climate variables for arid lands: A climatic water deficit approach for modelling desert shrub distributions

Science.gov (United States)

Thomas E. Dilts; Peter J. Weisberg; Camie M. Dencker; Jeanne C. Chambers

2015-01-01

We have three goals. (1) To develop a suite of functionally relevant climate variables for modelling vegetation distribution on arid and semi-arid landscapes of the Great Basin, USA. (2) To compare the predictive power of vegetation distribution models based on mechanistically proximate factors (water deficit variables) and factors that are more mechanistically removed...

1. A Composite Likelihood Inference in Latent Variable Models for Ordinal Longitudinal Responses

Science.gov (United States)

Vasdekis, Vassilis G. S.; Cagnone, Silvia; Moustaki, Irini

2012-01-01

The paper proposes a composite likelihood estimation approach that uses bivariate instead of multivariate marginal probabilities for ordinal longitudinal responses using a latent variable model. The model considers time-dependent latent variables and item-specific random effects to be accountable for the interdependencies of the multivariate…

2. Exploratory Long-Range Models to Estimate Summer Climate Variability over Southern Africa.

Science.gov (United States)

Jury, Mark R.; Mulenga, Henry M.; Mason, Simon J.

1999-07-01

Teleconnection predictors are explored using multivariate regression models in an effort to estimate southern African summer rainfall and climate impacts one season in advance. The preliminary statistical formulations include many variables influenced by the El Niño-Southern Oscillation (ENSO) such as tropical sea surface temperatures (SST) in the Indian and Atlantic Oceans. Atmospheric circulation responses to ENSO include the alternation of tropical zonal winds over Africa and changes in convective activity within oceanic monsoon troughs. Numerous hemispheric-scale datasets are employed to extract predictors and include global indexes (Southern Oscillation index and quasi-biennial oscillation), SST principal component scores for the global oceans, indexes of tropical convection (outgoing longwave radiation), air pressure, and surface and upper winds over the Indian and Atlantic Oceans. Climatic targets include subseasonal, area-averaged rainfall over South Africa and the Zambezi river basin, and South Africa's annual maize yield. Predictors and targets overlap in the years 1971-93, the defined training period. Each target time series is fitted by an optimum group of predictors from the preceding spring, in a linear multivariate formulation. To limit artificial skill, predictors are restricted to three, providing 17 degrees of freedom. Models with collinear predictors are screened out, and persistence of the target time series is considered. The late summer rainfall models achieve a mean r2 fit of 72%, contributed largely through ENSO modulation. Early summer rainfall cross validation correlations are lower (61%). A conceptual understanding of the climate dynamics and ocean-atmosphere coupling processes inherent in the exploratory models is outlined. Seasonal outlooks based on the exploratory models could help mitigate the impacts of southern Africa's fluctuating climate. It is believed that an advance warning of drought risk and seasonal rainfall prospects will
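A linear multivariate formulation restricted to three predictors, as used above, amounts to a small ordinary-least-squares fit whose r² is then reported. A minimal sketch on synthetic data follows; the predictor names in the comment and the sample size are illustrative assumptions, not the study's data.

```python
import numpy as np

def fit_three_predictor_model(X, y):
    """OLS fit of y on exactly three predictors plus an intercept.

    Restricting the predictor count (here to three) limits artificial skill.
    Returns the coefficients and the r^2 of the fit.
    """
    assert X.shape[1] == 3, "model restricted to three predictors"
    A = np.column_stack([np.ones(len(y)), X])
    beta, *_ = np.linalg.lstsq(A, y, rcond=None)
    resid = y - A @ beta
    r2 = 1.0 - (resid ** 2).sum() / ((y - y.mean()) ** 2).sum()
    return beta, r2

rng = np.random.default_rng(4)
X = rng.normal(size=(23, 3))  # e.g. SOI, QBO index, an SST PC score (illustrative)
y = 0.8 * X[:, 0] - 0.5 * X[:, 1] + 0.3 * rng.normal(size=23)
beta, r2 = fit_three_predictor_model(X, y)
print(round(r2, 2))
```

With a short training record, an in-sample r² like this overstates skill, which is why the study also screens collinear predictors and reports cross-validation correlations.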

3. Regionalizing Africa: Patterns of Precipitation Variability in Observations and Global Climate Models

Science.gov (United States)

Badr, Hamada S.; Dezfuli, Amin K.; Zaitchik, Benjamin F.; Peters-Lidard, Christa D.

2016-01-01

Many studies have documented dramatic climatic and environmental changes that have affected Africa over different time scales. These studies often raise questions regarding the spatial extent and regional connectivity of changes inferred from observations and proxies and/or derived from climate models. Objective regionalization offers a tool for addressing these questions. To demonstrate this potential, applications of hierarchical climate regionalizations of Africa using observations and GCM historical simulations and future projections are presented. First, Africa is regionalized based on interannual precipitation variability using Climate Hazards Group Infrared Precipitation with Stations (CHIRPS) data for the period 1981-2014. A number of data processing techniques and clustering algorithms are tested to ensure a robust definition of climate regions. These regionalization results highlight the seasonal and even month-to-month specificity of regional climate associations across the continent, emphasizing the need to consider time of year as well as research question when defining a coherent region for climate analysis. CHIRPS regions are then compared to those of five GCMs for the historic period, with a focus on boreal summer. Results show that some GCMs capture the climatic coherence of the Sahel and associated teleconnections in a manner that is similar to observations, while other models break the Sahel into uncorrelated subregions or produce a Sahel-like region of variability that is spatially displaced from observations. Finally, shifts in climate regions under projected twenty-first-century climate change for different GCMs and emissions pathways are examined. A projected change is found in the coherence of the Sahel, in which the western and eastern Sahel become distinct regions with different teleconnections. This pattern is most pronounced in high-emissions scenarios.
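Objective regionalization of the kind described above typically clusters grid-cell time series under a correlation-based distance, so that cells with coherent interannual variability end up in one region. A small self-contained sketch follows; the toy average-linkage routine and synthetic "precipitation" series are illustrative, not the study's method or data.

```python
import numpy as np

def correlation_distance(ts):
    """ts: (n_cells, n_times). Distance = 1 - Pearson correlation."""
    return 1.0 - np.corrcoef(ts)

def agglomerate(dist, n_clusters):
    """Tiny average-linkage agglomerative clustering on a distance matrix."""
    clusters = [[i] for i in range(len(dist))]
    while len(clusters) > n_clusters:
        best, pair = np.inf, None
        for a in range(len(clusters)):
            for b in range(a + 1, len(clusters)):
                d = np.mean([dist[i, j] for i in clusters[a] for j in clusters[b]])
                if d < best:
                    best, pair = d, (a, b)
        a, b = pair
        clusters[a] += clusters.pop(b)  # merge the closest pair
    return clusters

# toy series: two coherent "regions" of 4 cells, each sharing a seasonal signal
rng = np.random.default_rng(5)
t = np.arange(240)
sig1, sig2 = np.sin(2 * np.pi * t / 12), np.cos(2 * np.pi * t / 12)
ts = np.vstack([sig1 + 0.3 * rng.normal(size=(4, 240)),
                sig2 + 0.3 * rng.normal(size=(4, 240))])
clusters = agglomerate(correlation_distance(ts), n_clusters=2)
print(sorted(sorted(c) for c in clusters))  # cells 0-3 and 4-7 recovered
```

In a real regionalization the rows of `ts` would be detrended, seasonally stratified precipitation anomalies, and the dendrogram cut level (here `n_clusters`) would be chosen by the coherence criteria the study tests.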

4. Modeling Short-Range Soil Variability and its Potential Use in Variable-Rate Treatment of Experimental Plots

Directory of Open Access Journals (Sweden)

A Moameni

2011-02-01

Full Text Available In Iran, the experimental plots under fertilizer trials are managed in such a way that the whole plot area uniformly receives agricultural inputs. This could lead to biased research results and hence to suppression of the efforts made by the researchers. This research was conducted in a selected site belonging to the Gonbad Agricultural Research Station, located in the semiarid region, northeastern Iran. The aim was to characterize the short-range spatial variability of the inherent and management-dependent soil properties and to determine if this variation is large and can be managed at practical scales. The soils were sampled using a grid 55 m apart. In total, 100 composite soil samples were collected from topsoil (0-30 cm and were analyzed for calcium carbonate equivalent, organic carbon, clay, available phosphorus, available potassium, iron, copper, zinc and manganese. Descriptive statistics were applied to check data trends. Geostatistical analysis was applied to variography, model fitting and contour mapping. Sampling at 55 m made it possible to split the area of the selected experimental plot into relatively uniform areas that allow application of agricultural inputs with variable rates. Keywords: Short-range soil variability, Within-field soil variability, Interpolation, Precision agriculture, Geostatistics
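The variography step mentioned above starts from an empirical semivariogram, to which a model (e.g. spherical or exponential) is then fitted for interpolation. A numpy sketch of the classical (Matheron) estimator on a toy field follows; the bin edges and the synthetic spatial trend are illustrative assumptions.

```python
import numpy as np

def empirical_semivariogram(coords, values, lag_edges):
    """Classical (Matheron) semivariance per distance-lag bin:
    gamma(h) = mean of 0.5 * (z_i - z_j)^2 over pairs whose separation
    distance falls in [lo, hi)."""
    diffs = coords[:, None, :] - coords[None, :, :]
    dist = np.sqrt((diffs ** 2).sum(-1))
    sq = 0.5 * (values[:, None] - values[None, :]) ** 2
    iu = np.triu_indices(len(values), k=1)      # each pair once
    dist, sq = dist[iu], sq[iu]
    return np.array([sq[(dist >= lo) & (dist < hi)].mean()
                     for lo, hi in zip(lag_edges[:-1], lag_edges[1:])])

# toy field on a 10 x 10 grid: a smooth spatial trend plus noise
rng = np.random.default_rng(6)
xy = np.array([(i, j) for i in range(10) for j in range(10)], dtype=float)
z = 0.2 * (xy[:, 0] + xy[:, 1]) + 0.1 * rng.normal(size=100)
gamma = empirical_semivariogram(xy, z, lag_edges=np.array([0, 2, 4, 6, 8]))
print(gamma[0] < gamma[-1])  # semivariance grows with lag for a correlated field
```

The nugget, sill and range read off the fitted curve are what determine whether a property varies enough over short distances to justify variable-rate treatment.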

5. Simulation of heart rate variability model in a network

Science.gov (United States)

Cascaval, Radu C.; D'Apice, Ciro; D'Arienzo, Maria Pia

2017-07-01

We consider a 1-D model for the simulation of the blood flow in the cardiovascular system. As inflow condition we consider a model for the aortic valve. The opening and closing of the valve is dynamically determined by the pressure difference between the left ventricular and aortic pressures. At the outflow we impose a peripheral resistance model. To approximate the solution we use a numerical scheme based on the discontinuous Galerkin method. We also consider a variation in heart rate and terminal reflection coefficient due to monitoring of the pressure in the network.

6. Importance of predictor variables for models of chemical function

Data.gov (United States)

U.S. Environmental Protection Agency — Importance of random forest predictors for all classification models of chemical function. This dataset is associated with the following publication: Isaacs , K., M....

7. Holomorphic variables in magnetized brane models with continuous Wilson lines

CERN Document Server

Camara, Pablo G; Dudas, Emilian

2010-01-01

We analyze the action of the target-space modular group in toroidal type IIB orientifold compactifications with magnetized D-branes and continuous Wilson lines. The transformation of matter fields agrees with that of twisted fields in heterotic compactifications, constituting a check of type I/heterotic duality. We identify the holomorphic N = 1 variables for these compactifications. Matter fields and closed string moduli are both redefined by open string moduli. The redefinition of matter fields can be read directly from the perturbative Yukawa couplings, whereas closed string moduli redefinitions are obtained from D-brane instanton superpotential couplings. The resulting expressions reproduce and generalize, in the presence of internal magnetic fields, previous results in the literature.

8. Latent variable modeling and its implications for institutional review board review: variables that delay the reviewing process.

Science.gov (United States)

Tzeng, Dong-Sheng; Wu, Yi-Chang; Hsu, Jane-Yi

2015-08-27

To investigate the factors related to approval after review by an Institutional Review Board (IRB), a structural equation model was used to analyze the latent variables 'investigators', 'vulnerability' and 'review process' for 221 proposals submitted to our IRB. The vulnerability factor included vulnerable cases, and studies that involved drug tests and genetic analyses. The principal investigator (PI) factor included the license level of the PI and whether they belonged to our institution. The review factor included administration time, total review time, and revision frequency. The revision frequency and total review time influenced the efficiency of review. The latent review-process variable was the most important factor mediating the effect of the PI and vulnerability factors on IRB approval. Local PI status, moderated by genetic study involvement and revision frequency, affected the review process and mediated non-approval. Better guidance of the investigators and reviewers might improve the efficiency with which IRBs function.

9. The necessity of connection structures in neural models of variable binding.

Science.gov (United States)

van der Velde, Frank; de Kamps, Marc

2015-08-01

In his review of neural binding problems, Feldman (Cogn Neurodyn 7:1-11, 2013) addressed two types of models as solutions of (novel) variable binding. The one type uses labels such as phase synchrony of activation. The other ('connectivity based') type uses dedicated connections structures to achieve novel variable binding. Feldman argued that label (synchrony) based models are the only possible candidates to handle novel variable binding, whereas connectivity based models lack the flexibility required for that. We argue and illustrate that Feldman's analysis is incorrect. Contrary to his conclusion, connectivity based models are the only viable candidates for models of novel variable binding because they are the only type of models that can produce behavior. We will show that the label (synchrony) based models analyzed by Feldman are in fact examples of connectivity based models. Feldman's analysis that novel variable binding can be achieved without existing connection structures seems to result from analyzing the binding problem in a wrong frame of reference, in particular in an outside instead of the required inside frame of reference. Connectivity based models can be models of novel variable binding when they possess a connection structure that resembles a small-world network, as found in the brain. We will illustrate binding with this type of model with episode binding and the binding of words, including novel words, in sentence structures.

10. A Simple Model of the Variability of Soil Depths

Directory of Open Access Journals (Sweden)

Fang Yu

2017-06-01

Full Text Available Soil depth tends to vary from a few centimeters to several meters, depending on many natural and environmental factors. We hypothesize that the cumulative effect of these factors on soil depth, which is chiefly dependent on the process of biogeochemical weathering, is particularly affected by soil porewater (i.e., solute transport) and infiltration from the land surface. Taking into account evidence for a non-Gaussian distribution of rock weathering rates, we propose a simple mathematical model to describe the relationship between soil depth and infiltration flux. The model was tested using several areas in mostly semi-arid climate zones. The application of this model demonstrates the use of fundamental principles of physics to quantify the coupled effects of the five principal soil-forming factors of Dokuchaev.

11. Perturbative corrections for approximate inference in gaussian latent variable models

DEFF Research Database (Denmark)

Opper, Manfred; Paquet, Ulrich; Winther, Ole

2013-01-01

orders, corrections of increasing polynomial complexity can be applied to the approximation. The second order provides a correction in quadratic time, which we apply to an array of Gaussian process and Ising models. The corrections generalize to arbitrarily complex approximating families, which we...... illustrate on tree-structured Ising model approximations. Furthermore, they provide a polynomial-time assessment of the approximation error. We also provide both theoretical and practical insights on the exactness of the EP solution. © 2013 Manfred Opper, Ulrich Paquet and Ole Winther....

12. Perturbative corrections for approximate inference in gaussian latent variable models

DEFF Research Database (Denmark)

Opper, Manfred; Paquet, Ulrich; Winther, Ole

2013-01-01

Expectation Propagation (EP) provides a framework for approximate inference. When the model under consideration is over a latent Gaussian field, with the approximation being Gaussian, we show how these approximations can systematically be corrected. A perturbative expansion is made of the exact b...... illustrate on tree-structured Ising model approximations. Furthermore, they provide a polynomial-time assessment of the approximation error. We also provide both theoretical and practical insights on the exactness of the EP solution. © 2013 Manfred Opper, Ulrich Paquet and Ole Winther....

13. On the significance of contaminant plume-scale and dose-response models in defining hydrogeological characterization needs

Science.gov (United States)

de Barros, F.; Rubin, Y.; Maxwell, R.; Bai, H.

2007-12-01

Defining rational and effective hydrogeological data acquisition strategies is of crucial importance since financial resources available for such efforts are always limited. Usually such strategies are developed with the goal of reducing uncertainty, but less often are they developed in the context of the impacts of uncertainty. This paper presents an approach for determining site characterization needs based on human health risk factors. The main challenge is in striking a balance between improved definition of hydrogeological, behavioral and physiological parameters. Striking this balance can provide clear guidance on setting priorities for data acquisition and for better estimating adverse health effects in humans. This paper addresses this challenge through theoretical developments and numerical testing. We will report on a wide range of factors that affect the site characterization needs, including the contaminant plume's dimensions, travel distances and other length scales that characterize the transport problem, as well as health risk models. We introduce a new graphical tool that allows one to investigate the relative impact of hydrogeological and physiological parameters on risk. Results show that the impact of uncertainty reduction in the risk-related parameters decreases with increasing distance from the contaminant source. Also, results indicate that human health risk becomes less sensitive to hydrogeological measurements when dealing with ergodic plumes. This indicates that under ergodic conditions, uncertainty reduction in human health risk may benefit more from a better understanding of the physiological component than from a detailed hydrogeological characterization.

14. Modeling of carbon sequestration in coal-beds: A variable saturated simulation

International Nuclear Information System (INIS)

Liu Guoxiang; Smirnov, Andrei V.

2008-01-01

Storage of carbon dioxide in deep coal seams is a profitable method to reduce the concentration of greenhouse gases in the atmosphere, while methane can be extracted as a byproduct during carbon dioxide injection into the coal seam. In this procedure, the key element is to keep the carbon dioxide in the coal seam, without escape, over the long term. This depends on many factors, such as the properties of the coal basin, fracture state, phase equilibrium, etc., and especially the porosity, permeability and saturation of the coal seam. In this paper, a variable saturation model was developed to predict the capacity of carbon dioxide sequestration and coal-bed methane recovery. This variable saturation model can be used to track the saturation variability with the changes in partial pressure caused by carbon dioxide injection. Saturation variability is a key factor in predicting the capacity of carbon dioxide storage and methane recovery. Based on this variable saturation model, a set of related variables, including capillary pressure, relative permeability, porosity, a coupled adsorption model, and concentration and temperature equations, were solved. From the results of the simulation, historical data agree with the variable saturation model as well as with the adsorption model constructed from Langmuir equations. As an example, carbon dioxide sequestration in the Appalachian basin was modeled in this paper. The results of the study and the developed models can provide projections for CO2 sequestration and methane recovery in coal-beds within different regional specifics.
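The adsorption component described above is typically built from Langmuir-type isotherms. A minimal sketch, assuming the extended Langmuir form for binary CO2/CH4 competition; the Langmuir volume and pressure constants below are illustrative values, not parameters from the paper:

```python
# Sketch of the Langmuir adsorption component of a coal-seam CO2/CH4 model.
# The extended Langmuir isotherm is a standard form for binary competitive
# adsorption; the VL/PL constants here are illustrative, not fitted values.

def extended_langmuir(p, VL, PL):
    """Adsorbed volume of each gas for partial pressures p (MPa).

    p, VL, PL are dicts keyed by gas name; VL is the Langmuir volume
    (m^3/tonne) and PL the Langmuir pressure (MPa).
    """
    denom = 1.0 + sum(p[g] / PL[g] for g in p)
    return {g: VL[g] * (p[g] / PL[g]) / denom for g in p}

# Illustrative constants: coal typically adsorbs roughly twice as much
# CO2 as CH4 at a given pressure.
VL = {"CO2": 30.0, "CH4": 15.0}
PL = {"CO2": 2.0, "CH4": 3.0}

before = extended_langmuir({"CO2": 0.1, "CH4": 3.0}, VL, PL)
after = extended_langmuir({"CO2": 4.0, "CH4": 3.0}, VL, PL)  # CO2 injected

# Injecting CO2 raises the stored CO2 and displaces adsorbed methane,
# which is the mechanism behind enhanced coal-bed methane recovery.
```

The competition through the shared denominator is what couples injection pressure to methane release in such models.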

15. Solute transport modelling with the variable temporally dependent ...

Pintu Das

2018-02-07

Feb 7, 2018 ... In this present study, analytical and numerical solutions are obtained for solute transport modelling in homogeneous ... [Figure 3: comparison of the concentration distribution for a sinusoidal velocity pattern; analytical vs. numerical solution.]

16. A Model of Human Variability in Viable Ship Design

Science.gov (United States)

2014-02-21

each individual with a shared awareness of who knows what (Wegner, 1987) ... processes and outcomes through which groups acquire, share, and ... (2004). Effects of adaptive behaviors and shared mental models on control crew performance. Management Science, 50, 1534-1544. Wegner, D. M. (1987

17. Modeling Selected Climatic Variables in Ibadan, Oyo State, Nigeria ...

African Journals Online (AJOL)

PROF. O. E. OSUAGWU

2013-09-01

Sep 1, 2013 ... The aim of this study was to fit the modified generalized Burr density function to total rainfall and temperature data obtained from the meteorological unit in the Department of Environmental Modelling and Management of the Forestry Research Institute of Nigeria (FRIN) in Ibadan, Oyo State, Nigeria.

18. Rasch's model for reading speed with manifest explanatory variables

NARCIS (Netherlands)

Jansen, G.G.H.

In educational and psychological measurement we find the distinction between speed and power tests. Although most tests are partially speeded, the speed element is usually neglected. Here we consider a latent trait model developed by Rasch for the response time on a (set of) pure speed test(s),

19. QUANTIFYING SUBGRID POLLUTANT VARIABILITY IN EULERIAN AIR QUALITY MODELS

Science.gov (United States)

In order to properly assess human risk due to exposure to hazardous air pollutants or air toxics, detailed information is needed on the location and magnitude of ambient air toxic concentrations. Regional scale Eulerian air quality models are typically limited to relatively coar...

20. Variable thickness transient groundwater flow model theory and numerical implementation

International Nuclear Information System (INIS)

Kipp, K.L.; Reisenauer, A.E.; Cole, C.R.; Bryan, C.A.

1976-01-01

Modeling of radionuclide movement in the groundwater system beneath the Hanford Reservation requires mathematical simulation of the two-dimensional flow in the unconfined aquifer. This was accomplished using the nonlinear, transient Boussinesq equation with appropriate initial and boundary conditions, including measured Columbia River stages and rates of wastewater disposal to the ground. The heterogeneous permeability (hydraulic conductivity) distribution was derived by solution of the Boussinesq equation along instantaneous streamtubes of flow employing a measured water table surface and a limited number of field-measured hydraulic conductivity values. Use of a successive line over-relaxation technique with unequal time steps resulted in a more rapid convergence of the numerical solution than with previous techniques. The model was used to simulate the water table changes for the period 1968 through 1973 using known inputs and boundary conditions. A comparison of calculated and measured water table elevations was made at specific well locations and the quality of the verification simulation was evaluated using a data retrieval and display system. Agreement between the model results and measured data was good over two-thirds of the Hanford Reservation. The capability of the model to simulate flow with time-varying boundary conditions, complex boundary shapes, and a heterogeneous distribution of aquifer properties was demonstrated
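The successive over-relaxation family of iterations used above can be illustrated on a much simpler problem. A minimal sketch, assuming a steady, linearized head field (Laplace's equation) with Dirichlet boundaries; the report's solver is the line variant (SLOR) on the transient Boussinesq equation, so this point-SOR example only shows the relaxation idea:

```python
# Point successive over-relaxation (SOR) for a steady, linearized
# groundwater-head problem on a square grid. Boundary heads, grid size,
# and the relaxation factor omega are illustrative choices.

def solve_head(n=20, omega=1.7, tol=1e-8, max_iter=10_000):
    # Dirichlet boundary: head falls linearly from 10 m (west) to 5 m (east).
    bc = [10.0 - 5.0 * j / (n - 1) for j in range(n)]
    h = [[0.0] * n for _ in range(n)]
    for j in range(n):
        h[0][j] = h[-1][j] = bc[j]          # north/south boundaries
    for i in range(n):
        h[i][0], h[i][-1] = bc[0], bc[-1]   # west/east boundaries
    for _ in range(max_iter):
        max_change = 0.0
        for i in range(1, n - 1):
            for j in range(1, n - 1):
                new = (1 - omega) * h[i][j] + omega * 0.25 * (
                    h[i - 1][j] + h[i + 1][j] + h[i][j - 1] + h[i][j + 1])
                max_change = max(max_change, abs(new - h[i][j]))
                h[i][j] = new
        if max_change < tol:
            break
    return h

h = solve_head()
# The converged head field reproduces the linear west-to-east gradient.
```

Line over-relaxation differs only in updating a whole grid row at once via a tridiagonal solve, which is what gave the report its faster convergence.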

1. Solute transport modelling with the variable temporally dependent ...

Pintu Das

2018-02-07

Feb 7, 2018 ... Abstract. In this present study, analytical and numerical solutions are obtained for solute transport modelling in a homogeneous semi-infinite porous medium. The dispersion coefficient is assumed to be the initial dispersion, and the velocity is assumed to be temporally dependent with an initial seepage velocity. Also ...

2. Models of Solar Irradiance Variability and the Instrumental Temperature Record

Science.gov (United States)

Marcus, S. L.; Ghil, M.; Ide, K.

1998-01-01

The effects of decade-to-century (Dec-Cen) variations in total solar irradiance (TSI) on global mean surface temperature Ts during the pre-Pinatubo instrumental era (1854-1991) are studied by using two different proxies for TSI and a simplified version of the IPCC climate model.
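The kind of simplified climate model referred to above can be sketched as a zero-dimensional energy balance. This is a generic EBM, not the study's specific model; the albedo, effective emissivity, and heat capacity below are illustrative values:

```python
# Zero-dimensional energy-balance sketch: C dT/dt = S(1-a)/4 - eps*sigma*T^4.
# All constants are illustrative; eps crudely stands in for the greenhouse
# effect so that the equilibrium lands near the observed ~288 K.

SIGMA = 5.670e-8   # Stefan-Boltzmann constant, W m^-2 K^-4
ALBEDO = 0.30      # planetary albedo
EPS = 0.61         # effective emissivity
C = 2.0e8          # ocean mixed-layer heat capacity, J m^-2 K^-1

def equilibrium_T(tsi):
    return ((tsi * (1 - ALBEDO) / 4) / (EPS * SIGMA)) ** 0.25

def integrate(tsi, T0=288.0, years=50, dt=86_400.0):
    T = T0
    for _ in range(int(years * 365)):   # forward Euler, one step per day
        T += dt * (tsi * (1 - ALBEDO) / 4 - EPS * SIGMA * T ** 4) / C
    return T

T_base = equilibrium_T(1361.0)              # ~288 K for these constants
T50 = integrate(1361.0)                     # Euler run reaches the same value
dT = equilibrium_T(1361.0 * 1.001) - T_base # ~0.07 K for a 0.1% TSI change
```

The small equilibrium response to a solar-cycle-sized TSI change is why proxy choice and ocean heat capacity matter so much when attributing Dec-Cen temperature variations.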

3. Two-Step Estimation of Models Between Latent Classes and External Variables.

Science.gov (United States)

Bakk, Zsuzsa; Kuha, Jouni

2017-11-17

We consider models which combine latent class measurement models for categorical latent variables with structural regression models for the relationships between the latent classes and observed explanatory and response variables. We propose a two-step method of estimating such models. In its first step, the measurement model is estimated alone, and in the second step the parameters of this measurement model are held fixed when the structural model is estimated. Simulation studies and applied examples suggest that the two-step method is an attractive alternative to existing one-step and three-step methods. We derive estimated standard errors for the two-step estimates of the structural model which account for the uncertainty from both steps of the estimation, and show how the method can be implemented in existing software for latent variable modelling.
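The two-step logic above can be sketched on a toy model: a two-component Gaussian mixture plays the measurement model, and a logistic regression of class membership on a covariate plays the structural model. This is only a simplified stand-in for the paper's latent class models; all parameter values are invented for the simulation:

```python
import math, random
random.seed(1)

# Toy data: covariate z drives class c through a logistic structural model
# (true slope 1.5); y is the indicator generated by the measurement model.
N = 1000
data = []
for _ in range(N):
    z = random.gauss(0, 1)
    c = 1 if random.random() < 1 / (1 + math.exp(-1.5 * z)) else 0
    y = random.gauss([-2.0, 2.0][c], 1.0)
    data.append((z, y))

def phi(y, mu):  # normal density with unit variance, fixed for simplicity
    return math.exp(-0.5 * (y - mu) ** 2) / math.sqrt(2 * math.pi)

# Step 1: estimate the measurement model (component means) alone, by EM.
mu = [-1.0, 1.0]
for _ in range(40):
    resp = []
    for z, y in data:
        resp.append(phi(y, mu[1]) / (phi(y, mu[0]) + phi(y, mu[1])))
    for k in (0, 1):
        wk = [(p1 if k else 1 - p1) for p1 in resp]
        mu[k] = sum(w * y for w, (z, y) in zip(wk, data)) / sum(wk)

# Step 2: hold mu fixed; estimate the structural model
# P(class 1 | z) = sigmoid(a + b*z) by gradient ascent on the log-likelihood.
a = b = 0.0
for _ in range(150):
    ga = gb = 0.0
    for z, y in data:
        p = 1 / (1 + math.exp(-(a + b * z)))
        f0, f1 = phi(y, mu[0]), phi(y, mu[1])
        g = (f1 - f0) * p * (1 - p) / ((1 - p) * f0 + p * f1)
        ga += g; gb += g * z
    a += 0.5 * ga / N; b += 0.5 * gb / N
# mu recovers roughly (-2, 2) and b approaches the structural effect (~1.5).
```

Because step 2 never revisits the measurement parameters, the structural estimate cannot distort the meaning of the latent classes, which is the practical appeal of the two-step approach.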

4. Analytical Model for LLC Resonant Converter With Variable Duty-Cycle Control

DEFF Research Database (Denmark)

Shen, Yanfeng; Wang, Huai; Blaabjerg, Frede

2016-01-01

In LLC resonant converters, the variable duty-cycle control is usually combined with a variable frequency control to widen the gain range, improve the light-load efficiency, or suppress the inrush current during start-up. However, a proper analytical model for the variable duty-cycle controlled LLC converter is still not available due to the complexity of operation modes and the nonlinearity of steady-state equations. This paper makes the effort to develop an analytical model for the LLC converter with variable duty-cycle control. All possible operation modes and critical operation characteristics are identified and discussed. The proposed model enables a better understanding of the operation characteristics and fast parameter design of the LLC converter, which otherwise cannot be achieved by the existing simulation based methods and numerical models. The results obtained from the proposed model...

5. A step-indexed Kripke model of hidden state via recursive properties on recursively defined metric spaces

DEFF Research Database (Denmark)

Birkedal, Lars; Schwinghammer, Jan; Støvring, Kristian

2010-01-01

for Charguéraud and Pottier’s type and capability system including frame and anti-frame rules, based on the operational semantics and step-indexed heap relations. The worlds are constructed as a recursively defined predicate on a recursively defined metric space, which provides a considerably simpler...

6. Biofidelic Human Activity Modeling and Simulation with Large Variability

Science.gov (United States)

2014-11-25

exact match or a close representation. Efforts were made to ensure that the activity models can be integrated into widely used game engines and image generators. ABOUT THE AUTHORS: Dr. John Camp is a computer research scientist employed by AFRL. ... Modeling and simulation (M&S) has been increasingly used in simulation-based training and virtual reality (VR). However, human M&S technology currently used in various

7. A Variable Flow Modelling Approach To Military End Strength Planning

Science.gov (United States)

2016-12-01

behaviours of a system and how the behaviours are influenced by ... Wang describes Markov chain theory as a mathematical tool used to investigate the dynamic behaviours of a system in discrete time ...

8. Validity and Variability of Animal Models Used in Dentistry

Directory of Open Access Journals (Sweden)

2015-01-01

Full Text Available Background: Animal models have contributed to the dental literature for several decades. The major aim of this review was to outline tooth development stages in mice and to attempt to address potential strain differences. A literature review was performed using electronic and hand-searching methods for animal models in dentistry, with special emphasis on mice. Root canal development in both the C57BL/6 and BALB/c strains was investigated. There are a number of published reports regarding the morphogenesis, molecular reactions and maturation stages of mice molars. We observed some similarity between mouse and human odontogenesis as a primary factor for tooth development. Although mice may present some technical challenges, including the small size of the mouse molars, they have similar stages as humans for molar development, and can be used to monitor the effects of various biomaterials, regeneration, and remodeling. Thus, mice provide an ideal alternative model to study developmental and regenerative processes in dentistry.

9. AeroPropulsoServoElasticity: Dynamic Modeling of the Variable Cycle Propulsion System

Science.gov (United States)

Kopasakis, George

2012-01-01

This presentation was made at the 2012 Fundamental Aeronautics Program Technical Conference and covers research work on the Dynamic Modeling of the Variable Cycle Propulsion System that was done under the Supersonics Project, in the area of AeroPropulsoServoElasticity. The presentation covers the objective for the propulsion system dynamic modeling work, followed by the work that has been done so far to model the Variable Cycle Engine, the modeling of the inlet and the nozzle, the modeling of the effects of flow distortion, and finally some concluding remarks and future plans.

10. Replicates in high dimensions, with applications to latent variable graphical models.

Science.gov (United States)

Tan, Kean Ming; Ning, Yang; Witten, Daniela M; Liu, Han

2016-12-01

In classical statistics, much thought has been put into experimental design and data collection. In the high-dimensional setting, however, experimental design has been less of a focus. In this paper, we stress the importance of collecting multiple replicates for each subject in this setting. We consider learning the structure of a graphical model with latent variables, under the assumption that these variables take a constant value across replicates within each subject. By collecting multiple replicates for each subject, we are able to estimate the conditional dependence relationships among the observed variables given the latent variables. To test the null hypothesis of conditional independence between two observed variables, we propose a pairwise decorrelated score test. Theoretical guarantees are established for parameter estimation and for this test. We show that our proposal is able to estimate latent variable graphical models more accurately than some existing proposals, and apply the proposed method to a brain imaging dataset.
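The core intuition above — a latent variable that is constant across replicates can be cancelled within subjects — can be shown in a few lines. This toy simulation is only that intuition, not the paper's pairwise decorrelated score test; all distributions are invented:

```python
import random
random.seed(0)

def corr(a, b):
    n = len(a)
    ma, mb = sum(a) / n, sum(b) / n
    num = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    da = sum((x - ma) ** 2 for x in a) ** 0.5
    db = sum((y - mb) ** 2 for y in b) ** 0.5
    return num / (da * db)

# Two observed variables share a latent subject-level factor u; u is fixed
# across the two replicates of each subject, so replicate differences
# cancel it and reveal the (absent) conditional dependence.
x_raw, y_raw, x_diff, y_diff = [], [], [], []
for _ in range(4000):
    u = random.gauss(0, 1)                 # latent factor, constant per subject
    x1 = u + random.gauss(0, 1); x2 = u + random.gauss(0, 1)  # replicates of x
    y1 = u + random.gauss(0, 1); y2 = u + random.gauss(0, 1)  # replicates of y
    x_raw.append(x1); y_raw.append(y1)
    x_diff.append(x1 - x2); y_diff.append(y1 - y2)

r_raw = corr(x_raw, y_raw)     # ~0.5: marginal correlation induced by u
r_diff = corr(x_diff, y_diff)  # ~0.0: differencing replicates removes u
```

The same cancellation is what lets the graphical-model estimate target conditional (rather than marginal) dependence among the observed variables.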

11. Finite analytic method for modeling variably saturated flows.

Science.gov (United States)

Zhang, Zaiyong; Wang, Wenke; Gong, Chengcheng; Yeh, Tian-Chyi Jim; Wang, Zhoufeng; Wang, Yu-Li; Chen, Li

2018-04-15

This paper develops a finite analytic method (FAM) for solving the two-dimensional Richards' equation. The FAM incorporates the analytic solution in local elements to formulate the algebraic representation of the partial differential equation of unsaturated flow so as to effectively control both numerical oscillation and dispersion. The FAM model is then verified using four examples, in which the numerical solutions are compared with analytical solutions, solutions from VSAFT2, and observational data from a field experiment. These numerical experiments show that the method is not only accurate but also efficient, when compared with other numerical methods. Copyright © 2017 Elsevier B.V. All rights reserved.

12. Solar spectral irradiance variability in cycle 24: observations and models

Science.gov (United States)

Marchenko, Sergey V.; DeLand, Matthew T.; Lean, Judith L.

2016-12-01

Utilizing the excellent stability of the Ozone Monitoring Instrument (OMI), we characterize both short-term (solar rotation) and long-term (solar cycle) changes of the solar spectral irradiance (SSI) between 265 and 500 nm during the ongoing cycle 24. We supplement the OMI data with concurrent observations from the Global Ozone Monitoring Experiment-2 (GOME-2) and Solar Radiation and Climate Experiment (SORCE) instruments and find fair-to-excellent, depending on wavelength, agreement among the observations, and predictions of the Naval Research Laboratory Solar Spectral Irradiance (NRLSSI2) and Spectral And Total Irradiance REconstruction for the Satellite era (SATIRE-S) models.

13. Solar spectral irradiance variability in cycle 24: observations and models

Directory of Open Access Journals (Sweden)

Marchenko Sergey V.

2016-01-01

Full Text Available Utilizing the excellent stability of the Ozone Monitoring Instrument (OMI), we characterize both short-term (solar rotation) and long-term (solar cycle) changes of the solar spectral irradiance (SSI) between 265 and 500 nm during the ongoing cycle 24. We supplement the OMI data with concurrent observations from the Global Ozone Monitoring Experiment-2 (GOME-2) and Solar Radiation and Climate Experiment (SORCE) instruments and find fair-to-excellent, depending on wavelength, agreement among the observations, and predictions of the Naval Research Laboratory Solar Spectral Irradiance (NRLSSI2) and Spectral And Total Irradiance REconstruction for the Satellite era (SATIRE-S) models.

14. Influence of variable selection on partial least squares discriminant analysis models for explosive residue classification

Energy Technology Data Exchange (ETDEWEB)

De Lucia, Frank C., E-mail: frank.delucia@us.army.mil; Gottfried, Jennifer L.

2011-02-15

Using a series of thirteen organic materials that includes novel high-nitrogen energetic materials, conventional organic military explosives, and benign organic materials, we have demonstrated the importance of variable selection for maximizing residue discrimination with partial least squares discriminant analysis (PLS-DA). We built several PLS-DA models using different variable sets based on laser induced breakdown spectroscopy (LIBS) spectra of the organic residues on an aluminum substrate under an argon atmosphere. The model classification results for each sample are presented and the influence of the variables on these results is discussed. We found that using the whole spectra as the data input for the PLS-DA model gave the best results. However, variables due to the surrounding atmosphere and the substrate contribute to discrimination when the whole spectra are used, indicating this may not be the most robust model. Further iterative testing with additional validation data sets is necessary to determine the most robust model.
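The PLS-DA mechanics behind the study can be sketched in miniature. A minimal one-component sketch on invented "spectra" — classes differ in five analyte channels while a sixth channel mimics a substrate line common to both — far simpler than the thirteen-material, full-spectrum models of the paper:

```python
import random
random.seed(0)

# Synthetic 6-channel "spectra": channels 0-4 differ by class (analyte
# lines); channel 5 is a substrate line shared by both classes.
def sample(cls):
    base = ([1.0] * 5 if cls else [0.4] * 5) + [0.8]
    return [b + random.gauss(0, 0.15) for b in base]

X = [sample(c) for c in [0, 1] * 40]
y = [c for c in [0, 1] * 40]
n, p = len(X), len(X[0])

# Center the columns and the class label.
col_mean = [sum(row[j] for row in X) / n for j in range(p)]
y_mean = sum(y) / n
Xc = [[row[j] - col_mean[j] for j in range(p)] for row in X]
yc = [yi - y_mean for yi in y]

# One PLS component: weights w proportional to Xc'yc, scores t = Xc w,
# classification by the sign of the score.
w = [sum(Xc[i][j] * yc[i] for i in range(n)) for j in range(p)]
scores = [sum(Xc[i][j] * w[j] for j in range(p)) for i in range(n)]
pred = [1 if t > 0 else 0 for t in scores]
accuracy = sum(1 for a, b in zip(pred, y) if a == b) / n

# The substrate channel carries almost no class information, so its weight
# is small -- the analogue of the paper's point that substrate/atmosphere
# variables can enter the model without being robust discriminators.
```

Inspecting the weight vector is the simplest form of the variable-influence analysis the abstract describes.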

15. Impulsive synchronization and parameter mismatch of the three-variable autocatalator model

International Nuclear Information System (INIS)

Li, Yang; Liao, Xiaofeng; Li, Chuandong; Huang, Tingwen; Yang, Degang

2007-01-01

The synchronization problems of the three-variable autocatalator model via an impulsive control approach are investigated, and several theorems on the stability of impulsive control systems are presented. These theorems are then used to find the conditions under which the three-variable autocatalator model can be asymptotically controlled to the equilibrium point. This Letter derives some sufficient conditions for the stabilization and synchronization of a three-variable autocatalator model via impulsive control with varying impulsive intervals. Furthermore, we address chaos quasi-synchronization in the presence of single-parameter mismatch. To illustrate the effectiveness of the new scheme, several numerical examples are given.
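The impulsive synchronization idea can be sketched numerically. Because the three-variable autocatalator is numerically stiff, this sketch substitutes the Lorenz system as the chaotic plant; the scheme is the same: drive and response evolve freely between impulses, and at each impulse instant the response state is pulled toward the drive state. Step size, impulse interval, and gain are illustrative choices:

```python
# Impulsive synchronization of a chaotic drive-response pair (Lorenz system
# standing in for the stiff autocatalator; classic parameters 10, 28, 8/3).

def lorenz(s):
    x, y, z = s
    return (10.0 * (y - x), x * (28.0 - z) - y, x * y - 8.0 / 3.0 * z)

def rk4(s, dt):
    def add(a, b, h):
        return tuple(ai + h * bi for ai, bi in zip(a, b))
    k1 = lorenz(s)
    k2 = lorenz(add(s, k1, dt / 2))
    k3 = lorenz(add(s, k2, dt / 2))
    k4 = lorenz(add(s, k3, dt))
    return tuple(s[i] + dt / 6 * (k1[i] + 2 * k2[i] + 2 * k3[i] + k4[i])
                 for i in range(3))

dt, impulse_every, gain = 0.002, 50, 0.9
drive, response = (1.0, 1.0, 1.0), (8.0, -5.0, 20.0)

err0 = max(abs(d - r) for d, r in zip(drive, response))
for step in range(25_000):                  # 50 time units
    drive, response = rk4(drive, dt), rk4(response, dt)
    if (step + 1) % impulse_every == 0:     # impulsive correction
        response = tuple(r + gain * (d - r)
                         for d, r in zip(drive, response))
err = max(abs(d - r) for d, r in zip(drive, response))
# err ends many orders of magnitude below err0: the orbits synchronize
# even though control acts only at isolated instants.
```

The stability theorems in the Letter play the role of guaranteeing, analytically, that the per-interval chaotic divergence is dominated by the impulsive contraction, which this simulation only demonstrates empirically.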

16. Higher-Order Process Modeling: Product-Lining, Variability Modeling and Beyond

Directory of Open Access Journals (Sweden)

Johannes Neubauer

2013-09-01

Full Text Available We present a graphical and dynamic framework for binding and execution of business process models. It is tailored to integrate (1) ad hoc processes modeled graphically, (2) third-party services discovered in the Internet, and (3) (dynamically) synthesized process chains that solve situation-specific tasks, with the synthesis taking place not only at design time, but also at runtime. Key to our approach is the introduction of type-safe stacked second-order execution contexts that allow for higher-order process modeling. Tamed by our underlying strict service-oriented notion of abstraction, this approach is tailored also to be used by application experts with little technical knowledge: users can select, modify, construct and then pass (component) processes during process execution as if they were data. We illustrate the impact and essence of our framework along a concrete, realistic (business) process modeling scenario: the development of Springer's browser-based Online Conference Service (OCS). The most advanced feature of our new framework allows one to combine online synthesis with the integration of the synthesized process into the running application. This ability leads to a particularly flexible way of implementing self-adaption, and to a particularly concise and powerful way of achieving variability not only at design time, but also at runtime.
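The notion of passing processes "as if they were data" through stacked execution contexts can be rendered as a toy analogy in plain Python; this is only an illustration of the idea, not the cited framework's type-safe second-order contexts, and all process names are hypothetical:

```python
from typing import Callable, Dict, List

# A "process" is a first-class value; a Context is a stack of frames that
# bind process names to concrete processes, so bindings can be swapped at
# runtime and restored when a nested context ends.
Process = Callable[[str], str]

def review(paper: str) -> str:
    return f"reviewed({paper})"

def fast_track(paper: str) -> str:
    return f"fast-tracked({paper})"

class Context:
    def __init__(self) -> None:
        self.stack: List[Dict[str, Process]] = [{}]
    def push(self) -> None:                 # enter a nested execution context
        self.stack.append({})
    def pop(self) -> None:                  # leave it; outer bindings return
        self.stack.pop()
    def bind(self, name: str, proc: Process) -> None:
        self.stack[-1][name] = proc
    def call(self, name: str, arg: str) -> str:
        for frame in reversed(self.stack):  # innermost binding wins
            if name in frame:
                return frame[name](arg)
        raise KeyError(name)

ctx = Context()
ctx.bind("handle_submission", review)
first = ctx.call("handle_submission", "paper-1")
ctx.push()                                  # rebind the process at runtime
ctx.bind("handle_submission", fast_track)
second = ctx.call("handle_submission", "paper-2")
ctx.pop()
third = ctx.call("handle_submission", "paper-3")  # original binding restored
```

Runtime rebinding of whole sub-processes is the mechanism that gives such frameworks variability after deployment, not just at design time.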

17. Defining pediatric inpatient cardiology care delivery models: A survey of pediatric cardiology programs in the USA and Canada.

Science.gov (United States)

Mott, Antonio R; Neish, Steven R; Challman, Melissa; Feltes, Timothy F

2017-05-01

The treatment of children with cardiac disease is one of the most prevalent and costly pediatric inpatient conditions. The design of inpatient medical services for children admitted to and discharged from noncritical cardiology care units, however, is undefined. North American Pediatric Cardiology Programs were surveyed to define noncritical cardiac care unit models in current practice. An online survey that explored institutional and functional domains for noncritical cardiac care units was crafted. All questions were multiple-choice with comment boxes for further explanation. The survey was distributed by email four times over a 5-month period. Most programs (n = 45, 60%) exist in free-standing children's hospitals. Most programs cohort cardiac patients on noncritical cardiac care units that are restricted to cardiac patients in 39 (54%) programs or restricted to cardiac and other subspecialty patients in 23 (32%) programs. The most common frontline providers are categorical pediatric residents (n = 58, 81%) and nurse practitioners (n = 48, 67%). However, nurse practitioners are autonomous providers in only 21 (29%) programs. Only 33% of programs use a postoperative fast-track protocol. When transitioning care to referring physicians, most programs (n = 53, 72%) use facsimile to deliver pertinent patient information. Twenty-two programs (31%) use email to transition care, and eighteen (25%) programs use verbal communication. Most programs exist in free-standing children's hospitals in which the noncritical cardiac care units are in some form restricted to cardiac patients. While nurse practitioners are used on most noncritical cardiac care units, they rarely function as autonomous providers. The majority of programs in this survey do not incorporate any postoperative fast-track protocols in their practice. Given the current era of focused handoffs within hospital systems, relatively few programs utilize verbal handoffs to the referring pediatric

18. A Time-Series Water Level Forecasting Model Based on Imputation and Variable Selection Method

Directory of Open Access Journals (Sweden)

Jun-He Yang

2017-01-01

Full Text Available Reservoirs are important for households and impact the national economy. This paper proposed a time-series forecasting model, based on estimating missing values followed by variable selection, to forecast the reservoir's water level. This study collected data from the Taiwan Shimen Reservoir as well as daily atmospheric data from 2008 to 2015. The two datasets were concatenated by date ordering into an integrated research dataset. The proposed time-series forecasting model has three foci. First, this study uses five imputation methods, as well as direct deletion, to handle the missing values. Second, we identified the key variables via factor analysis and then deleted the unimportant variables sequentially via the variable selection method. Finally, the proposed model uses a Random Forest to build the forecasting model of the reservoir's water level, which is compared with the listed methods in terms of forecasting error. These experimental results indicate that the Random Forest forecasting model with variable selection has better forecasting performance than the listed models with full variables. In addition, this experiment shows that the proposed variable selection can help the forecast methods used here to improve their forecasting capability.
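The shape of the pipeline above — impute, select variables, then fit a forecaster — can be sketched on invented data. To keep the sketch dependency-free, a least-squares line stands in for the Random Forest, and a simple correlation threshold stands in for the factor-analysis selection step:

```python
import random
random.seed(3)

# Invented data: water level responds to rainfall; a second predictor is
# pure noise. All numbers are illustrative, not Shimen Reservoir data.
n = 200
rain = [random.uniform(0, 50) for _ in range(n)]            # rainfall, mm
noise_var = [random.uniform(0, 1) for _ in range(n)]        # irrelevant input
level = [240 + 0.3 * r + random.gauss(0, 1) for r in rain]  # water level, m

# 1) Mean imputation for missing rainfall records.
rain_obs = [None if i % 17 == 0 else r for i, r in enumerate(rain)]
known = [r for r in rain_obs if r is not None]
mean_rain = sum(known) / len(known)
rain_imp = [mean_rain if r is None else r for r in rain_obs]

def corr(a, b):
    ma, mb = sum(a) / len(a), sum(b) / len(b)
    num = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    da = sum((x - ma) ** 2 for x in a) ** 0.5
    db = sum((y - mb) ** 2 for y in b) ** 0.5
    return num / (da * db)

# 2) Variable selection: keep predictors well correlated with the target.
selected = [name for name, col in [("rain", rain_imp), ("noise", noise_var)]
            if abs(corr(col, level)) > 0.3]

# 3) Fit the forecaster (least squares on the selected predictor, "rain").
mx, my = sum(rain_imp) / n, sum(level) / n
slope = (sum((x - mx) * (y - my) for x, y in zip(rain_imp, level))
         / sum((x - mx) ** 2 for x in rain_imp))
intercept = my - slope * mx

def predict(r):
    return intercept + slope * r
```

Dropping the noise predictor before fitting is the step the paper credits with improving forecasting performance across methods.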

19. THE EFFECTS OF DIFFERENT MODELS OF SWIMMING TRAINING (DEFINED IN RELATION TO ANAEROBIC THRESHOLD ON THE INCREASE OF SWIM SPEED

Directory of Open Access Journals (Sweden)

Dragan Krivokapić

2007-05-01

Full Text Available On a sample of 32 fourth-grade students of Belgrade high schools, whose physical education classes were carried out at the city's swimming pools, an attempt was made to evaluate the effects of two different programmes of swimming training in different intensity zones, defined relative to the anaerobic threshold. The examinees were divided into two groups of 15 and 17 participants, who were not (according to statistics) significantly different in terms of average time and heart frequency during the 400 m swimming test, or heart frequency and time measured after 50 m at the moment of reaching the anaerobic threshold. The first training model consisted of swimming at an intensity level within the zone below the anaerobic threshold, while the second model involved occasional swimming at a higher intensity, sometimes surpassing the anaerobic threshold. The experimental programme with both sub-groups lasted 8 weeks with 3 training sessions per week, 2 of which were identical for both experimental groups, with the third one differing in swimming intensity: in the first group still in the zone below, and in the second group occasionally in the zone above, the anaerobic threshold. The amount of training and the duration were the same in both programmes. The aim of the research was to evaluate and compare the effects of the two training models, using as the basic criteria possible changes of average time and heart frequency during the 400 m swimming test, and heart frequency and time measured after 50 m at the moment of reaching the anaerobic threshold. On the basis of the statistical analysis of the obtained data, it is possible to conclude that in both experimental groups there were statistically significant changes of average values concerning all the physiological variables. Although the difference in efficiency of the applied experimental programmes is not defined, we can claim that both of the experimental

20. Spatial aggregation for crop modelling at regional scales: the effects of soil variability

Science.gov (United States)

Coucheney, Elsa; Villa, Ana; Eckersten, Henrik; Hoffmann, Holger; Jansson, Per-Erik; Gaiser, Thomas; Ewert, Franck; Lewan, Elisabet

2017-04-01

Modelling agricultural production and adaptation to the environment at regional or global scale receives much interest in the context of climate change. Process-based soil-crop models describe the flows of mass (i.e. water, carbon and nitrogen) and energy in the soil-plant-atmosphere system. As such, they represent valuable tools for predicting agricultural production in diverse agro-environmental contexts as well as for assessing impacts on the environment, e.g. leaching of nitrates, changes in soil carbon content and GHG emissions. However, their application at regional and global scales for climate change impact studies raises new challenges related to model input data, calibration and evaluation. One major concern is to take into account the spatial variability of the environmental conditions (e.g. climate, soils, management practices) used as model input, because the impacts of climate change on cropping systems depend strongly on the site conditions and properties (1). For example, climate change effects on yield can be either negative or positive depending on the soil type (2). Additionally, the use of different methods of upscaling and downscaling adds new sources of modelling uncertainty (3). In the present study, the effect of aggregating soil input data by area majority of soil mapping units was explored for spatially gridded simulations with the soil-vegetation model CoupModel for a region in Germany (North Rhine-Westphalia, NRW). The data aggregation effect (DAE) was analysed for wheat yield, water drainage, soil carbon mineralisation and nitrogen leaching below the root zone. DAE was higher for the soil C and N variables than for yield and drainage, and was strongly related to the spatial coverage of specific soils within the study region. These 'key soils' were identified by a model sensitivity analysis of the soils present in the NRW region. The spatial aggregation of the key soils additionally influenced the DAE. Our results suggest that a spatial
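The data aggregation effect described above arises whenever a nonlinear response is evaluated at the majority soil instead of averaged over all soils in a cell. A minimal sketch with an invented, saturating yield response and illustrative soil properties:

```python
# Toy illustration of the data aggregation effect (DAE). When a grid cell's
# soils are replaced by the area-majority soil type, a nonlinear response --
# here a hypothetical yield vs. plant-available water curve -- is evaluated
# at the majority soil only, biasing the cell mean. Numbers are invented.

def yield_response(awc):
    """Hypothetical yield (t/ha) vs. plant-available water capacity (mm)."""
    return 8.0 * awc / (60.0 + awc)      # saturating, hence nonlinear

soils = ["loam"] * 60 + ["sand"] * 40    # one grid cell: 60% loam, 40% sand
awc = {"loam": 180.0, "sand": 50.0}      # illustrative AWC values

true_mean = sum(yield_response(awc[s]) for s in soils) / len(soils)
majority = max(set(soils), key=soils.count)
aggregated = yield_response(awc[majority])
dae = aggregated - true_mean             # bias from area-majority aggregation
```

Because the bias depends on how different the minority soils are from the majority soil, the spatial coverage of "key soils" controls the size of the DAE, as the study reports.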

1. Research and development of models and instruments to define, measure, and improve shared information processing with government oversight agencies. An analysis of the literature, August 1990--January 1992

Energy Technology Data Exchange (ETDEWEB)

1992-12-31

This document identifies elements of sharing, plus key variables of each and their interrelationships. The document's model of sharing is intended to help management systems' users understand what sharing is and how to integrate it with information processing.

2. Models for Very Rapid High-Energy γ-Ray Variability in Blazars G. E. ...

Blazars display rapid variability across the entire electromagnetic spectrum. Variability at high energies on timescales of a few minutes has been observed for some of them, such as PKS 2155−304 (e.g., Aharonian et al. 2007). This discovery has led to the formulation of a large variety of models for non-thermal variability in ...

3. Bayesian Methods for Analyzing Structural Equation Models with Covariates, Interaction, and Quadratic Latent Variables

Science.gov (United States)

Lee, Sik-Yum; Song, Xin-Yuan; Tang, Nian-Sheng

2007-01-01

The analysis of interaction among latent variables has received much attention. This article introduces a Bayesian approach to analyze a general structural equation model that accommodates the general nonlinear terms of latent variables and covariates. This approach produces a Bayesian estimate that has the same statistical optimal properties as a…

4. Theoretical investigations of the new Cokriging method for variable-fidelity surrogate modeling

DEFF Research Database (Denmark)

Zimmermann, Ralf; Bertram, Anna

2017-01-01

Cokriging is a variable-fidelity surrogate modeling technique which emulates a target process based on the spatial correlation of sampled data of different levels of fidelity. In this work, we address two theoretical questions associated with the so-called new Cokriging method for variable fidelity...

5. Micro-macro multilevel latent class models with multiple discrete individual-level variables

NARCIS (Netherlands)

Bennink, M.; Croon, M.A.; Kroon, B.; Vermunt, J.K.

2016-01-01

An existing micro-macro method for a single individual-level variable is extended to the multivariate situation by presenting two multilevel latent class models in which multiple discrete individual-level variables are used to explain a group-level outcome. As in the univariate case, the

6. Modeling of Mesoscale Variability in Biofilm Shear Behavior.

Directory of Open Access Journals (Sweden)

Pallab Barai

Full Text Available

Formation of bacterial colonies as biofilm on the surface/interface of various objects has the potential to impact not only human health and disease but also energy and environmental considerations. Biofilms can be regarded as soft materials, and comprehension of their shear response to external forces is a key element of their fundamental understanding. A mesoscale model based on digitization of a biofilm microstructure is presented in this article, and its response under externally applied shear load is analyzed. Strain-stiffening behavior is readily observed under high strain loads due to the unfolding of chains within the soft polymeric substrate. Sustained shear loading of the biofilm network results in strain localization along the diagonal direction. Rupture of the soft polymeric matrix can potentially reduce the intercellular interaction between the bacterial cells. Evolution of stiffness within the biofilm network under shear reveals two regimes: (a) an initial increase in stiffness due to strain stiffening of the polymer matrix, and (b) an eventual reduction in stiffness because of tearing in the polymeric substrate.

7. A Latent Variable Path Analysis Model of Secondary Physics Enrollments in New York State.

Science.gov (United States)

Sobolewski, Stanley John

The Percentage of Enrollment in Physics (PEP) at the secondary level nationally has been approximately 20% for the past few decades. For a more scientifically literate citizenry as well as specialists to continue scientific research and development, it is desirable that more students enroll in physics. Some of the predictor variables for physics enrollment and physics achievement that have been identified previously include a community's socioeconomic status, the availability of physics, the sex of the student, the curriculum, as well as teacher and student data. This study isolated and identified predictor variables for PEP of secondary schools in New York. Data gathered by the State Education Department for the 1990-1991 school year were used. The sources of these data included surveys completed by teachers and administrators on student characteristics and school facilities. A data analysis similar to that done by Bryant (1974) was conducted to determine if the relationships between a set of predictor variables related to physics enrollment had changed in the past 20 years. Variables which were isolated included: community, facilities, teacher experience, number and type of science courses, school size and school science facilities. When these variables were isolated, latent variable path diagrams were proposed and verified by the Linear Structural Relations computer modeling program (LISREL). These diagrams differed from those developed by Bryant in that there were more manifest variables used, which included achievement scores in the form of Regents exam results. Two criterion variables were used: percentage of students enrolled in physics (PEP) and percentage of enrolled students passing the Regents physics exam (PPP). The first model treated school and community level variables as exogenous, while the second model treated only the community level variables as exogenous. The goodness-of-fit indices for the models were 0.77 for the first model and 0.83 for the second

8. Effect of climate variables on cocoa black pod incidence in Sabah using ARIMAX model

Science.gov (United States)

Ling Sheng Chang, Albert; Ramba, Haya; Mohd. Jaaffar, Ahmad Kamil; Kim Phin, Chong; Chong Mun, Ho

2016-06-01

Cocoa black pod disease is one of the major diseases affecting cocoa production in Malaysia and around the world. Studies have shown that climate variables influence black pod disease incidence, and it is important to quantify the variation in black pod disease due to the effect of climate variables. Time series analysis, especially the autoregressive moving average (ARIMA) model, has been widely used in economic studies and can be used to quantify the effect of climate variables on black pod incidence and to forecast the right time to control the incidence. However, the ARIMA model does not capture some turning points in cocoa black pod incidence. To improve forecasting performance, explanatory variables such as climate variables should be included in the ARIMA model, giving an ARIMAX model. This paper therefore studies the effect of climate variables on cocoa black pod disease incidence using an ARIMAX model. The findings showed that an ARIMAX model using MA(1) and relative humidity at a lag of 7 days, RHt-7, gave a better R-squared value than an ARIMA model using MA(1) alone, and could be used to forecast black pod incidence and assist farmers in timing fungicide spraying and cultural practices to control the disease.
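
As a hedged sketch of the idea behind the ARIMAX term (not the authors' fitted model; the data and coefficients below are invented), the lag-7 humidity regressor can be constructed and fitted by ordinary least squares in a few lines. A full ARIMAX fit would additionally model the MA(1) error structure:

```python
import numpy as np

# Hypothetical illustration: regress black pod incidence on relative
# humidity lagged by 7 days (RH_{t-7}), the exogenous term the ARIMAX
# model adds on top of an MA(1) error structure.
rng = np.random.default_rng(0)
n = 200
rh = 70 + 10 * np.sin(np.arange(n) / 20) + rng.normal(0, 2, n)  # relative humidity (%)
# Synthetic incidence: responds to humidity one week earlier, plus noise.
incidence = 0.5 * np.roll(rh, 7) + rng.normal(0, 1, n)

lag = 7
y = incidence[lag:]                        # incidence from day 7 onward
x = rh[:-lag]                              # RH_{t-7}: humidity 7 days before
X = np.column_stack([np.ones_like(x), x])  # intercept + lagged regressor

beta, *_ = np.linalg.lstsq(X, y, rcond=None)
resid = y - X @ beta
r2 = 1 - resid.var() / y.var()
print(f"slope on RH(t-7): {beta[1]:.2f}, R^2: {r2:.2f}")
```

The lagged column is simply the humidity series shifted forward by seven days; the high R-squared here is a property of the synthetic data, not of the study's dataset.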

9. Predictive and Descriptive CoMFA Models: The Effect of Variable Selection.

Science.gov (United States)

Sepehri, Bakhtyar; Omidikia, Nematollah; Kompany-Zareh, Mohsen; Ghavami, Raouf

2018-01-01

Aims & Scope: In this research, 8 variable selection approaches were used to investigate the effect of variable selection on the predictive power and stability of CoMFA models. Three data sets including 36 EPAC antagonists, 79 CD38 inhibitors and 57 ATAD2 bromodomain inhibitors were modelled by CoMFA. First of all, for all three data sets, CoMFA models with all CoMFA descriptors were created then by applying each variable selection method a new CoMFA model was developed so for each data set, 9 CoMFA models were built. Obtained results show noisy and uninformative variables affect CoMFA results. Based on created models, applying 5 variable selection approaches including FFD, SRD-FFD, IVE-PLS, SRD-UVEPLS and SPA-jackknife increases the predictive power and stability of CoMFA models significantly. Among them, SPA-jackknife removes most of the variables while FFD retains most of them. FFD and IVE-PLS are time consuming process while SRD-FFD and SRD-UVE-PLS run need to few seconds. Also applying FFD, SRD-FFD, IVE-PLS, SRD-UVE-PLS protect CoMFA countor maps information for both fields. Copyright© Bentham Science Publishers; For any queries, please email at epub@benthamscience.org.

10. Physiologically based pharmacokinetic (PBPK) modeling of interstrain variability in trichloroethylene metabolism in the mouse.

Science.gov (United States)

Chiu, Weihsueh A; Campbell, Jerry L; Clewell, Harvey J; Zhou, Yi-Hui; Wright, Fred A; Guyton, Kathryn Z; Rusyn, Ivan

2014-05-01

Quantitative estimation of toxicokinetic variability in the human population is a persistent challenge in risk assessment of environmental chemicals. Traditionally, interindividual differences in the population are accounted for by default assumptions or, in rare cases, are based on human toxicokinetic data. We evaluated the utility of genetically diverse mouse strains for estimating toxicokinetic population variability for risk assessment, using trichloroethylene (TCE) metabolism as a case study. We used data on oxidative and glutathione conjugation metabolism of TCE in 16 inbred and 1 hybrid mouse strains to calibrate and extend existing physiologically based pharmacokinetic (PBPK) models. We added one-compartment models for glutathione metabolites and a two-compartment model for dichloroacetic acid (DCA). We used a Bayesian population analysis of interstrain variability to quantify variability in TCE metabolism. Concentration-time profiles for TCE metabolism to oxidative and glutathione conjugation metabolites varied across strains. Median predictions for the metabolic flux through oxidation were less variable (5-fold range) than that through glutathione conjugation (10-fold range). For oxidative metabolites, median predictions of trichloroacetic acid production were less variable (2-fold range) than DCA production (5-fold range), although the uncertainty bounds for DCA exceeded the predicted variability. Population PBPK modeling of genetically diverse mouse strains can provide useful quantitative estimates of toxicokinetic population variability. When extrapolated to lower doses more relevant to environmental exposures, mouse population-derived variability estimates for TCE metabolism closely matched population variability estimates previously derived from human toxicokinetic studies with TCE, highlighting the utility of mouse interstrain metabolism studies for addressing toxicokinetic variability.
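
A minimal, hypothetical analogue of the variability summary (not the authors' PBPK model): give each mouse strain its own clearance drawn from a lognormal population distribution, and quantify variability as the fold-range of the resulting exposures. All parameter values are assumptions for illustration:

```python
import numpy as np

# One-compartment analogue of interstrain toxicokinetic variability.
# Each strain gets its own clearance CL from a lognormal distribution;
# the fold-range of the AUCs (dose / CL) summarizes variability.
rng = np.random.default_rng(1)
n_strains = 17            # 16 inbred + 1 hybrid strain, as in the study
dose = 100.0              # mg/kg, hypothetical
cl_pop_median = 2.0       # L/h/kg, hypothetical population median clearance
gsd = 1.8                 # geometric SD across strains (assumed)

cl = cl_pop_median * np.exp(rng.normal(0, np.log(gsd), n_strains))
auc = dose / cl           # exposure metric for each strain
fold_range = auc.max() / auc.min()
print(f"AUC fold-range across strains: {fold_range:.1f}")
```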

11. Quantifying variability in earthquake rupture models using multidimensional scaling: application to the 2011 Tohoku earthquake

KAUST Repository

Razafindrakoto, Hoby

2015-04-22

Finite-fault earthquake source inversion is an ill-posed inverse problem leading to non-unique solutions. In addition, various fault parametrizations and input data may have been used by different researchers for the same earthquake. Such variability leads to large intra-event variability in the inferred rupture models. One way to understand this problem is to develop robust metrics to quantify model variability. We propose a Multi Dimensional Scaling (MDS) approach to compare rupture models quantitatively. We consider normalized squared and grey-scale metrics that reflect the variability in the location, intensity and geometry of the source parameters. We test the approach on two-dimensional random fields generated using a von Kármán autocorrelation function and varying its spectral parameters. The spread of points in the MDS solution indicates different levels of model variability. We observe that the normalized squared metric is insensitive to variability of spectral parameters, whereas the grey-scale metric is sensitive to small-scale changes in geometry. From this benchmark, we formulate a similarity scale to rank the rupture models. As case studies, we examine inverted models from the Source Inversion Validation (SIV) exercise and published models of the 2011 Mw 9.0 Tohoku earthquake, allowing us to test our approach for a case with a known reference model and one with an unknown true solution. The normalized squared and grey-scale metrics are respectively sensitive to the overall intensity and the extension of the three classes of slip (very large, large, and low). Additionally, we observe that a three-dimensional MDS configuration is preferable for models with large variability. We also find that the models for the Tohoku earthquake derived from tsunami data and their corresponding predictions cluster with a systematic deviation from other models. We demonstrate the stability of the MDS point-cloud using a number of realizations and jackknife tests, for
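
The MDS step itself is standard. A compact sketch of classical (Torgerson) MDS applied to toy stand-ins for rupture models, with invented data (the study's normalized squared and grey-scale metrics would replace the plain Euclidean distance used here):

```python
import numpy as np

def classical_mds(D, k=2):
    """Classical (Torgerson) MDS: embed points in k dimensions from a
    symmetric matrix of pairwise distances D."""
    n = D.shape[0]
    J = np.eye(n) - np.ones((n, n)) / n   # centering matrix
    B = -0.5 * J @ (D ** 2) @ J           # double-centered Gram matrix
    w, v = np.linalg.eigh(B)              # eigenvalues in ascending order
    idx = np.argsort(w)[::-1][:k]         # keep the top-k components
    return v[:, idx] * np.sqrt(np.maximum(w[idx], 0))

# Toy "models": flattened 2-D slip fields; distance = field difference norm.
rng = np.random.default_rng(2)
models = [rng.random((8, 8)) for _ in range(5)]
n = len(models)
D = np.zeros((n, n))
for i in range(n):
    for j in range(n):
        D[i, j] = np.linalg.norm(models[i] - models[j])

X = classical_mds(D, k=2)
print(X.shape)  # each model becomes a point in the 2-D MDS plane
```

The spread of the resulting point cloud is what the study reads as model variability; clustered points indicate similar rupture models.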

12. A novel methodology improves reservoir characterization models using geologic fuzzy variables

Energy Technology Data Exchange (ETDEWEB)

Soto B, Rodolfo [DIGITOIL, Maracaibo (Venezuela); Soto O, David A. [Texas A and M University, College Station, TX (United States)

2004-07-01

One of the research projects carried out in the Cusiana field to explain its rapid decline during recent years was to obtain better permeability models. The reservoir of this field has a complex layered system that is not easy to model using conventional methods. The new technique included the development of porosity and permeability maps from cored wells, following the same trend as the sand depositions for each facies or layer according to the sedimentary facies and depositional system models. Then, we used fuzzy logic to reproduce those maps in three dimensions as geologic fuzzy variables. After multivariate statistical and factor analyses, we found independence and a good correlation coefficient between the geologic fuzzy variables and core permeability and porosity. This means the geologic fuzzy variable can explain the fabric, the grain size and the pore geometry of the reservoir rock through the field. Finally, we developed a neural network permeability model using porosity, gamma ray and the geologic fuzzy variable as input variables. This model has a cross-correlation coefficient of 0.873 and an average absolute error of 33%, compared with the previous model's correlation coefficient of 0.511 and absolute error greater than 250%. We tested different methodologies, and this new one proved to be a promising way to obtain better permeability models. The models have had a high impact in explaining well performance and workovers, and in reservoir simulation models. (author)

13. Confidence Intervals for a Semiparametric Approach to Modeling Nonlinear Relations among Latent Variables

Science.gov (United States)

Pek, Jolynn; Losardo, Diane; Bauer, Daniel J.

2011-01-01

Compared to parametric models, nonparametric and semiparametric approaches to modeling nonlinearity between latent variables have the advantage of recovering global relationships of unknown functional form. Bauer (2005) proposed an indirect application of finite mixtures of structural equation models where latent components are estimated in the…

14. Generalized Density-Corrected Model for Gas Diffusivity in Variably Saturated Soils

DEFF Research Database (Denmark)

Chamindu, Deepagoda; Møldrup, Per; Schjønning, Per

2011-01-01

models. The GDC model was further extended to describe two-region (bimodal) soils and could describe and predict Dp/Do well for both different soil aggregate size fractions and variably compacted volcanic ash soils. A possible use of the new GDC model is engineering applications such as the design...... of highly compacted landfill site caps....

15. BehavePlus fire modeling system, version 5.0: Variables

Science.gov (United States)

Patricia L. Andrews

2009-01-01

This publication has been revised to reflect updates to version 4.0 of the BehavePlus software. It was originally published as the BehavePlus fire modeling system, version 4.0: Variables in July 2008. The BehavePlus fire modeling system is a computer program based on mathematical models that describe wildland fire behavior and effects and the...

16. Modelling Inter-relationships among water, governance, human development variables in developing countries with Bayesian networks.

Science.gov (United States)

Dondeynaz, C.; Lopez-Puga, J.; Carmona-Moreno, C.

2012-04-01

Improving Water and Sanitation Services (WSS), being a complex and interdisciplinary issue, requires collaboration and coordination of different sectors (environment, health, economic activities, governance, and international cooperation). This inter-dependency has been recognised with the adoption of the "Integrated Water Resources Management" principles, which push for the integration of the various dimensions involved in WSS delivery to ensure efficient and sustainable management. Understanding these interrelations appears crucial for decision makers in the water sector, in particular in developing countries where WSS still represent an important lever for livelihood improvement. In this framework, the Joint Research Centre of the European Commission has developed a coherent database (WatSan4Dev database) containing 29 indicators from environmental, socio-economic, governance and financial aid flows data, focusing on developing countries (Celine et al, 2011, under publication). The aim of this work is to model the WatSan4Dev dataset using probabilistic models to identify the key variables influencing, or influenced by, water supply and sanitation access levels. Bayesian network models are suitable for mapping the conditional dependencies between variables and also allow ordering variables by their level of influence on the dependent variable. Separate models have been built for water supply and for sanitation because of their different behaviour. The models are validated if they comply with statistical criteria and also agree with scientific knowledge and the literature. A two-step approach has been adopted to build the structure of the model: a Bayesian network is first built for each thematic cluster of variables (e.g. governance, agricultural pressure, or human development), keeping a detailed level for later interpretation. A global model is then built based on the significant indicators of each cluster modelled previously. The structure of the

17. gamboostLSS: An R Package for Model Building and Variable Selection in the GAMLSS Framework

OpenAIRE

Hofner, Benjamin; Mayr, Andreas; Schmid, Matthias

2014-01-01

Generalized additive models for location, scale and shape are a flexible class of regression models that allow multiple parameters of a distribution function, such as the mean and the standard deviation, to be modeled simultaneously. With the R package gamboostLSS, we provide a boosting method to fit these models. Variable selection and model choice are naturally available within this regularized regression framework. To introduce and illustrate the R package gamboostLSS and its infrastructure, we...

18. Coupled variable selection for regression modeling of complex treatment patterns in a clinical cancer registry.

Science.gov (United States)

Schmidtmann, I; Elsäßer, A; Weinmann, A; Binder, H

2014-12-30

For determining a manageable set of covariates potentially influential with respect to a time-to-event endpoint, Cox proportional hazards models can be combined with variable selection techniques, such as stepwise forward selection or backward elimination based on p-values, or regularized regression techniques such as component-wise boosting. Cox regression models have also been adapted for dealing with more complex event patterns, for example, for competing risks settings with separate, cause-specific hazard models for each event type, or for determining the prognostic effect pattern of a variable over different landmark times, with one conditional survival model for each landmark. Motivated by a clinical cancer registry application, where complex event patterns have to be dealt with and variable selection is needed at the same time, we propose a general approach for linking variable selection between several Cox models. Specifically, we combine score statistics for each covariate across models by Fisher's method as a basis for variable selection. This principle is implemented for a stepwise forward selection approach as well as for a regularized regression technique. In an application to data from hepatocellular carcinoma patients, the coupled stepwise approach is seen to facilitate joint interpretation of the different cause-specific Cox models. In conditional survival models at landmark times, which address updates of prediction as time progresses and both treatment and other potential explanatory variables may change, the coupled regularized regression approach identifies potentially important, stably selected covariates together with their effect time pattern, despite having only a small number of events. These results highlight the promise of the proposed approach for coupling variable selection between Cox models, which is particularly relevant for modeling for clinical cancer registries with their complex event patterns. Copyright © 2014 John Wiley & Sons
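
Fisher's method, the combination rule named above, is simple to state: the combined statistic is minus twice the sum of the log p-values, which is chi-square distributed under the null. A small sketch with invented p-values:

```python
import math

def fisher_combine(p_values):
    """Combine independent p-values with Fisher's method.
    Returns the statistic X = -2 * sum(ln p); under the null hypothesis
    X ~ chi-square with 2k degrees of freedom (k = number of p-values)."""
    stat = -2.0 * sum(math.log(p) for p in p_values)
    df = 2 * len(p_values)
    return stat, df

# Hypothetical covariate: score-test p-values from three cause-specific
# Cox models (numbers invented for illustration).
stat, df = fisher_combine([0.04, 0.20, 0.11])
print(f"X = {stat:.2f} on {df} df")  # → X = 14.07 on 6 df
```

In the coupled-selection setting, one such combined statistic per covariate ranks candidates for stepwise inclusion across all linked Cox models at once.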

19. A new general dynamic model predicting radionuclide concentrations and fluxes in coastal areas from readily accessible driving variables

International Nuclear Information System (INIS)

Haakanson, Lars

2004-01-01

This paper presents a general, process-based dynamic model for coastal areas for radionuclides (metals, organics and nutrients) from both single pulse fallout and continuous deposition. The model gives radionuclide concentrations in water (total, dissolved and particulate phases and concentrations in sediments and fish) for entire defined coastal areas. The model gives monthly variations. It accounts for inflow from tributaries, direct fallout to the coastal area, internal fluxes (sedimentation, resuspension, diffusion, burial, mixing and biouptake and retention in fish) and fluxes to and from the sea outside the defined coastal area and/or adjacent coastal areas. The fluxes of water and substances between the sea and the coastal area are differentiated into three categories of coast types: (i) areas where the water exchange is regulated by tidal effects; (ii) open coastal areas where the water exchange is regulated by coastal currents; and (iii) semi-enclosed archipelago coasts. The coastal model gives the fluxes to and from the following four abiotic compartments: surface water, deep water, ET areas (i.e., areas where fine sediment erosion and transport processes dominate the bottom dynamic conditions and resuspension appears) and A-areas (i.e., areas of continuous fine sediment accumulation). Criteria to define the boundaries for the given coastal area towards the sea, and to define whether a coastal area is open or closed are given in operational terms. The model is simple to apply since all driving variables may be readily accessed from maps and standard monitoring programs. The driving variables are: latitude, catchment area, mean annual precipitation, fallout and month of fallout and parameters expressing coastal size and form as determined from, e.g., digitized bathymetric maps using a GIS program. Selected results: the predictions of radionuclide concentrations in water and fish largely depend on two factors, the concentration in the sea outside the given

20. Incorporating Human-like Walking Variability in an HZD-Based Bipedal Model.

Science.gov (United States)

Martin, Anne E; Gregg, Robert D

2016-08-01

Predictive simulations of human walking could be used to investigate a wide range of questions. Promising moderately complex models have been developed using the robotics control technique hybrid zero dynamics (HZD). Existing simulations of human walking only consider the mean motion, so they cannot be used to investigate fall risk, which is correlated with variability. This work determines how to incorporate human-like variability into an HZD-based healthy human model to generate a more realistic gait. The key challenge is determining how to combine the existing mathematical description of variability with the dynamic model so that the biped is still able to walk without falling. To do so, the commanded motion is augmented with a sinusoidal variability function and a polynomial correction function. The variability function captures the variation in joint angles while the correction function prevents the variability function from growing uncontrollably. The necessity of the correction function and the improvements with a reduction of stance ankle variability are demonstrated via simulations. The variability in temporal measures is shown to be similar to experimental values.
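
A schematic sketch of the augmentation described above, with assumed functional forms and parameter values (not the authors' exact functions): the commanded joint angle gets a sinusoidal variability term, and a correction term pins the perturbation to zero at both ends of the step so the gait remains periodic.

```python
import numpy as np

# Nominal trajectory and all parameters below are invented for illustration.
s = np.linspace(0.0, 1.0, 101)       # phase through one step (0 to 1)
theta_nom = 0.3 * np.sin(np.pi * s)  # nominal joint angle (rad), assumed

amp, freq, phase = 0.02, 3.0, 0.7    # variability parameters (assumed)
variability = amp * np.sin(2 * np.pi * freq * s + phase)
# Correction chosen so the total perturbation vanishes at s=0 and s=1;
# a linear blend between the endpoint values is enough for that.
v0, v1 = variability[0], variability[-1]
correction = -(v0 * (1 - s) + v1 * s)

theta = theta_nom + variability + correction
print(theta[0], theta[-1])  # perturbation removed at the step boundaries
```

The paper's correction function is polynomial and serves the same purpose: without it, the endpoint mismatch of the variability term would accumulate step after step and destabilize the biped.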

1. Evaluation of Stochastic Rainfall Models in Capturing Climate Variability for Future Drought and Flood Risk Assessment

Science.gov (United States)

Chowdhury, A. F. M. K.; Lockart, N.; Willgoose, G. R.; Kuczera, G. A.; Kiem, A.; Nadeeka, P. M.

2016-12-01

One of the key objectives of stochastic rainfall modelling is to capture the full variability of the climate system for future drought and flood risk assessment. However, it is not clear how well these models can capture future climate variability when they are calibrated to Global/Regional Climate Model (GCM/RCM) data, as these datasets are usually available for only short future periods (e.g. 20 years). This study has assessed the ability of two stochastic daily rainfall models to capture climate variability by calibrating them to a dynamically downscaled RCM dataset in an east Australian catchment for the 1990-2010, 2020-2040, and 2060-2080 epochs. The two stochastic models are: (1) a hierarchical Markov Chain (MC) model, which we developed in a previous study, and (2) a semi-parametric MC model developed by Mehrotra and Sharma (2007). Our hierarchical model uses stochastic parameters of the MC and a Gamma distribution, while the semi-parametric model uses a modified MC process with memory of past periods and kernel density estimation. This study has generated multiple realizations of rainfall series by using parameters of each model calibrated to the RCM dataset for each epoch. The generated rainfall series are used to generate synthetic streamflow using the SimHyd hydrology model. Assessing the synthetic rainfall and streamflow series, this study has found that both stochastic models can incorporate a range of variability in rainfall as well as streamflow generation for both current and future periods. However, the hierarchical model tends to overestimate the multiyear variability of wet spell lengths (and therefore is less likely to simulate long periods of drought and flood), while the semi-parametric model tends to overestimate the mean annual rainfall depths and streamflow volumes (hence simulated droughts are likely to be less severe). The implications of these limitations of both stochastic models for future drought and flood risk assessment will be discussed.
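
A minimal sketch of the occurrence-amount structure shared by such daily rainfall generators: a two-state Markov chain decides wet versus dry days, and a Gamma distribution draws wet-day depths. All parameter values are invented, and the actual models above are considerably richer (hierarchical parameters, memory of past periods):

```python
import numpy as np

rng = np.random.default_rng(3)
p_wd = 0.30               # P(wet today | dry yesterday), assumed
p_ww = 0.65               # P(wet today | wet yesterday), assumed
shape, scale = 0.8, 8.0   # Gamma parameters for wet-day rainfall (mm), assumed

n_days = 365 * 20
rain = np.zeros(n_days)
wet = False
for t in range(n_days):
    p = p_ww if wet else p_wd          # transition probability given yesterday
    wet = rng.random() < p
    if wet:
        rain[t] = rng.gamma(shape, scale)

wet_frac = (rain > 0).mean()
print(f"wet-day fraction: {wet_frac:.2f}, mean annual rain: {rain.sum()/20:.0f} mm")
```

The p_ww > p_wd asymmetry is what produces persistent wet and dry spells; calibrating such parameters epoch by epoch against RCM output is the step the study evaluates.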

2. Model Predictive Control of a Nonlinear System with Known Scheduling Variable

DEFF Research Database (Denmark)

Mirzaei, Mahmood; Poulsen, Niels Kjølstad; Niemann, Hans Henrik

2012-01-01

Model predictive control (MPC) of a class of nonlinear systems is considered in this paper. We use a Linear Parameter Varying (LPV) model of the nonlinear system. By taking advantage of having future values of the scheduling variable, we simplify state prediction. Consequently...... the control problem of the nonlinear system is simplified into a quadratic program. A wind turbine is chosen as the case study, and we choose wind speed as the scheduling variable. Wind speed is measurable ahead of the turbine; therefore the scheduling variable is known for the entire prediction horizon....

3. Mathematical modeling and design parameters of crushing machines with variable-pitch helix of the screw

Directory of Open Access Journals (Sweden)

Pelenko V. V.

2017-11-01

Full Text Available

From the point of view of the effectiveness of the top cutting unit, the helix angle in the end portion of the screw is the most important and characteristic parameter, as it determines the pressure of the meat material in the zone of interaction of the knife and grate. The importance of mathematically modeling this geometry stems from the need to minimize the reverse flow of the food material when it is injected into the cutting zone, as this "locking" effect significantly reduces the performance of the transfer process, increases the energy consumption of the equipment and degrades the quality of the raw material output. The problem of determining the length of a variable-pitch helix for the screw of a food chopper has been formulated and solved by methods of differential geometry. The key to the task is a correct description of the law governing the change of the helix inclination angle along its length, providing the required dependence of this angle's tangent on the angle of the radius-vector of the circle. It has been taken into account that the reduction in the pitch of the screw in the direction of product delivery should occur at a decreasing rate. The parametric equation of the helix has been written in the form of three functional dependencies of the corresponding cylindrical coordinates. Based on an analysis of a wide range and significant number of models of tops from different manufacturers, the boundaries of possible changes in the inclination angles of the helical line of the first and last turns of the screw have been identified. The screw length is determined mathematically in the form of an analytical relationship, both as a function of the variable angle of its rise and as a function of the rotation angle of the radius-vector of the circle generatrix, which makes it possible to expand the design possibilities of this node. Along
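
The arc-length computation for a variable-pitch helix can be sketched numerically. The linear law for the helix angle and all dimensions below are assumptions for illustration, not the authors' derived relationship (their pitch decreases at a decreasing rate):

```python
import numpy as np

# Helix of radius R whose helix angle falls from alpha0 at the inlet to
# alpha1 at the outlet; the flight length is the arc length of the
# parametric curve (R cos phi, R sin phi, z(phi)).
R = 0.03                                          # screw radius, m (assumed)
alpha0, alpha1 = np.deg2rad(18), np.deg2rad(9)    # helix angles (assumed)
turns = 4
phi = np.linspace(0, 2 * np.pi * turns, 4001)

# tan(alpha) = dz / (R dphi); here the angle is interpolated linearly in phi.
alpha = alpha0 + (alpha1 - alpha0) * phi / phi[-1]
dz_dphi = R * np.tan(alpha)
# Arc-length element: ds = sqrt(R^2 + (dz/dphi)^2) dphi, trapezoid rule.
ds_dphi = np.sqrt(R**2 + dz_dphi**2)
length = np.sum((ds_dphi[1:] + ds_dphi[:-1]) / 2 * np.diff(phi))
print(f"helix length over {turns} turns: {length * 1000:.1f} mm")
```

The length must exceed that of a flat circle traversed the same number of turns (helix angle zero) and stay below the bound set by the steepest inlet angle, which the assertions below check.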

4. The selection of a mode of urban transportation: Integrating psychological variables to discrete choice models

International Nuclear Information System (INIS)

Cordoba Maquilon, Jorge E; Gonzalez Calderon, Carlos A; Posada Henao, John J

2011-01-01

A study using revealed preference surveys and psychological tests was conducted. Key psychological variables of behavior involved in the choice of transportation mode in a population sample of the Metropolitan Area of the Valle de Aburra were detected. The experiment used the random utility theory for discrete choice models and reasoned action in order to assess beliefs. This was used as a tool for analysis of the psychological variables using the sixteen personality factor questionnaire (16PF test). In addition to the revealed preference surveys, two other surveys were carried out: one with socio-economic characteristics and the other with latent indicators. This methodology allows for an integration of discrete choice models and latent variables. The integration makes the model operational and quantifies the unobservable psychological variables. The most relevant result obtained was that anxiety affects the choice of urban transportation mode and shows that physiological alterations, as well as problems in perception and beliefs, can affect the decision-making process.

5. Bayesian Variable Selection in Multilevel Item Response Theory Models with Application in Genomics.

Science.gov (United States)

Fragoso, Tiago M; de Andrade, Mariza; Pereira, Alexandre C; Rosa, Guilherme J M; Soler, Júlia M P

2016-04-01

The goal of this paper is to present an implementation of stochastic search variable selection (SSVS) in a multilevel model from item response theory (IRT). As experimental settings get more complex and models are required to integrate multiple (and sometimes massive) sources of information, a model that can jointly summarize and select the most relevant characteristics can provide better interpretation and a deeper insight into the problem. A multilevel IRT model recently proposed in the literature for modeling multifactorial diseases is extended to perform variable selection in the presence of thousands of covariates using SSVS. We derive the conditional distributions required for such a task, as well as an acceptance-rejection step that allows for SSVS in high-dimensional settings using a Markov chain Monte Carlo algorithm. We validate the variable selection procedure through simulation studies, and illustrate its application on a study with genetic markers associated with the metabolic syndrome. © 2016 WILEY PERIODICALS, INC.
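
The core SSVS step, computing a coefficient's posterior inclusion probability under a spike-and-slab prior, can be sketched as follows. This is a schematic with assumed zero-mean normal components and invented values, not the paper's full multilevel IRT sampler:

```python
import math

def inclusion_probability(beta, tau_spike=0.01, tau_slab=1.0, prior_incl=0.5):
    """Posterior probability that a coefficient belongs to the 'slab'
    (relevant) rather than the 'spike' (negligible), given its current
    sampled value beta; tau_* are the component standard deviations."""
    def normal_pdf(x, sd):
        return math.exp(-0.5 * (x / sd) ** 2) / (sd * math.sqrt(2 * math.pi))
    slab = prior_incl * normal_pdf(beta, tau_slab)
    spike = (1 - prior_incl) * normal_pdf(beta, tau_spike)
    return slab / (slab + spike)

# A coefficient far from zero is almost surely from the slab; one near
# zero is almost surely from the spike (values invented for illustration).
print(inclusion_probability(0.8))    # close to 1
print(inclusion_probability(0.001))  # close to 0
```

In a full Gibbs sampler this probability drives a Bernoulli draw of the inclusion indicator for each covariate at each iteration; covariates with high average inclusion are selected.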

6. A Model of Twice-Exceptionality: Explaining and Defining the Apparent Paradoxical Combination of Disability and Giftedness in Childhood

Science.gov (United States)

Ronksley-Pavia, Michelle

2015-01-01

The literature on twice-exceptionality suggests one of the main problems facing twice-exceptional children is that there is no consensus on the definition of the terms "disability" or "giftedness" and, consequently, the term "twice-exceptional". Endeavoring to define these specific terms loops back on itself to…

7. Importance analysis for models with correlated variables and its sparse grid solution

International Nuclear Information System (INIS)

Li, Luyi; Lu, Zhenzhou

2013-01-01

For structural models involving correlated input variables, a novel interpretation of variance-based importance measures is proposed based on the contribution of the correlated input variables to the variance of the model output. After comparing the novel interpretation with existing ones, two solutions for the variance-based importance measures of the correlated input variables are built on sparse grid numerical integration (SGI): the double-loop nested sparse grid integration (DSGI) method and the single-loop sparse grid integration (SSGI) method. The DSGI method solves the importance measure by procedurally decreasing the dimensionality of the input variables, while the SSGI method performs importance analysis by extending the dimensionality of the inputs. Both methods make full use of the advantages of SGI and are well tailored to different situations. By analyzing the results of several numerical and engineering examples, it is found that the proposed interpretation of the importance measures of correlated input variables is reasonable, and the proposed methods for solving importance measures are efficient and accurate. -- Highlights: •The contribution of correlated variables to the variance of the output is analyzed. •A novel interpretation for variance-based indices of correlated variables is proposed. •Two solutions for variance-based importance measures of correlated variables are built

8. Variability of concrete properties: experimental characterisation and probabilistic modelling for calcium leaching

International Nuclear Information System (INIS)

De Larrard, Th.

2010-09-01

Evaluating the durability of structures requires taking into account the variability of material properties. The thesis has two main aspects: on the one hand, an experimental campaign aimed at quantifying the variability of many indicators of concrete behaviour; on the other hand, a simple numerical model for calcium leaching developed in order to implement probabilistic methods for estimating the lifetime of structures such as those related to radioactive waste disposal. The experimental campaign consisted in monitoring two actual building sites, quantifying the variability of these indicators, studying their correlations, and characterising the random-field variability of the considered variables (especially the correlation length). To draw conclusions from the accelerated leaching tests with ammonium nitrate while overcoming the effects of temperature, an inverse analysis tool based on the theory of artificial neural networks was developed. Simple numerical tools are presented to investigate the propagation of variability in durability issues, quantify the influence of this variability on the lifespan of structures, and relate the variability of the input parameters of the numerical model to measurable physical quantities of the material. (author)

9. A simple model for the spatially-variable coastal response to hurricanes

Science.gov (United States)

Stockdon, H.F.; Sallenger, A.H.; Holman, R.A.; Howd, P.A.

2007-01-01

The vulnerability of a beach to extreme coastal change during a hurricane can be estimated by comparing the relative elevations of storm-induced water levels to those of the dune or berm. A simple model that defines the coastal response based on these elevations was used to hindcast the potential impact regime along a 50-km stretch of the North Carolina coast to the landfalls of Hurricane Bonnie on August 27, 1998, and Hurricane Floyd on September 16, 1999. Maximum total water levels at the shoreline were calculated as the sum of modeled storm surge, astronomical tide, and wave runup, estimated from offshore wave conditions and the local beach slope using an empirical parameterization. Storm surge and wave runup each accounted for ∼ 48% of the signal (the remaining 4% is attributed to astronomical tides), indicating that wave-driven processes are a significant contributor to hurricane-induced water levels. Expected water levels and lidar-derived measures of pre-storm dune and berm elevation were used to predict the spatially-varying storm-impact regime: swash, collision, or overwash. Predictions were compared to the observed response quantified using a lidar topography survey collected following hurricane landfall. The storm-averaged mean accuracy of the model in predicting the observed impact regime was 55.4%, a significant improvement over the 33.3% accuracy associated with random chance. Model sensitivity varied between regimes and was highest within the overwash regime, where the accuracies were 84.2% and 89.7% for Hurricanes Bonnie and Floyd, respectively. The model not only allows for prediction of the general coastal response to storms, but also provides a framework for examining the longshore-variable magnitudes of observed coastal change. For Hurricane Bonnie, shoreline and beach volume changes within locations that experienced overwash or dune erosion were two times greater than at locations where wave runup was confined to the foreshore (swash regime).
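The regime logic described above reduces to simple elevation comparisons. A minimal sketch follows; the threshold ordering matches the swash/collision/overwash regimes named in the abstract, while the surge-only inundation check and all numbers are illustrative assumptions, not values from the paper:

```python
def impact_regime(surge, tide, runup, dune_toe, dune_crest):
    """Classify the storm-impact regime by comparing water levels with
    dune elevations (all in metres above a common datum).  The ordering
    follows the swash / collision / overwash regimes in the abstract;
    the surge-only 'inundation' check is added for completeness."""
    still_water = surge + tide          # water level without wave runup
    twl = still_water + runup           # maximum total water level
    if still_water > dune_crest:
        return "inundation"             # dune base permanently submerged
    if twl < dune_toe:
        return "swash"                  # runup confined to the foreshore
    if twl < dune_crest:
        return "collision"              # waves attack the dune face
    return "overwash"

print(impact_regime(1.2, 0.4, 1.1, 3.0, 4.5))
```

Applied point-by-point along a lidar-derived profile, a classifier like this yields the longshore-variable regime map that the study compares against post-storm surveys.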

10. Seasonal variability of salinity and circulation in a silled estuarine fjord: A numerical model study

Science.gov (United States)

Kawase, Mitsuhiro; Bang, Bohyun

2013-12-01

A three-dimensional hydrodynamic model is used to study seasonal variability of circulation and hydrography in Hood Canal, Washington, United States, an estuarine fjord that develops seasonally hypoxic conditions. The model is validated with data from year 2006, and is shown to be capable of quantitatively realistic simulation of hydrographic variability. Sensitivity experiments show the largest cause of seasonal variability to be that of salinity at the mouth of the fjord, which drives an annual deep water renewal in late summer-early autumn. Variability of fresh water input from the watershed also causes significant but secondary changes, especially in winter. Local wind stress has little effect over the seasonal timescale. Further experiments, in which one forcing parameter is abruptly altered while others are kept constant, show that outside salinity change induces an immediate response in the exchange circulation that, however, decays as a transient as the system equilibrates. In contrast, a change in the river input initiates gradual adjustment towards a new equilibrium value for the exchange transport. It is hypothesized that the spectral character of the system response to river variability will be redder than to salinity variability. This is demonstrated with a stochastically forced, semi-analytical model of fjord exchange circulation. While the exchange circulation in Hood Canal appears less sensitive to the river variability than to the outside hydrography at seasonal timescales, at decadal and longer timescales both could become significant factors in affecting the exchange circulation.

11. Variability in respiratory rhythm generation: In vitro and in silico models

Science.gov (United States)

Fietkiewicz, Christopher; Shafer, Geoffrey O.; Platt, Ethan A.; Wilson, Christopher G.

2016-03-01

The variability inherent in physiological rhythms is disruptive in extremis (too great or too little) but may also serve a functional and important role in homeostatic systems. Here we focus on the neural control of respiration which is critical for survival in many animals. The overall respiratory control system is comprised of multiple nuclei, each of which may have different contributions to rhythm variability. We focused on the pre-Bötzinger complex (preBötC) which is unique in that it can be studied in vitro as an isolated nucleus with autorhythmic behavior. The in vitro results show a bounded range of variability in which the upper and lower limits are functions of the respiratory rate. In addition, the correlation between variability and respiratory rate changes during development. We observed a weaker correlation in younger animals (0-3 days old) as compared to older animals (4-5 days old). Based on experimental observations, we developed a computational model that can be embedded in more comprehensive models of respiratory and cardiovascular autonomic control. Our simulation results successfully reproduce the variability we observed experimentally. The in silico model suggests that age-dependent variability may be due to a developmental increase in mean synaptic conductance between preBötC neurons. We also used simulations to explore the effects of stochastic spiking in sensory relay neurons. Our results suggest that stochastic spiking may actually stabilize modulation of both respiratory rate and its variability when the rate changes due to physiological demand.

12. Incorporating Latent Variables into Discrete Choice Models - A Simultaneous Estimation Approach Using SEM Software

Directory of Open Access Journals (Sweden)

Dirk Temme

2008-12-01

Integrated choice and latent variable (ICLV) models represent a promising new class of models which merge classic choice models with the structural equation approach (SEM) for latent variables. Despite their conceptual appeal, applications of ICLV models in marketing remain rare. We extend previous ICLV applications by first estimating a multinomial choice model and, second, by estimating hierarchical relations between latent variables. An empirical study on travel mode choice clearly demonstrates the value of ICLV models in enhancing the understanding of choice processes. In addition to the usually studied directly observable variables such as travel time, we show how abstract motivations such as power and hedonism, as well as attitudes such as a desire for flexibility, impact on travel mode choice. Furthermore, we show that it is possible to estimate such a complex ICLV model with the widely available structural equation modeling package Mplus. This finding is likely to encourage more widespread application of this appealing model class in the marketing field.

13. Beyond a climate-centric view of plant distribution: edaphic variables add value to distribution models.

Directory of Open Access Journals (Sweden)

Frieda Beauregard

Both climatic and edaphic conditions determine plant distribution; however, many species distribution models do not include edaphic variables, especially over large geographical extents. Using an exceptional database of vegetation plots (n = 4839) covering an extent of ∼55,000 km2, we tested whether the inclusion of fine-scale edaphic variables would improve model predictions of plant distribution compared to models using only climate predictors. We also tested how well these edaphic variables could predict distribution on their own, to evaluate the assumption that at large extents, distribution is governed largely by climate. We also hypothesized that the relative contribution of edaphic and climatic data would vary among species depending on their growth forms and biogeographical attributes within the study area. We modelled 128 native plant species from diverse taxa using four statistical model types and three sets of abiotic predictors: climate, edaphic, and edaphic-climate. Model predictive accuracy and variable importance were compared among these models and for species' characteristics describing growth form, range boundaries within the study area, and prevalence. For many species both the climate-only and edaphic-only models performed well; however, the edaphic-climate models generally performed best. The three sets of predictors differed in the spatial information provided about habitat suitability, with climate models able to distinguish range edges but edaphic models better able to distinguish within-range variation. Model predictive accuracy was generally lower for species without a range boundary within the study area and for common species, but these effects were buffered by including both edaphic and climatic predictors. The relative importance of edaphic and climatic variables varied with growth form, with trees being more related to climate whereas lower growth forms were more related to edaphic conditions. Our study…

14. Effects of environmental variables on invasive amphibian activity: Using model selection on quantiles for counts

Science.gov (United States)

Muller, Benjamin J.; Cade, Brian S.; Schwarzkoph, Lin

2018-01-01

Many different factors influence animal activity. Often, the value of an environmental variable may significantly influence the upper or lower tails of the activity distribution. For describing relationships with heterogeneous boundaries, quantile regressions predict a quantile of the conditional distribution of the dependent variable. A quantile count model extends linear quantile regression methods to discrete response variables, and is useful if activity is quantified by trapping, where there may be many tied (equal) values in the activity distribution over a small range of discrete values. Additionally, different environmental variables in combination may have synergistic or antagonistic effects on activity, so examining their effects together, in a modeling framework, is a useful approach. Thus, model selection on quantile counts can be used to determine the relative importance of different variables in determining activity, across the entire distribution of capture results. We conducted model selection on quantile count models to describe the factors affecting activity (numbers of captures) of cane toads (Rhinella marina) in response to several environmental variables (humidity, temperature, rainfall, wind speed, and moon luminosity) over eleven months of trapping. Environmental effects on activity are understudied in this pest animal. In the dry season, model selection on quantile count models suggested that rainfall positively affected activity, especially near the lower tails of the activity distribution. In the wet season, wind speed limited activity near the maximum of the distribution, while minimum activity increased with minimum temperature. This statistical methodology allowed us to explore, in depth, how environmental factors influenced activity across the entire distribution, and is applicable to any survey or trapping regime in which environmental variables affect activity.
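Quantile regression for counts rests on two ingredients the abstract alludes to: the asymmetric "pinball" (check) loss, whose minimizer is the τ-th quantile, and jittering, which breaks the many ties in trap counts. A minimal sketch with hypothetical capture counts (the study's actual models also include environmental covariates, omitted here):

```python
import random

def pinball(tau, y, pred):
    """Check (pinball) loss; its expected value is minimised when
    pred is the tau-th quantile of y."""
    d = y - pred
    return tau * d if d >= 0 else (tau - 1.0) * d

def jitter(counts, rng):
    """Add Uniform(0,1) noise so discrete counts (many tied values)
    become continuous, as in quantile count models."""
    return [c + rng.random() for c in counts]

rng = random.Random(0)
counts = [rng.randint(0, 5) for _ in range(500)]   # toy capture counts
z = jitter(counts, rng)

# the grid value minimising total pinball loss approximates the
# 0.9 quantile of the jittered counts
best = min((sum(pinball(0.9, yi, q) for yi in z), q)
           for q in [i / 10 for i in range(0, 70)])[1]
print(best)
```

In the full method, the same loss is minimised over regression coefficients rather than a single constant, giving quantile-specific effects of each environmental variable.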

15. Assessing geotechnical centrifuge modelling in addressing variably saturated flow in soil and fractured rock.

Science.gov (United States)

Jones, Brendon R; Brouwers, Luke B; Van Tonder, Warren D; Dippenaar, Matthys A

2017-05-01

The vadose zone typically comprises soil underlain by fractured rock. Often, surface water and groundwater parameters are readily available, but variably saturated flow through soil and rock is oversimplified or estimated as input for hydrological models. In this paper, a series of geotechnical centrifuge experiments are conducted to address knowledge gaps in: (i) variably saturated flow and dispersion in soil and (ii) variably saturated flow in discrete vertical and horizontal fractures. Findings from the research show that the hydraulic gradient, and not the hydraulic conductivity, is scaled for seepage flow in the geotechnical centrifuge. Furthermore, geotechnical centrifuge modelling has proven to be a viable experimental tool for the modelling of hydrodynamic dispersion as well as the replication of flow mechanisms for unsaturated fracture flow similar to those previously observed in the literature. Despite the inherent challenges of modelling variable saturation in the vadose zone, the geotechnical centrifuge offers a powerful experimental tool to physically model and observe variably saturated flow. This can give valuable insight into mechanisms associated with solid-fluid interaction problems under these conditions. Findings from future research can be used to validate current numerical modelling techniques and address the subsequent influence on aquifer recharge and vulnerability, contaminant transport, waste disposal, dam construction, slope stability and seepage into subsurface excavations.

16. Comparisons of model simulations of climate variability with data, Task 2. [Progress report

Energy Technology Data Exchange (ETDEWEB)

1990-12-31

Significant progress has been made in our investigations aimed at diagnosing low frequency variations of climate in General Circulation Models. We have analyzed three versions of the Oregon State University General Circulation Model (OSU GCM). These are: (1) the Slab Model, in which the ocean is treated as a static heat reservoir of fixed depth; (2) the coupled upper ocean-atmosphere model, in which the ocean dynamics are calculated in two layers of variable depth representing the mixed layer and the thermocline (referred to as OSU2 in the following discussion); and (3) the coupled full ocean-atmosphere model, in which the ocean is represented by six layers of variable depth (referred to as the OSU6 GCM in the discussion).

17. Comparisons of model simulations of climate variability with data, Task 2

Energy Technology Data Exchange (ETDEWEB)

1990-01-01

Significant progress has been made in our investigations aimed at diagnosing low frequency variations of climate in General Circulation Models. We have analyzed three versions of the Oregon State University General Circulation Model (OSU GCM). These are: (1) the Slab Model, in which the ocean is treated as a static heat reservoir of fixed depth; (2) the coupled upper ocean-atmosphere model, in which the ocean dynamics are calculated in two layers of variable depth representing the mixed layer and the thermocline (referred to as OSU2 in the following discussion); and (3) the coupled full ocean-atmosphere model, in which the ocean is represented by six layers of variable depth (referred to as the OSU6 GCM in the discussion).

18. Parameter estimation of variable-parameter nonlinear Muskingum model using excel solver

Science.gov (United States)

Kang, Ling; Zhou, Liwei

2018-02-01

The Muskingum model is an effective flood-routing technique in hydrology and water resources engineering. With the development of optimization technology, more and more variable-parameter Muskingum models have been presented in recent decades to improve the effectiveness of the Muskingum model. A variable-parameter nonlinear Muskingum model (NVPNLMM) is proposed in this paper. In two real and frequently used case studies, the NVPNLMM obtained better values of the evaluation criteria used to compare the accuracy of flood routing across models, and its optimal estimated outflows were closer to the observed outflows than those of other models.
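For reference, the constant-parameter linear Muskingum scheme that models like the NVPNLMM generalize can be written in a few lines. The inflow hydrograph and parameter values below are hypothetical, not the paper's case-study data:

```python
def muskingum_route(inflows, K, x, dt, initial_outflow):
    """Classic fixed-parameter linear Muskingum routing:
    O2 = C0*I2 + C1*I1 + C2*O1.  Variable-parameter models let K and x
    change along the flood wave; this is the constant-parameter baseline."""
    denom = K - K * x + 0.5 * dt
    c0 = (-K * x + 0.5 * dt) / denom
    c1 = (K * x + 0.5 * dt) / denom
    c2 = (K - K * x - 0.5 * dt) / denom
    assert abs(c0 + c1 + c2 - 1.0) < 1e-9   # coefficients must sum to 1
    outflows = [initial_outflow]
    for i_prev, i_now in zip(inflows, inflows[1:]):
        outflows.append(c0 * i_now + c1 * i_prev + c2 * outflows[-1])
    return outflows

# hypothetical inflow hydrograph (m^3/s) at 6 h steps
q_in = [22, 23, 35, 71, 103, 111, 109, 100, 86, 71, 59, 47,
        39, 32, 28, 24, 22, 21]
q_out = muskingum_route(q_in, K=11.0, x=0.13, dt=6.0, initial_outflow=22.0)
```

Routing attenuates and delays the inflow peak, which is the behaviour the evaluation criteria in such studies measure against observed outflows.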

19. Bayesian Analysis of Linear and Nonlinear Latent Variable Models with Fixed Covariate and Ordered Categorical Data

Directory of Open Access Journals (Sweden)

Thanoon Y. Thanoon

2016-03-01

In this paper, ordered categorical variables are used to compare linear and nonlinear interactions of fixed covariates and latent variables in Bayesian structural equation models. The Gibbs sampling method is applied for estimation and model comparison. A hidden continuous normal distribution (censored normal distribution) is used to handle the problem of ordered categorical data. Statistical inferences, which involve estimation of parameters and their standard deviations, and residual analyses for testing the selected model, are discussed. The proposed procedure is illustrated with simulated data generated in R. Analyses are carried out using the OpenBUGS program.

20. Studies and research concerning BNFP. Identification and simplified modeling of economically important radwaste variables

International Nuclear Information System (INIS)

Ebel, P.E.; Godfrey, W.L.; Henry, J.L.; Postles, R.L.

1983-09-01

An extensive computer model describing the mass balance and economic characteristics of radioactive waste disposal systems was exercised in a series of runs designed using linear statistical methods. The most economically important variables were identified, their behavior characterized, and a simplified computer model prepared which runs on desk-top minicomputers. This simplified model allows the investigation of the effects of the seven most significant variables in each of four waste areas: Liquid Waste Storage, Liquid Waste Solidification, General Process Trash Handling, and Hulls Handling. 8 references, 1 figure, 12 tables

1. Using a 1-D model to reproduce the diurnal variability of SST

DEFF Research Database (Denmark)

Karagali, Ioanna; Høyer, Jacob L.; Donlon, Craig J.

2017-01-01

preferred approach to bridge the gap between in situ and remotely sensed measurements and obtain diurnal warming estimates at large spatial scales is modeling of the upper ocean temperature. This study uses the one-dimensional General Ocean Turbulence Model (GOTM) to resolve diurnal signals identified from...... forcing fields and is able to resolve daily SST variability seen both from satellite and in situ measurements. As such, and due to its low computational cost, it is proposed as a candidate model for diurnal variability estimates....

2. MODELING OF RELATIONSHIP BETWEEN GROUNDWATER FLOW AND OTHER METEOROLOGICAL VARIABLES USING FUZZY LOGIC

Directory of Open Access Journals (Sweden)

Şaban YURTÇU

2006-02-01

In this study, the effect of rainfall, flow, and evaporation (independent variables) on the change of groundwater level (dependent variable) was modeled using fuzzy logic (FL). A total of 396 values recorded between 1977 and 1989 at six observation stations in the Afyon sub-basin of the Akarçay basin were used. Using the monthly average values of the stations, the change of groundwater level was modeled by FL. The results obtained from FL and the observations are compatible with each other, which shows that FL modeling can be used to estimate groundwater levels from the appropriate meteorological values.
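The building block of such an FL model is a membership function that converts a crisp meteorological value into degrees of membership in overlapping linguistic classes. A minimal sketch; the rainfall classes and breakpoints are invented for illustration, since the study's actual fuzzy partitions are not given in the abstract:

```python
def tri(x, a, b, c):
    """Triangular fuzzy membership: 0 outside [a, c], 1 at the peak b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

# hypothetical fuzzy partition of monthly rainfall (mm)
rainfall_low  = lambda r: tri(r, -1.0, 0.0, 40.0)
rainfall_high = lambda r: tri(r, 30.0, 80.0, 200.0)

# 35 mm is partly "low" and partly "high" at the same time; this
# overlap is what lets a fuzzy rule base interpolate between rules
print(rainfall_low(35.0), rainfall_high(35.0))
```

A full Mamdani-style model would combine such memberships for rainfall, flow, and evaporation through IF-THEN rules and defuzzify to a groundwater-level change.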

3. Bayesian leave-one-out cross-validation approximations for Gaussian latent variable models

DEFF Research Database (Denmark)

Vehtari, Aki; Mononen, Tommi; Tolvanen, Ville

2016-01-01

The future predictive performance of a Bayesian model can be estimated using Bayesian cross-validation. In this article, we consider Gaussian latent variable models where the integration over the latent values is approximated using the Laplace method or expectation propagation (EP). We study the ...

4. Estimating structural equation models with non-normal variables by using transformations

NARCIS (Netherlands)

Montfort, van K.; Mooijaart, A.; Meijerink, F.

2009-01-01

We discuss structural equation models for non-normal variables. In this situation the maximum likelihood and the generalized least-squares estimates of the model parameters can give incorrect estimates of the standard errors and the associated goodness-of-fit chi-squared statistics. If the sample

5. The Matrix model, a driven state variables approach to non-equilibrium thermodynamics

NARCIS (Netherlands)

Jongschaap, R.J.J.

2001-01-01

One of the new approaches in non-equilibrium thermodynamics is the so-called matrix model of Jongschaap. In this paper some features of this model are discussed. We indicate the differences with the more common approach based upon internal variables and the more sophisticated Hamiltonian and GENERIC

6. An improved car-following model considering variable safety headway distance

Science.gov (United States)

Jia, Yu-han; Du, Yi-man; Wu, Jian-ping

2014-12-01

Considering high speed following on expressway or highway, an improved car-following model is developed in this paper by introducing variable safety headway distance. Stability analysis of the new model is carried out using the control theory method. Finally, numerical simulations are implemented and the results show good consistency with theoretical study.
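Car-following models of this family couple each vehicle's acceleration to an optimal-velocity function of its headway. The sketch below is the classic fixed-headway optimal-velocity baseline with illustrative parameter values; the paper's contribution, a variable safety headway, would make the constant `safe_headway` speed-dependent:

```python
import math

def optimal_velocity(headway, v_max=33.0, safe_headway=25.0, width=10.0):
    """Desired speed as a smooth function of the gap to the car ahead.
    safe_headway is a fixed illustrative constant here; a variable
    safety headway would replace it with a function of speed."""
    return 0.5 * v_max * (1.0 + math.tanh((headway - safe_headway) / width))

def step(positions, speeds, dt=0.1, sensitivity=0.6):
    """One Euler step on a 1000 m ring road: each car accelerates
    toward the optimal velocity for its current headway."""
    n = len(positions)
    new_v = []
    for i in range(n):
        gap = positions[(i + 1) % n] - positions[i]
        if i == n - 1:
            gap += 1000.0            # wrap-around gap on the ring
        a = sensitivity * (optimal_velocity(gap) - speeds[i])
        new_v.append(speeds[i] + a * dt)
    new_p = [p + v * dt for p, v in zip(positions, new_v)]
    return new_p, new_v

# uniform flow: equally spaced cars relax to the optimal velocity
pos = [i * 50.0 for i in range(20)]
vel = [0.0] * 20
for _ in range(2000):
    pos, vel = step(pos, vel)
```

Linear stability analysis of exactly this kind of update rule is what the control-theory method in the paper carries out for the improved model.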

7. UAH mathematical model of the variable polarity plasma ARC welding system calculation

Science.gov (United States)

Hung, R. J.

1994-01-01

Significant advantages of Variable Polarity Plasma Arc (VPPA) welding process include faster welding, fewer repairs, less joint preparation, reduced weldment distortion, and absence of porosity. A mathematical model is presented to analyze the VPPA welding process. Results of the mathematical model were compared with the experimental observation accomplished by the GDI team.

8. Robust Model Predictive Control of a Nonlinear System with Known Scheduling Variable and Uncertain Gain

DEFF Research Database (Denmark)

Mirzaei, Mahmood; Poulsen, Niels Kjølstad; Niemann, Hans Henrik

2012-01-01

Robust model predictive control (RMPC) of a class of nonlinear systems is considered in this paper. We will use Linear Parameter Varying (LPV) model of the nonlinear system. By taking the advantage of having future values of the scheduling variable, we will simplify state prediction. Because...... of the special structure of the problem, uncertainty is only in the B matrix (gain) of the state space model. Therefore by taking advantage of this structure, we formulate a tractable minimax optimization problem to solve robust model predictive control problem. Wind turbine is chosen as the case study and we...... choose wind speed as the scheduling variable. Wind speed is measurable ahead of the turbine, therefore the scheduling variable is known for the entire prediction horizon....

9. Universal solvation model based on solute electron density and on a continuum model of the solvent defined by the bulk dielectric constant and atomic surface tensions.

Science.gov (United States)

Marenich, Aleksandr V; Cramer, Christopher J; Truhlar, Donald G

2009-05-07

We present a new continuum solvation model based on the quantum mechanical charge density of a solute molecule interacting with a continuum description of the solvent. The model is called SMD, where the "D" stands for "density" to denote that the full solute electron density is used without defining partial atomic charges. "Continuum" denotes that the solvent is not represented explicitly but rather as a dielectric medium with surface tension at the solute-solvent boundary. SMD is a universal solvation model, where "universal" denotes its applicability to any charged or uncharged solute in any solvent or liquid medium for which a few key descriptors are known (in particular, dielectric constant, refractive index, bulk surface tension, and acidity and basicity parameters). The model separates the observable solvation free energy into two main components. The first component is the bulk electrostatic contribution arising from a self-consistent reaction field treatment that involves the solution of the nonhomogeneous Poisson equation for electrostatics in terms of the integral-equation-formalism polarizable continuum model (IEF-PCM). The cavities for the bulk electrostatic calculation are defined by superpositions of nuclear-centered spheres. The second component is called the cavity-dispersion-solvent-structure term and is the contribution arising from short-range interactions between the solute and solvent molecules in the first solvation shell. This contribution is a sum of terms that are proportional (with geometry-dependent proportionality constants called atomic surface tensions) to the solvent-accessible surface areas of the individual atoms of the solute. The SMD model has been parametrized with a training set of 2821 solvation data including 112 aqueous ionic solvation free energies, 220 solvation free energies for 166 ions in acetonitrile, methanol, and dimethyl sulfoxide, 2346 solvation free energies for 318 neutral solutes in 91 solvents (90 nonaqueous

10. Modeling and fabrication of an RF MEMS variable capacitor with a fractal geometry

KAUST Repository

Elshurafa, Amro M.

2013-08-16

In this paper, we model, fabricate, and measure an electrostatically actuated MEMS variable capacitor that utilizes a fractal geometry and serpentine-like suspension arms. Explicitly, a variable capacitor that possesses a top suspended plate with a specific fractal geometry and also possesses a bottom fixed plate complementary in shape to the top plate has been fabricated in the PolyMUMPS process. An important benefit that was achieved from using the fractal geometry in designing the MEMS variable capacitor is increasing the tuning range of the variable capacitor beyond the typical ratio of 1.5. The modeling was carried out using the commercially available finite element software COMSOL to predict both the tuning range and pull-in voltage. Measurement results show that the tuning range is 2.5 at a maximum actuation voltage of 10V.
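For context, the pull-in voltage that the authors obtain from COMSOL for their fractal plate has a closed form only in the ideal flat parallel-plate limit. A sketch of that textbook limiting case, with hypothetical spring constant, gap, and plate area (not the device's actual values):

```python
import math

EPS0 = 8.854e-12  # vacuum permittivity, F/m

def pull_in_voltage(k, gap, area):
    """Textbook pull-in voltage of an ideal parallel-plate electrostatic
    actuator with a linear spring: V_pi = sqrt(8*k*g^3 / (27*eps0*A)),
    reached when the plate has travelled one third of the gap.  The
    fractal plate in the paper requires finite-element analysis; this
    closed form is only the flat-plate limiting case."""
    return math.sqrt(8.0 * k * gap**3 / (27.0 * EPS0 * area))

# hypothetical surface-micromachined values: 2 um gap, 0.16 mm^2 plate
v_pi = pull_in_voltage(k=10.0, gap=2.0e-6, area=1.6e-7)
```

The strong gap dependence (V_pi ∝ g^1.5) is why suspension design dominates the trade-off between actuation voltage and tuning range in such devices.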

11. Towards multi-resolution global climate modeling with ECHAM6-FESOM. Part II: climate variability

Science.gov (United States)

Rackow, T.; Goessling, H. F.; Jung, T.; Sidorenko, D.; Semmler, T.; Barbi, D.; Handorf, D.

2018-04-01

This study forms part II of two papers describing ECHAM6-FESOM, a newly established global climate model with a unique multi-resolution sea ice-ocean component. While part I deals with the model description and the mean climate state, here we examine the internal climate variability of the model under constant present-day (1990) conditions. We (1) assess the internal variations in the model in terms of objective variability performance indices, (2) analyze variations in global mean surface temperature and put them in context to variations in the observed record, with particular emphasis on the recent warming slowdown, (3) analyze and validate the most common atmospheric and oceanic variability patterns, (4) diagnose the potential predictability of various climate indices, and (5) put the multi-resolution approach to the test by comparing two setups that differ only in oceanic resolution in the equatorial belt, where one ocean mesh keeps the coarse 1° resolution applied in the adjacent open-ocean regions and the other mesh is gradually refined to 0.25°. Objective variability performance indices show that, in the considered setups, ECHAM6-FESOM performs overall favourably compared to five well-established climate models. Internal variations of the global mean surface temperature in the model are consistent with observed fluctuations and suggest that the recent warming slowdown can be explained as a once-in-one-hundred-years event caused by internal climate variability; periods of strong cooling in the model (`hiatus' analogs) are mainly associated with ENSO-related variability and to a lesser degree also to PDO shifts, with the AMO playing a minor role. Common atmospheric and oceanic variability patterns are simulated largely consistent with their real counterparts. Typical deficits also found in other models at similar resolutions remain, in particular too weak non-seasonal variability of SSTs over large parts of the ocean and episodic periods of almost absent

12. A Correlation-Based Transition Model using Local Variables. Part 2; Test Cases and Industrial Applications

Science.gov (United States)

Langtry, R. B.; Menter, F. R.; Likki, S. R.; Suzen, Y. B.; Huang, P. G.; Volker, S.

2006-01-01

A new correlation-based transition model has been developed, which is built strictly on local variables. As a result, the transition model is compatible with modern computational fluid dynamics (CFD) methods using unstructured grids and massive parallel execution. The model is based on two transport equations, one for the intermittency and one for the transition onset criteria in terms of momentum thickness Reynolds number. The proposed transport equations do not attempt to model the physics of the transition process (unlike, e.g., turbulence models), but form a framework for the implementation of correlation-based models into general-purpose CFD methods.

13. Fractional derivatives of constant and variable orders applied to anomalous relaxation models in heat transfer problems

Directory of Open Access Journals (Sweden)

Yang Xiao-Jun

2017-01-01

In this paper, we address a class of fractional derivatives of constant and variable orders for the first time. Fractional-order relaxation equations of constant and variable orders in the sense of the Caputo type are modeled from a mathematical point of view. Comparative results for the anomalous relaxation among the various fractional derivatives are also given. They are very efficient in describing the complex phenomena arising in heat transfer.
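For the constant-order Caputo case, the relaxation equation D^α u(t) = -λ u(t) with u(0) = 1 has the closed-form solution u(t) = E_α(-λ t^α) in terms of the one-parameter Mittag-Leffler function, which direct series summation can evaluate for small arguments. A minimal sketch (not code from the paper):

```python
import math

def mittag_leffler(alpha, z, terms=80):
    """One-parameter Mittag-Leffler function E_alpha(z) by direct
    series summation (adequate for the small |z| used here)."""
    return sum(z**k / math.gamma(alpha * k + 1.0) for k in range(terms))

def fractional_relaxation(t, alpha, lam=1.0):
    """Solution u(t) = E_alpha(-lam * t**alpha) of the Caputo-type
    fractional relaxation equation D^alpha u = -lam * u, u(0) = 1.
    alpha = 1 recovers ordinary exponential decay; alpha < 1 gives
    the slower, 'anomalous' decay discussed in the paper."""
    return mittag_leffler(alpha, -lam * t**alpha)

# at t = 1 the anomalous (alpha = 0.8) relaxation sits above the
# ordinary exponential decay
print(fractional_relaxation(1.0, 1.0), fractional_relaxation(1.0, 0.8))
```

Comparing these curves for several α values reproduces the kind of anomalous-relaxation comparison the paper reports, here for the constant-order case only.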

14. Evaluation of a constitutive model for unsaturated soils: stress variables and numerical implementation

OpenAIRE

González, Nubia Aurora; Gens Solé, Antonio

2010-01-01

This paper presents an evaluation of a constitutive model for unsaturated soils based on the BBM (Alonso et al. 1990). The focus of the paper is on the stress variables used and on the numerical algorithms adopted. Conventional stress variable approach (net stress and suction) as well as the approach that takes into account the degree of saturation (Bishop’s stress and suction) are examined. To solve the constitutive stress–strain equations, two stress integration procedures have been impleme...

15. On the intra-seasonal variability within the extratropics in the ECHAM3 general circulation model

International Nuclear Information System (INIS)

May, W.

1994-01-01

First we consider the GCM's capability to reproduce the midlatitude variability on intra-seasonal time scales by a comparison with observational data (ECMWF analyses). Secondly we assess the possible influence of Sea Surface Temperatures (SST) on the intra-seasonal variability by comparing estimates obtained from different simulations performed with ECHAM3 with varying and fixed SST as boundary forcing. The intra-seasonal variability as simulated by ECHAM3 is underestimated over most of the Northern Hemisphere. While the contributions of the high-frequency transient fluctuations are reasonably well captured by the model, ECHAM3 fails to reproduce the observed level of low-frequency intra-seasonal variability. This is mainly due to the model's underestimation of the variability caused by the ultra-long planetary waves in the Northern Hemisphere midlatitudes. In the Southern Hemisphere midlatitudes, on the other hand, the intra-seasonal variability as simulated by ECHAM3 is generally underestimated in the area north of about 50°S, but overestimated at higher latitudes. This is the case for the contributions of both the high-frequency and the low-frequency transient fluctuations. Further, the model indicates a strong tendency towards zonal symmetry, in particular with respect to the high-frequency transient fluctuations. While the two sets of simulations with varying and fixed Sea Surface Temperatures as boundary forcing reveal only small regional differences in the Southern Hemisphere, there is a strong response in the Northern Hemisphere. The contributions of the high-frequency transient fluctuations to the intra-seasonal variability are generally stronger in the simulations with fixed SST. Further, the Pacific storm track is shifted slightly poleward in this set of simulations. For the low-frequency intra-seasonal variability the model shows a strong, but regional, response to the interannual variations of the SST. (orig.)

16. Variables influencing the use of derivatives in South Africa – the development of a conceptual model

Directory of Open Access Journals (Sweden)

Stefan Schwegler

2011-03-01

Full Text Available This paper, which is the first in a two-part series, sets out the development of a conceptual model on the variables influencing investors’ decisions to use derivatives in their portfolios. Investor-specific variables include: the investor’s needs, goals and return expectations, the investor’s knowledge of financial markets, familiarity with different asset classes including derivative instruments, and the investor’s level of wealth and level of risk tolerance. Market-specific variables include: the level of volatility, standardisation, regulation and liquidity in a market, the level of information available on derivatives, the transparency of price determination, taxes, brokerage costs and product availability.

17. Simulation model structure numerically robust to changes in magnitude and combination of input and output variables

DEFF Research Database (Denmark)

Rasmussen, Bjarne D.; Jakobsen, Arne

1999-01-01

Mathematical models of refrigeration systems are often based on a coupling of component models forming a “closed loop” type of system model. In these models the coupling structure of the component models represents the actual flow path of refrigerant in the system. Very often numerical instabilities prevent the practical use of such a system model for more than one input/output combination and for other magnitudes of refrigerating capacities. A higher numerical robustness of system models can be achieved by making a model for the refrigeration cycle the core of the system model and by using variables with narrow definition intervals for the exchange of information between the cycle model and the component models. The advantages of the cycle-oriented method are illustrated by an example showing the refrigeration cycle similarities between two very different refrigeration systems.

18. A new approach to hazardous materials transportation risk analysis: decision modeling to identify critical variables.

Science.gov (United States)

Clark, Renee M; Besterfield-Sacre, Mary E

2009-03-01

We take a novel approach to analyzing hazardous materials transportation risk in this research. Previous studies analyzed this risk from an operations research (OR) or quantitative risk assessment (QRA) perspective by minimizing or calculating risk along a transport route. Further, even though the majority of incidents occur when containers are unloaded, the research has not focused on transportation-related activities, including container loading and unloading. In this work, we developed a decision model of a hazardous materials release during unloading using actual data and an exploratory data modeling approach. Previous studies have had a theoretical perspective in terms of identifying and advancing the key variables related to this risk, and there has not been a focus on probability and statistics-based approaches for doing this. Our decision model empirically identifies the critical variables using an exploratory methodology for a large, highly categorical database involving latent class analysis (LCA), loglinear modeling, and Bayesian networking. Our model identified the most influential variables and countermeasures for two consequences of a hazmat incident, dollar loss and release quantity, and is one of the first models to do this. The most influential variables were found to be related to the failure of the container. In addition to analyzing hazmat risk, our methodology can be used to develop data-driven models for strategic decision making in other domains involving risk.

19. Phylogenetic tree reconstruction accuracy and model fit when proportions of variable sites change across the tree.

Science.gov (United States)

Shavit Grievink, Liat; Penny, David; Hendy, Michael D; Holland, Barbara R

2010-05-01

Commonly used phylogenetic models assume a homogeneous process through time in all parts of the tree. However, it is known that these models can be too simplistic, as they do not account for nonhomogeneous lineage-specific properties. In particular, it is now widely recognized that, as constraints on sequences evolve, the proportion and positions of variable sites can vary between lineages, causing heterotachy. The extent to which this model misspecification affects tree reconstruction is still unknown. Here, we evaluate the effect of changes in the proportions and positions of variable sites on model fit and tree estimation. We consider 5 current models of nucleotide sequence evolution in a Bayesian Markov chain Monte Carlo framework, as well as maximum parsimony (MP). We show that for a tree with 4 lineages where 2 nonsister taxa undergo a change in the proportion of variable sites, tree reconstruction under the best-fitting model, which is chosen using a relative test, often results in the wrong tree. In this case, we found that an absolute test of model fit is a better predictor of tree estimation accuracy. We also found further evidence that MP is not immune to heterotachy. In addition, we show that increased sampling of taxa that have undergone a change in proportion and positions of variable sites is critical for accurate tree reconstruction.

20. Modeling and analysis of gear tooth crack growth under variable-amplitude loading

Science.gov (United States)

Yin, Juliang; Wang, Wenyi; Man, Zhihong; Khoo, Suiyang

2013-10-01

The purpose of this paper is to reveal the pattern of gear tooth crack growth under variable-amplitude loading. To this end, a nonlinear dynamic model is proposed to describe the gear tooth crack growth. The state variables of the model are the crack length and the crack opening stress. The dynamics of crack growth is modeled as a modified Paris equation based on the concept of crack closure. A nonlinear second-order autoregressive equation is developed to model the dynamic behavior of the crack opening stresses. The model parameters are estimated by means of a two-step estimation method because of the relatively small sample size of crack length data from the G6 gear tests. The model is also validated with the crack growth data of the G6 gear.
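A rough sketch of this kind of closure-modified Paris law, integrated cycle by cycle over blocks of variable-amplitude loading (all constants, the geometry factor Y and the closure factor U are illustrative placeholders, not the paper's estimated parameters):

```python
import math

def crack_growth(a0, stress_blocks, C=1e-11, m=3.0, Y=1.12, U=0.7):
    """Cycle-by-cycle integration of a modified Paris law,
    da/dN = C * (dK_eff)**m, with a crack-closure factor U so that the
    effective stress-intensity range is dK_eff = U * dK.

    stress_blocks: iterable of (delta_sigma [MPa], n_cycles) pairs, so that
    variable-amplitude loading is approximated by constant-amplitude blocks.
    Returns the final crack length and its value after each block.
    """
    a = a0
    history = [a0]
    for dsigma, n in stress_blocks:
        for _ in range(n):
            dK = Y * dsigma * math.sqrt(math.pi * a)   # MPa*sqrt(m)
            dK_eff = U * dK                            # crack-closure correction
            a += C * dK_eff**m                         # Paris increment per cycle
        history.append(a)
    return a, history
```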

1. Use of the D-R model to define trends in the emergence of Ceftazidime-resistant Escherichia coli in China

Science.gov (United States)

Objective: To assess the efficacy of the D-R model for defining trends in the appearance of Ceftazidime-resistant Escherichia coli. Methods: Actual data related to the manifestation of Ceftazidime-resistant E.coli spanning years 1996-2009 were collected from the China National Knowledge Internet (CN...

2. Use of variability modes to evaluate AR4 climate models over the Euro-Atlantic region

Energy Technology Data Exchange (ETDEWEB)

2012-01-15

This paper analyzes the ability of the multi-model simulations from the Fourth Assessment Report of the Intergovernmental Panel on Climate Change (IPCC) to simulate the main leading modes of variability over the Euro-Atlantic region in winter: the North-Atlantic Oscillation (NAO), the Scandinavian mode (SCAND), the East/Atlantic Oscillation (EA) and the East Atlantic/Western Russia mode (EA/WR). These modes of variability have been evaluated both spatially, by analyzing the intensity and location of their anomaly centres, as well as temporally, by focusing on the probability density functions and e-folding time scales. The choice of variability modes as a tool for climate model assessment can be justified by the fact that modes of variability determine local climatic conditions and their likely change may have important implications for future climate changes. It is found that all the models considered are able to simulate reasonably well these four variability modes, the SCAND being the mode which is best spatially simulated. From a temporal point of view the NAO and SCAND modes are the best simulated. UKMO-HadGEM1 and CGCM3.1(T63) are the models best at reproducing spatial characteristics, whereas CCSM3 and CGCM3.1(T63) are the best ones with regard to the temporal features. GISS-AOM is the model showing the worst performance, in terms of both spatial and temporal features. These results may bring new insight into the selection and use of specific models to simulate Euro-Atlantic climate, with some models being clearly more successful in simulating patterns of temporal and spatial variability than others. (orig.)
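Modes of variability such as the NAO are commonly extracted as empirical orthogonal functions (EOFs) of a gridded anomaly field; a generic numpy sketch of that step (not the authors' exact procedure) is:

```python
import numpy as np

def leading_modes(field, n_modes=4):
    """Leading modes of variability (EOFs) of a (time, space) field via SVD.
    Returns spatial patterns, principal-component time series and the
    fraction of variance explained by each mode."""
    anom = field - field.mean(axis=0)           # remove the time mean
    u, s, vt = np.linalg.svd(anom, full_matrices=False)
    var_frac = s**2 / np.sum(s**2)              # variance fraction per mode
    patterns = vt[:n_modes]                     # spatial patterns (EOFs)
    pcs = u[:, :n_modes] * s[:n_modes]          # PC time series
    return patterns, pcs, var_frac[:n_modes]
```

The temporal diagnostics in the abstract (probability density functions, e-folding times) would then be computed from the PC time series of each mode.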

3. A Parameterized Variable Dark Energy Model: Structure Formation and Observational Constraints

OpenAIRE

Bidgoli, Sepehr Arbabi; Movahed, M. Sadegh; Rahvar, Sohrab

2005-01-01

In this paper we investigate a simple parameterization scheme of the quintessence model given by Wetterich (2004). The crucial parameter of this model is the bending parameter $b$, which is related to the amount of dark energy in the early universe. Using the linear perturbation and the spherical infall approximations, we investigate the evolution of matter density perturbations in the variable dark energy model, and obtain an analytical expression for the growth index $f$. We show that incre...

4. Modelling accuracy and variability of motor timing in treated and untreated Parkinson’s disease and healthy controls

Directory of Open Access Journals (Sweden)

Catherine Rhian Gwyn Jones

2011-12-01

Full Text Available Parkinson’s disease (PD) is characterised by difficulty with the timing of movements. Data collected using the synchronization-continuation paradigm, an established motor timing paradigm, have produced varying results, with most studies finding impairment. Some of this inconsistency comes from variation in the medication state tested, in the inter-stimulus intervals (ISI) selected, and in the changeable focus on either the synchronization (tapping in time with a tone) or continuation (maintaining the rhythm in the absence of the tone) phase. We sought to re-visit the paradigm by testing across four groups of participants: healthy controls, medication-naïve de novo PD patients, and treated PD patients both ‘on’ and ‘off’ dopaminergic medication. Four finger tapping intervals (ISI) were used: 250 ms, 500 ms, 1000 ms and 2000 ms. Categorical predictors (group, ISI, and phase) were used to predict accuracy and variability using a linear mixed model. Accuracy was defined as the relative error of a tap, and variability as the deviation of the participant’s tap from the group-predicted relative error. Our primary finding is that the treated PD group (PD patients ‘on’ and ‘off’ dopaminergic therapy) showed a significantly different pattern of accuracy compared to the de novo group and the healthy controls at the 250 ms interval. At this interval, the treated PD patients performed ‘ahead’ of the beat whilst the other groups performed ‘behind’ the beat. We speculate that this ‘hastening’ relates to the clinical phenomenon of motor festination. Across all groups, variability was smallest for both phases at the 500 ms interval, suggesting an innate preference for finger tapping within this range. Tapping variability for the two phases became increasingly divergent at the longer intervals, with worse performance in the continuation phase. The data suggest that patients with PD can be best discriminated from healthy controls on measures of

5. Defined-Sector Explicit Solvent in Continuum Cluster Model for Computational Prediction of pKa: Consideration of Secondary Functionality and Higher Degree of Solvation.

Science.gov (United States)

Abramson, Rebecca A; Baldridge, Kim K

2013-02-12

Benchmark accuracy for prediction of first and second dissociation constants (pKa1 and pKa2 values) is realized with the recently developed Defined-Sector Explicit Solvent in Continuum Cluster Model. The model provides a systematic basis for inclusion of explicit solvation, essential for accurate prediction of dissociation constants using computational continuum model approaches. The DSES-CC model is demonstrated by considering the structure-to-chemical affinity relationship of the carboxyl functional group and is shown to provide predictability with mean absolute error of 0.5 pK units across a wide array of carboxylic acid functionality.

6. The use of ZIP and CART to model cryptosporidiosis in relation to climatic variables.

Science.gov (United States)

Hu, Wenbiao; Mengersen, Kerrie; Fu, Shiu-Yun; Tong, Shilu

2010-07-01

This research assesses the potential impact of weekly weather variability on the incidence of cryptosporidiosis disease using time series zero-inflated Poisson (ZIP) and classification and regression tree (CART) models. Data on weather variables, notified cryptosporidiosis cases and population size in Brisbane were supplied by the Australian Bureau of Meteorology, Queensland Department of Health, and Australian Bureau of Statistics, respectively. Both time series ZIP and CART models show a clear association between weather variables (maximum temperature, relative humidity, rainfall and wind speed) and cryptosporidiosis disease. The time series CART models indicated that, when weekly maximum temperature exceeded 31 degrees C and relative humidity was less than 63%, the relative risk of cryptosporidiosis rose by 13.64 (expected morbidity: 39.4; 95% confidence interval: 30.9-47.9). These findings may have applications as a decision support tool in planning disease control and risk-management programs for cryptosporidiosis disease.
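The ZIP distribution at the core of the time series model mixes a point mass at zero with a Poisson count; a minimal sketch of its probability mass function (illustrative only; the paper's model additionally links the Poisson mean and the zero-inflation probability to the weather covariates):

```python
import math

def zip_pmf(k, lam, pi):
    """Probability mass of a zero-inflated Poisson: with probability pi the
    count is a 'structural' zero, otherwise it is Poisson(lam). Zero counts
    therefore occur more often than a plain Poisson(lam) would predict."""
    poisson = math.exp(-lam) * lam**k / math.factorial(k)
    return pi * (k == 0) + (1 - pi) * poisson
```

In a ZIP regression, lam would typically be exp(linear predictor of the weather variables) and pi a logistic function of covariates.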

7. Latent Variable Modelling and Item Response Theory Analyses in Marketing Research

Directory of Open Access Journals (Sweden)

Brzezińska Justyna

2016-12-01

Full Text Available Item Response Theory (IRT) is a modern statistical method using latent variables designed to model the interaction between a subject’s ability and the item-level stimuli (difficulty, guessing). Item responses are treated as the outcome (dependent) variables, and the examinee’s ability and the items’ characteristics are the latent predictor (independent) variables. IRT models the relationship between a respondent’s trait (ability, attitude) and the pattern of item responses. Thus, the estimation of individual latent traits can differ even for two individuals with the same total scores. IRT scores can yield additional benefits, and this will be discussed in detail. In this paper, theory and applications in the R software, using packages designed for modelling IRT, will be presented.
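The paper demonstrates IRT with R packages; as a language-neutral illustration, the item response function of the 2PL/3PL models described above can be written as:

```python
import math

def irt_prob(theta, a, b, c=0.0):
    """Probability of a correct response under the 2PL (c = 0) or 3PL model:
        P = c + (1 - c) / (1 + exp(-a * (theta - b)))
    theta: latent ability; a: item discrimination; b: item difficulty;
    c: lower asymptote (guessing)."""
    return c + (1 - c) / (1 + math.exp(-a * (theta - b)))
```

At theta = b the probability is (1 + c)/2, i.e. exactly 0.5 when there is no guessing parameter; this is why b is read as the item's difficulty on the ability scale.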

8. AMOC decadal variability in Earth system models: Mechanisms and climate impacts

Energy Technology Data Exchange (ETDEWEB)

Fedorov, Alexey [Yale Univ., New Haven, CT (United States)

2017-09-06

This is the final report for the project titled "AMOC decadal variability in Earth system models: Mechanisms and climate impacts". The central goal of this one-year research project was to understand the mechanisms of decadal and multi-decadal variability of the Atlantic Meridional Overturning Circulation (AMOC) within a hierarchy of climate models ranging from realistic ocean GCMs to Earth system models. The AMOC is a key element of ocean circulation responsible for oceanic transport of heat from low to high latitudes and controlling, to a large extent, climate variations in the North Atlantic. The questions of the AMOC stability, variability and predictability, directly relevant to the questions of climate predictability, were at the center of the research work.

9. Latent variable modelling of risk factors associated with childhood diseases: Case study for Nigeria

Directory of Open Access Journals (Sweden)

Khaled Khatab

2011-09-01

Full Text Available Objective: To investigate the impact of various bio-demographic and socio-economic variables on joint childhood diseases in Nigeria with flexible geoadditive probit models. Methods: A geoadditive latent variable model (LVM) was applied in which the three observable disease variables (diarrhea, cough, fever) were modelled as indicators of the latent individual variable "health status" or "frailty" of a child. This modelling approach allowed us to investigate the common influence of risk factors on the individual frailties of children, thereby automatically accounting for association between the diseases as indicators of health status. The LVM was extended to analyze the impact of risk factors and spatial effects on the unobservable variable "health status" of a child less than 5 years of age, using the 2003 Demographic and Health Surveys (DHS) data for Nigeria. Results: The results suggest some strong underlying spatial patterns of the three ailments, with a clear southeastern divide in childhood morbidities; this might result from the overlapping of the various risk factors. Conclusions: Comorbidity with conditions such as cough, diarrhoea and fever is common in Nigeria. However, little is known about common risk factors and geographical overlaps in these illnesses. The search for overlapping common risk factors and their spatial effects may improve our understanding of the etiology of these diseases for efficient and cost-effective control and planning of the three ailments.

10. An agent-based model of cellular dynamics and circadian variability in human endotoxemia.

Directory of Open Access Journals (Sweden)

Tung T Nguyen

Full Text Available As cellular variability and circadian rhythmicity play critical roles in immune and inflammatory responses, we present in this study an agent-based model of human endotoxemia to examine the interplay between circadian controls, cellular variability and stochastic dynamics of inflammatory cytokines. The model is qualitatively validated by its ability to reproduce circadian dynamics of inflammatory mediators and critical inflammatory responses after endotoxin administration in vivo. Novel computational concepts are proposed to characterize the cellular variability and synchronization of inflammatory cytokines in a population of heterogeneous leukocytes. Our results suggest that there is a decrease in cell-to-cell variability of inflammatory cytokines while their synchronization is increased after endotoxin challenge. Model parameters that are responsible for IκB production stimulated by NFκB activation and for the production of anti-inflammatory cytokines have large impacts on system behaviors. Additionally, examining time-dependent systemic responses revealed that the system is least vulnerable to endotoxin in the early morning and most vulnerable around midnight. Although much remains to be explored, proposed computational concepts and the model we have pioneered will provide important insights for future investigations and extensions, especially for single-cell studies to discover how cellular variability contributes to clinical implications.

11. Modelling the effects of environmental and individual variability when measuring the costs of first reproduction

Directory of Open Access Journals (Sweden)

Barbraud, C.

2004-06-01

Full Text Available How do animals balance their investment in young against their own chances to survive and reproduce in the future? This life–history trade–off, referred to as the cost of reproduction (Williams, 1966), holds a central place in life–history theory (Roff, 1992; Stearns, 1992; McNamara & Houston, 1996). Because individuals can only acquire a limited amount of energy, reproduction and survival, as well as current and future reproduction, are considered as functions competing for the same resources. In this framework, individuals may optimise life–history decisions. If the reproductive effort in one year leads to a loss in future reproductive output through decreased adult survival or reduced fecundity, then the optimal effort in the current season is less than the effort that would maximize the number of offspring produced in that season (Charnov & Krebs, 1974). There are at least two kinds of factors likely to confound the measurement of the costs of reproduction in the wild. First, there could be differences in the amount of energy individuals acquire and allocate to various functions. This phenotypic heterogeneity can mask or exacerbate individual allocation patterns when trends are averaged across a population (Vaupel & Yashin, 1985; McDonald et al., 1996; Cam & Monnat, 2000). Second, there could be variations in resource availability affecting energy acquisition and allocation. Theoretical models examining the optimal phenotypic balance between reproduction and survival under variable breeding conditions have investigated the influence of environmental stochasticity on the cost of reproduction in birds (Erikstad et al., 1998; Orzack & Tuljapurkar, 2001). However, there is little empirical evidence supporting these theoretical models. Here, we present an analysis of the influence of experience, but also of the differential effects of environmental and individual variation on survival and future breeding probability. We address the question of the

12. Statistical methodology for discrete fracture model - including fracture size, orientation uncertainty together with intensity uncertainty and variability

International Nuclear Information System (INIS)

Darcel, C.; Davy, P.; Le Goc, R.; Dreuzy, J.R. de; Bour, O.

2009-11-01

the other, addresses the issue of the nature of the transition. We develop a new 'mechanistic' model that could help in modeling why and where this transition can occur. The transition between both regimes would occur for a fracture length of 1-10 m and even at a smaller scale for the few outcrops that follow the self-similar density model. A consequence for the disposal issue is that the model that is likely to apply in the 'blind' scale window between 10-100 m is the self-similar model as it is defined for large-scale lineaments. The self-similar model, as it is measured for some outcrops and most lineament maps, is definitely worth being investigated as a reference for scales above 1-10 m. In the rest of the report, we develop a methodology for incorporating uncertainty and variability into the DFN modeling. Fracturing properties arise from complex processes which produce an intrinsic variability; characterizing this variability as an admissible variation of model parameter or as the division of the site into subdomains with distinct DFN models is a critical point of the modeling effort. Moreover, the DFN model encompasses a part of uncertainty, due to data inherent uncertainties and sampling limits. Both effects must be quantified and incorporated into the DFN site model definition process. In that context, all available borehole data including recording of fracture intercept positions, pole orientation and relative uncertainties are used as the basis for the methodological development and further site model assessment. An elementary dataset contains a set of discrete fracture intercepts from which a parent orientation/density distribution can be computed. The elementary bricks of the site, from which these initial parent density distributions are computed, rely on the former Single Hole Interpretation division of the boreholes into sections whose local boundaries are expected to reflect - locally - geology and fracturing properties main characteristics. 
From that

13. A model-based study of ice and freshwater transport variability along both sides of Greenland

Energy Technology Data Exchange (ETDEWEB)

Lique, Camille; Treguier, Anne Marie [CNRS-Ifremer-UBO-IRD, Laboratoire de Physique des Oceans, Brest (France); Scheinert, Markus [IFM-GEOMAR, Leibniz-Institut fuer Meereswissenshaften, Kiel (Germany); Penduff, Thierry [Laboratoire des Ecoulements Geophysiques et Industriels, Grenoble (France); The Florida State University, Department of Oceanography, Tallahassee, FL (United States)

2009-10-15

We investigate some aspects of the variability of the Arctic freshwater content during the 1965-2002 period using the DRAKKAR eddy-admitting global ocean/sea-ice model (12 km resolution in the Arctic). A comparison with recent mooring sections shows that the model realistically represents the major advective exchanges with the Arctic basin, through Bering, Fram and Davis Straits, and the Barents Sea. This allows the separate contributions of the inflows and outflows across each section to be quantified. In the model, the Arctic freshwater content variability is explained by the sea-ice flux at Fram and the combined variations of ocean freshwater inflow (at Bering) and outflow (at Fram and Davis). Along all routes, except through Fram Strait, the freshwater transport variability is mainly accounted for by the liquid component, with small contributions from the sea-ice flux. The ocean freshwater transport variability through both Davis and Fram is controlled by the variability of the export branch (Baffin Island Current and East Greenland Current, respectively), with the variability of the inflow branches playing a minor role. We examine the respective roles of velocity and salinity fluctuations in the variability of the ocean freshwater transport. Fram and Davis Straits offer a striking contrast in this regard. Freshwater transport variations across Davis Strait are completely determined by the variations of the total volume flux (0.91 correlation). On the other hand, the freshwater transport through Fram Strait depends both on variations of volume transport and salinity. As a result, there is no significant correlation between the variability of the freshwater flux at Fram and Davis, although the volume transports on each side of Greenland are strongly anti-correlated (-0.84). Contrary to Davis Strait, the salinity of water carried by the East Greenland Current through Fram Strait varies strongly due to the ice-ocean flux north of Greenland. (orig.)
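For context, the liquid freshwater transport discussed here is conventionally computed relative to a reference salinity; a schematic numpy version (the reference value and sign convention are illustrative assumptions, not the paper's exact choices):

```python
import numpy as np

S_REF = 34.8  # reference salinity (psu); a common but arbitrary convention

def freshwater_transport(v, S, area):
    """Liquid freshwater transport through a section:
        F = sum( v * (S_ref - S) / S_ref * cell_area ),  in m^3/s for v in m/s.
    v, S, area: per-cell normal velocity, salinity and cell area along the
    section. Holding one field at its time mean while the other varies is a
    simple way to separate velocity-driven from salinity-driven variability."""
    return float(np.sum(v * (S_REF - S) / S_REF * area))
```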

14. Multiple imputation with non-additively related variables: Joint-modeling and approximations.

Science.gov (United States)

Kim, Soeun; Belin, Thomas R; Sugar, Catherine A

2016-09-19

This paper investigates multiple imputation methods for regression models with interacting continuous and binary predictors when the continuous variable may be missing. Typical implementations of parametric multiple imputation assume a multivariate normal structure for the variables, which is satisfied neither by a binary variable nor by its interaction with a continuous variable. To accommodate interactions, missing covariates are multiply imputed from a conditional distribution in a manner consistent with the joint model. Alternative imputation methods under multivariate normal assumptions are also considered as candidate approximations and evaluated in a simulation study. The results suggest that the joint modeling procedure performs generally well across a wide range of scenarios, and so do the approximation methods that incorporate interactions in the model appropriately by stratification. It is critical to include interactions in the imputation model, as failure to do so may result in low coverage and bias. We apply the joint modeling approach and approximation methods in a study of childhood trauma with a gender × trauma interaction. © The Author(s) 2016.
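A toy version of the stratified approximation described above: impute the continuous variable separately within each level of the binary variable, so that the x-by-z interaction survives imputation (a hypothetical helper, not the authors' code):

```python
import numpy as np

def impute_by_stratum(x, z, rng=None):
    """Impute missing values of a continuous variable x separately within each
    level of a binary variable z, drawing from a normal distribution fit to
    the observed values in that stratum. Stratifying preserves an x-by-z
    interaction that a single pooled normal imputation would flatten.
    Single-draw sketch only; a full multiple imputation would repeat the
    draw M times and pool the results."""
    rng = np.random.default_rng(rng)
    x = x.astype(float).copy()
    for level in np.unique(z):
        in_stratum = (z == level)
        miss = in_stratum & np.isnan(x)
        obs = x[in_stratum & ~np.isnan(x)]
        x[miss] = rng.normal(obs.mean(), obs.std(ddof=1), size=miss.sum())
    return x
```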

15. Degree of multicollinearity and variables involved in linear dependence in additive-dominant models

Directory of Open Access Journals (Sweden)

Juliana Petrini

2012-12-01

Full Text Available The objective of this work was to assess the degree of multicollinearity and to identify the variables involved in linear dependence relations in additive-dominant models. Data on birth weight (n=141,567), yearling weight (n=58,124), and scrotal circumference (n=20,371) of Montana Tropical composite cattle were used. Diagnosis of multicollinearity was based on the variance inflation factor (VIF) and on the evaluation of the condition indexes and eigenvalues of the correlation matrix among explanatory variables. The first model studied (RM) included the fixed effect of dam age class at calving and the covariates associated with the direct and maternal additive and non-additive effects. The second model (R) included all the effects of the RM model except the maternal additive effects. Multicollinearity was detected in both models for all traits considered, with VIF values of 1.03 - 70.20 for RM and 1.03 - 60.70 for R. Collinearity increased with the increase in the number of variables in the model and the decrease in the number of observations, and it was classified as weak, with condition index values between 10.00 and 26.77. In general, the variables associated with additive and non-additive effects were involved in multicollinearity, partially due to the natural connection between these covariables as fractions of the biological types in the breed composition.
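The VIF diagnostic used here has a simple definition: regress each explanatory variable on all the others and set VIF_j = 1 / (1 - R_j^2). A self-contained numpy sketch (the threshold and data below are illustrative, not from the paper):

```python
import numpy as np

def variance_inflation_factors(X):
    """VIF for each column of a design matrix X (columns = covariates):
    VIF_j = 1 / (1 - R_j^2), where R_j^2 is from regressing column j on all
    remaining columns plus an intercept. VIF > 10 is a common rule-of-thumb
    threshold for troublesome collinearity."""
    n, p = X.shape
    vifs = []
    for j in range(p):
        y = X[:, j]
        others = np.column_stack([np.ones(n), np.delete(X, j, axis=1)])
        beta, *_ = np.linalg.lstsq(others, y, rcond=None)
        resid = y - others @ beta
        r2 = 1 - resid.var() / y.var()
        vifs.append(1.0 / (1.0 - r2))
    return np.array(vifs)
```

Condition indexes, the paper's second diagnostic, come from the square roots of the ratios of the largest to each eigenvalue of the correlation matrix of X.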

16. Computer Modeling Reveals that Modifications of the Histone Tail Charges Define Salt-Dependent Interaction of the Nucleosome Core Particles

OpenAIRE

Yang, Ye; Lyubartsev, Alexander P.; Korolev, Nikolay; Nordenskiöld, Lars

2009-01-01

Coarse-grained Langevin molecular dynamics computer simulations were conducted for systems that mimic solutions of nucleosome core particles (NCPs). The NCP was modeled as a negatively charged spherical particle representing the complex of DNA and the globular part of the histones combined with attached strings of connected charged beads modeling the histone tails. The size, charge, and distribution of the tails relative to the core were built to match real NCPs. Three models of NCPs were con...

17. Quantifying the Model-Related Variability of Biomass Stock and Change Estimates in the Norwegian National Forest Inventory

Science.gov (United States)

Johannes Breidenbach; Clara Antón-Fernández; Hans Petersson; Ronald E. McRoberts; Rasmus Astrup

2014-01-01

National Forest Inventories (NFIs) provide estimates of forest parameters for national and regional scales. Many key variables of interest, such as biomass and timber volume, cannot be measured directly in the field. Instead, models are used to predict those variables from measurements of other field variables. Therefore, the uncertainty or variability of NFI estimates...

18. Implementation of variable segments to model the arterial system using electromechanical analogy

Science.gov (United States)

Borik, Stefan; Cap, Ivo; Babusiak, Branko; Capova, Klara

2017-05-01

The article deals with the design of an electrical model of variable segments of a non-symmetrical tree of small arteries. This model can be used to simulate blood pressure and flow. Peripheral arterial resistance changes are modelled by an exponentially dependent resistor. By modulating the capacitor value, we can model the arterial wall properties, which depend on the arterial pressure. Simulations are performed in which vasoconstriction and vasodilation are modelled by varying the transmural pressure. As a result, we can observe the changes in the blood pressure for each arterial generation.
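A minimal sketch of the electromechanical analogy used here: one lumped segment with a resistor (peripheral resistance) and a capacitor (wall compliance), integrated with forward Euler. All parameter values and the pulsatile inflow below are invented for illustration, not taken from the paper.

```python
import math

def simulate_segment(R, C, q_in, p0=80.0, dt=1e-3, steps=2000):
    """Lumped RC segment: dP/dt = (Q_in - P/R) / C, forward Euler."""
    p = p0
    trace = []
    for k in range(steps):
        q = q_in(k * dt)
        p += dt * (q - p / R) / C   # capacitor stores what the resistor passes
        trace.append(p)
    return trace

def inflow(t):
    # Hypothetical pulsatile inflow: 1 Hz half-sine pulses
    return max(0.0, 5.0 * math.sin(2 * math.pi * t))

trace = simulate_segment(R=20.0, C=0.05, q_in=inflow)
```

Re-running with a smaller `R` (vasodilation) or a pressure-dependent `C` reproduces, qualitatively, the behaviour the abstract describes.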

19. Analytical and Numerical solutions of a nonlinear alcoholism model via variable-order fractional differential equations

Science.gov (United States)

Gómez-Aguilar, J. F.

2018-03-01

In this paper, we analyze an alcoholism model which involves the impact of Twitter via Liouville-Caputo and Atangana-Baleanu-Caputo fractional derivatives with constant- and variable-order. Two fractional mathematical models are considered, with and without delay. Special solutions using an iterative scheme via Laplace and Sumudu transforms were obtained. We studied the uniqueness and existence of the solutions employing the fixed point postulate. The generalized model with variable-order was solved numerically via the Adams method and the Adams-Bashforth-Moulton scheme. Stability and convergence of the numerical solutions are presented in detail. Numerical examples of the approximate solutions are provided to show that the numerical methods are computationally efficient. Therefore, by including both fractional derivatives and finite time delays in the alcoholism model studied, we believe we have established a more complete and more realistic description of the alcoholism model and of the spread of drinking.

20. Electromagnetic interference modeling and suppression techniques in variable-frequency drive systems

Science.gov (United States)

Yang, Le; Wang, Shuo; Feng, Jianghua

2017-11-01

Electromagnetic interference (EMI) causes electromechanical damage to the motors and degrades the reliability of variable-frequency drive (VFD) systems. Unlike fundamental frequency components in motor drive systems, high-frequency EMI noise, coupled with the parasitic parameters of the system, is difficult to analyze and reduce. In this article, EMI modeling techniques for different function units in a VFD system, including induction motors, motor bearings, and rectifier-inverters, are reviewed and evaluated in terms of applied frequency range, model parameterization, and model accuracy. The EMI models for the motors are categorized based on modeling techniques and model topologies. Motor bearing and shaft models are also reviewed, and techniques that are used to eliminate bearing current are evaluated. Modeling techniques for conventional rectifier-inverter systems are also summarized. EMI noise suppression techniques, including passive filters, Wheatstone bridge balance, active filters, and optimized modulation, are reviewed and compared based on the VFD system models.

1. Incorporating variability in simulations of seasonally forced phenology using integral projection models.

Science.gov (United States)

Goodsman, Devin W; Aukema, Brian H; McDowell, Nate G; Middleton, Richard S; Xu, Chonggang

2018-01-01

Phenology models are becoming increasingly important tools to accurately predict how climate change will impact the life histories of organisms. We propose a class of integral projection phenology models derived from stochastic individual-based models of insect development and demography. Our derivation, which is based on the rate summation concept, produces integral projection models that capture the effect of phenotypic rate variability on insect phenology, but which are typically more computationally frugal than equivalent individual-based phenology models. We demonstrate our approach using a temperature-dependent model of the demography of the mountain pine beetle (Dendroctonus ponderosae Hopkins), an insect that kills mature pine trees. This work illustrates how a wide range of stochastic phenology models can be reformulated as integral projection models. Due to their computational efficiency, these integral projection models are suitable for deployment in large-scale simulations, such as studies of altered pest distributions under climate change.
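The rate summation concept underlying this abstract can be illustrated briefly: an individual completes a development stage when its accumulated temperature-dependent rate reaches 1, and phenotypic rate variability spreads the emergence dates. The rate function, multiplier distribution, and temperature record below are invented placeholders, not the paper's calibrated model.

```python
import random

def development_rate(temp_c):
    # Hypothetical linear rate above a 5 C threshold (illustrative only)
    return max(0.0, 0.002 * (temp_c - 5.0))

def emergence_day(daily_temps, multiplier):
    """Rate summation: stage completes when accumulated rate reaches 1."""
    total = 0.0
    for day, t in enumerate(daily_temps, start=1):
        total += multiplier * development_rate(t)
        if total >= 1.0:
            return day
    return None  # stage not completed within the record

random.seed(1)
temps = [10 + 8 * (d % 30) / 30 for d in range(365)]  # toy temperature record
# Phenotypic variability: a lognormal rate multiplier per individual
days = [emergence_day(temps, random.lognormvariate(0, 0.15))
        for _ in range(200)]
```

An integral projection model replaces this per-individual loop with a projection of the whole development-stage distribution, which is what makes it cheaper at scale.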

2. Multilayer Finite-Element Model Application to Define the Bearing Structure Element Stress State of Launch Complexes

Directory of Open Access Journals (Sweden)

V. A. Zverev

2016-01-01

Full Text Available The objective of the article is to justify the selection of parameters for a multilayer finite element model of the bearing structure of a general-purpose launch complex unit. A typical design element of the launch complex unit, i.e. a mount for a hydraulic or pneumatic cylinder, block, etc., is under consideration. The mount consists of a cantilevered axis and an external structural cage. The most loaded element of the cage is the disk, to which a moment is transferred from the cantilevered axis due to the actuator effort acting on it. A finite element method was used to calculate the stress-strain state of the disk. Five models of the disk mount were created, differing only in the number of layers of finite elements through the thickness of the disk: one, three, five, eight, and fourteen layers. For each model, we calculated the equivalent stresses arising from the action of the test load. The disk models were built and solved using the MSC Nastran software. The article presents the results in a table showing the equivalent stresses in each of the multilayered models, and graphically illustrates the change of equivalent stresses through the thickness of the disk. Based on these results, we give advice on selecting the number of layers that achieves the desired accuracy with the lowest run time. In addition, it is concluded that multilayer models are needed when assessing the performance of structural elements whose surface-layer stresses exceed the allowable stress.

3. Spatiotemporal Variability of Lake Water Quality in the Context of Remote Sensing Models

Directory of Open Access Journals (Sweden)

Carly Hyatt Hansen

2017-04-01

Full Text Available This study demonstrates a number of methods for using field sampling and observed lake characteristics and patterns to improve techniques for development of algae remote sensing models and applications. As satellite and airborne sensors improve and their data are more readily available, applications of models to estimate water quality via remote sensing are becoming more practical for local water quality monitoring, particularly of surface algal conditions. Despite the increasing number of applications, there are significant concerns associated with remote sensing model development and application, several of which are addressed in this study. These concerns include: (1) selecting sensors which are suitable for the spatial and temporal variability in the water body; (2) determining appropriate uses of near-coincident data in empirical model calibration; and (3) recognizing potential limitations of remote sensing measurements which are biased toward surface and near-surface conditions. We address these issues in three lakes in the Great Salt Lake surface water system (namely the Great Salt Lake, Farmington Bay, and Utah Lake) through sampling at scales that are representative of commonly used sensors, repeated sampling, and sampling at both near-surface depths and throughout the water column. The variability across distances representative of the spatial resolutions of Landsat, SENTINEL-2 and MODIS sensors suggests that these sensors are appropriate for this lake system. We also use observed temporal variability in the system to evaluate sensors. These relationships proved to be complex, and observed temporal variability indicates the revisit time of Landsat may be problematic for detecting short events in some lakes, while it may be sufficient for other areas of the system with lower short-term variability. Temporal variability patterns in these lakes are also used to assess near-coincident data in empirical model development. Finally, relationships

4. Dominant Height Model for Site Classification of Eucalyptus grandis Incorporating Climatic Variables

Directory of Open Access Journals (Sweden)

José Roberto Soares Scolforo

2013-01-01

Full Text Available This study tested the effects of inserting climatic variables into a dominant height model for Eucalyptus grandis, which for site index classification is usually related to age alone. Dominant height data from stands between 1 and 12 years of age in the Southeast region of Brazil were used, as well as data from 19 automatic meteorological stations in the area. The Chapman-Richards model was chosen to represent dominant height as a function of age. To include the environmental variables, a modifier was added to the asymptote of the model. The asymptote was chosen since this parameter is responsible for the maximum value which the dominant height can reach. Of the four environmental variables most responsible for database variation, the two with the highest correlation to the mean annual increment in dominant height (mean monthly precipitation and temperature) were selected to compose the asymptote modifier. Model validation showed a gain in precision of 33% (reduction of the standard error of estimate) when climatic variables were inserted in the model. Possible applications of the method include the estimation of site capacity in regions lacking any planting history, as well as updating forest inventory data based on past climate regimes.
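The idea of a climate-modified asymptote can be shown with a short sketch of the Chapman-Richards curve, H(t) = A(1 - e^(-kt))^p, where A is scaled by climate covariates. All coefficient values below are invented for illustration; the paper's fitted modifier is not reproduced here.

```python
import math

def dominant_height(age, precip_mm, temp_c,
                    a0=35.0, b_p=0.004, b_t=0.3, k=0.25, p=1.4):
    """Chapman-Richards curve with a climate-scaled asymptote (toy values)."""
    asymptote = a0 * (1.0 + b_p * (precip_mm - 120)
                      + b_t * (temp_c - 20) / 20)
    return asymptote * (1.0 - math.exp(-k * age)) ** p

# Wetter, warmer site reaches a greater dominant height at the same age
h6_wet = dominant_height(6.0, precip_mm=150, temp_c=21)
h6_dry = dominant_height(6.0, precip_mm=90, temp_c=19)
```

The modifier only rescales the asymptote, so the shape of the growth curve in age is preserved while site capacity shifts with climate, which is the mechanism the abstract describes.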

5. Development and evaluation of a stochastic daily rainfall model with long-term variability

Science.gov (United States)

Kamal Chowdhury, A. F. M.; Lockart, Natalie; Willgoose, Garry; Kuczera, George; Kiem, Anthony S.; Parana Manage, Nadeeka

2017-12-01

The primary objective of this study is to develop a stochastic rainfall generation model that can match not only the short resolution (daily) variability but also the longer resolution (monthly to multiyear) variability of observed rainfall. This study has developed a Markov chain (MC) model, which uses a two-state MC process with two parameters (wet-to-wet and dry-to-dry transition probabilities) to simulate rainfall occurrence and a gamma distribution with two parameters (mean and standard deviation of wet day rainfall) to simulate wet day rainfall depths. Starting with the traditional MC-gamma model with deterministic parameters, this study has developed and assessed four other variants of the MC-gamma model with different parameterisations. The key finding is that if the parameters of the gamma distribution are randomly sampled each year from fitted distributions rather than fixed parameters with time, the variability of rainfall depths at both short and longer temporal resolutions can be preserved, while the variability of wet periods (i.e. number of wet days and mean length of wet spell) can be preserved by decadally varied MC parameters. This is a straightforward enhancement to the traditional simplest MC model and is both objective and parsimonious.
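The model variant the abstract highlights, a two-state Markov chain for occurrence with gamma wet-day depths whose parameters are re-sampled each year, can be sketched as follows. The transition probabilities and the yearly parameter ranges are illustrative, not calibrated values from the study.

```python
import random

def simulate_daily_rain(years, p_ww=0.55, p_dd=0.80, seed=42):
    """Two-state Markov chain occurrence + gamma depths on wet days.

    Gamma parameters (mean, sd of wet-day rainfall) are re-sampled each
    simulated year, as in the variant the abstract recommends.
    """
    rng = random.Random(seed)
    series = []
    wet = False
    for _ in range(years):
        mean = rng.uniform(4.0, 8.0)          # yearly wet-day mean (mm)
        sd = rng.uniform(3.0, 9.0)            # yearly wet-day sd (mm)
        shape = (mean / sd) ** 2              # gamma shape from mean/sd
        scale = sd ** 2 / mean                # gamma scale from mean/sd
        for _ in range(365):
            u = rng.random()
            wet = (u < p_ww) if wet else (u >= p_dd)
            series.append(rng.gammavariate(shape, scale) if wet else 0.0)
    return series

rain = simulate_daily_rain(10)
wet_days = [r for r in rain if r > 0]
```

Because the gamma parameters vary between years, the simulated series carries year-to-year variance in rainfall depths that a fixed-parameter MC-gamma model would understate.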

6. Impact of variable seawater conductivity on motional induction simulated with an ocean general circulation model

Science.gov (United States)

Irrgang, C.; Saynisch, J.; Thomas, M.

2016-01-01

Carrying high concentrations of dissolved salt, ocean water is a good electrical conductor. As seawater flows through the Earth's ambient geomagnetic field, electric fields are generated, which in turn induce secondary magnetic fields. In current models for ocean-induced magnetic fields, a realistic consideration of seawater conductivity is often neglected, and its effect on the variability of the ocean-induced magnetic field is unknown. To model magnetic fields that are induced by non-tidal global ocean currents, an electromagnetic induction model is implemented into the Ocean Model for Circulation and Tides (OMCT). This provides the opportunity not only to model ocean-induced magnetic signals but also to assess the impact of oceanographic phenomena on the induction process. In this paper, the sensitivity of the induction process to spatial and temporal variations in seawater conductivity is investigated. It is shown that assuming an ocean-wide uniform conductivity is insufficient to accurately capture the temporal variability of the magnetic signal. Using instead a realistic global seawater conductivity distribution increases the temporal variability of the magnetic field by up to 45 %. Especially vertical gradients in seawater conductivity prove to be a key factor for the variability of the ocean-induced magnetic field. However, temporal variations of seawater conductivity only marginally affect the magnetic signal.

7. A Variable Stiffness Analysis Model for Large Complex Thin-Walled Guide Rail

Directory of Open Access Journals (Sweden)

Wang Xiaolong

2016-01-01

Full Text Available Large complex thin-walled guide rails have complicated structure and non-uniform, low rigidity. Traditional cutting simulations are time-consuming due to the huge computation required, especially for large workpieces. To solve these problems, a more efficient variable stiffness analysis model is proposed, which can obtain quantitative stiffness values of the machined surface. By applying simulated cutting forces at sampling points using the finite element analysis software ABAQUS, the single-direction variable stiffness rule can be obtained. A variable stiffness matrix is proposed by analyzing the multi-direction coupled variable stiffness rule. Combined with the three-direction cutting force values, the reasonability of existing processing parameters can be verified and optimized cutting parameters can be designed.

8. Changes in Southern Hemisphere circulation variability in climate change modelling experiments

International Nuclear Information System (INIS)

Grainger, Simon; Frederiksen, Carsten; Zheng, Xiaogu

2007-01-01

Full text: The seasonal mean of a climate variable can be considered as a statistical random variable, consisting of signal and noise components (Madden 1976). The noise component consists of internal intraseasonal variability, and is not predictable on time-scales of a season or more ahead. The signal consists of slowly varying external and internal variability, and is potentially predictable on seasonal time-scales. The method of Zheng and Frederiksen (2004) has been applied to monthly time series of 500hPa Geopotential height from models submitted to the Coupled Model Intercomparison Project (CMIP3) experiment to obtain covariance matrices of the intraseasonal and slow components of covariability for summer and winter. The Empirical Orthogonal Functions (EOFs) of the intraseasonal and slow covariance matrices for the second half of the 20th century are compared with those observed by Frederiksen and Zheng (2007). The leading EOF in summer and winter for both the intraseasonal and slow components of covariability is the Southern Annular Mode (see, e.g. Kiladis and Mo 1998). This is generally reproduced by the CMIP3 models, although with different variance amounts. The observed secondary intraseasonal covariability modes of wave 4 patterns in summer and wave 3 or blocking in winter are also generally seen in the models, although the actual spatial pattern is different. For the slow covariability, the models are less successful in reproducing the two observed ENSO modes, with generally only one of them being represented among the leading EOFs. However, most models reproduce the observed South Pacific wave pattern. The intraseasonal and slow covariance matrices of 500hPa geopotential height under three climate change scenarios are also analysed and compared with those found for the second half of the 20th century. Through aggregating the results from a number of CMIP3 models, a consensus estimate of the changes in Southern Hemisphere variability, and their

9. Assessment of published models and prognostic variables in epithelial ovarian cancer at Mayo Clinic.

Science.gov (United States)

Wahner Hendrickson, Andrea E; Hawthorne, Kieran M; Goode, Ellen L; Kalli, Kimberly R; Goergen, Krista M; Bakkum-Gamez, Jamie N; Cliby, William A; Keeney, Gary L; Visscher, Daniel W; Tarabishy, Yaman; Oberg, Ann L; Hartmann, Lynn C; Maurer, Matthew J

2015-04-01

Epithelial ovarian cancer (EOC) is an aggressive disease in which first-line therapy consists of a surgical staging/debulking procedure and platinum-based chemotherapy. There is significant interest in clinically applicable, easy to use prognostic tools to estimate risk of recurrence and overall survival. In this study we used a large prospectively collected cohort of women with EOC to validate currently published models and assess prognostic variables. Women with invasive ovarian, peritoneal, or fallopian tube cancer diagnosed between 2000 and 2011 and prospectively enrolled into the Mayo Clinic Ovarian Cancer registry were identified. Demographics and known prognostic markers as well as epidemiologic exposure variables were abstracted from the medical record and collected via questionnaire. Six previously published models of overall and recurrence-free survival were assessed for external validity. In addition, predictors of outcome were assessed in our dataset. Previously published models validated with a range of c-statistics (0.587-0.827), though application of models containing variables which are not part of routine practice was somewhat limited by missing data; utilization of all applicable models and comparison of results are suggested. Examination of prognostic variables identified only the presence of ascites and ASA score to be independent predictors of prognosis in our dataset, albeit with marginal gain in prognostic information, after accounting for stage and debulking. Existing prognostic models for newly diagnosed EOC showed acceptable calibration in our cohort for clinical application. However, modeling of prospective variables in our dataset reiterates that stage and debulking remain the most important predictors of prognosis in this setting. Copyright © 2015 Elsevier Inc. All rights reserved.

10. Correlation Analysis of Water Demand and Predictive Variables for Short-Term Forecasting Models

Directory of Open Access Journals (Sweden)

B. M. Brentan

2017-01-01

Full Text Available Operational and economic aspects of water distribution make water demand forecasting paramount for water distribution systems (WDSs) management. However, water demand introduces high levels of uncertainty in WDS hydraulic models. As a result, there is growing interest in developing accurate methodologies for water demand forecasting. Several mathematical models can serve this purpose. One crucial aspect is the use of suitable predictive variables. The most used predictive variables involve weather and social aspects. To improve the interrelation knowledge between water demand and various predictive variables, this study applies three algorithms, namely, classical Principal Component Analysis (PCA) and powerful machine learning algorithms such as Self-Organizing Maps (SOMs) and Random Forest (RF). We show that these last algorithms help corroborate the results found by PCA, while they are able to unveil features hidden to PCA, due to their ability to cope with nonlinearities. This paper presents a correlation study of three district metered areas (DMAs) from Franca, a Brazilian city, exploring weather and social variables to improve the knowledge of residential demand for water. For the three DMAs, temperature, relative humidity, and hour of the day appear to be the most important predictive variables to build an accurate regression model.

11. Bayesian model accounting for within-class biological variability in Serial Analysis of Gene Expression (SAGE

Directory of Open Access Journals (Sweden)

Brentani Helena

2004-08-01

Full Text Available Abstract Background An important challenge for transcript counting methods such as Serial Analysis of Gene Expression (SAGE), "Digital Northern" or Massively Parallel Signature Sequencing (MPSS), is to carry out statistical analyses that account for the within-class variability, i.e., variability due to the intrinsic biological differences among sampled individuals of the same class, and not only variability due to technical sampling error. Results We introduce a Bayesian model that accounts for the within-class variability by means of mixture distribution. We show that the previously available approaches of aggregation in pools ("pseudo-libraries") and the Beta-Binomial model, are particular cases of the mixture model. We illustrate our method with a brain tumor vs. normal comparison using SAGE data from public databases. We show examples of tags regarded as differentially expressed with high significance if the within-class variability is ignored, but clearly not so significant if one accounts for it. Conclusion Using available information about biological replicates, one can transform a list of candidate transcripts showing differential expression to a more reliable one. Our method is freely available, under GPL/GNU copyleft, through a user friendly web-based on-line tool or as R language scripts at supplemental web-site.

12. Bayesian model accounting for within-class biological variability in Serial Analysis of Gene Expression (SAGE).

Science.gov (United States)

Vêncio, Ricardo Z N; Brentani, Helena; Patrão, Diogo F C; Pereira, Carlos A B

2004-08-31

An important challenge for transcript counting methods such as Serial Analysis of Gene Expression (SAGE), "Digital Northern" or Massively Parallel Signature Sequencing (MPSS), is to carry out statistical analyses that account for the within-class variability, i.e., variability due to the intrinsic biological differences among sampled individuals of the same class, and not only variability due to technical sampling error. We introduce a Bayesian model that accounts for the within-class variability by means of mixture distribution. We show that the previously available approaches of aggregation in pools ("pseudo-libraries") and the Beta-Binomial model, are particular cases of the mixture model. We illustrate our method with a brain tumor vs. normal comparison using SAGE data from public databases. We show examples of tags regarded as differentially expressed with high significance if the within-class variability is ignored, but clearly not so significant if one accounts for it. Using available information about biological replicates, one can transform a list of candidate transcripts showing differential expression to a more reliable one. Our method is freely available, under GPL/GNU copyleft, through a user friendly web-based on-line tool or as R language scripts at supplemental web-site.
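Why within-class (between-library) variability matters can be shown with a toy calculation: pooling replicate libraries into a single proportion hides disagreement among them. The tag counts below are invented for illustration and are not SAGE data from the paper.

```python
# (tag count, library size) for replicate libraries of two classes (toy data)
tumor = [(18, 50000), (2, 48000), (25, 51000)]
normal = [(5, 50000), (6, 49000), (4, 52000)]

def pooled_proportion(libs):
    """Proportion after aggregating all libraries into one pool."""
    c = sum(k for k, n in libs)
    n = sum(n for k, n in libs)
    return c / n

def between_library_sd(libs):
    """Sample SD of per-library tag proportions (within-class variability)."""
    props = [k / n for k, n in libs]
    m = sum(props) / len(props)
    return (sum((p - m) ** 2 for p in props) / (len(props) - 1)) ** 0.5

sd_t = between_library_sd(tumor)
sd_n = between_library_sd(normal)
```

The pooled tumor proportion looks clearly elevated, yet the tumor libraries disagree strongly with each other; a mixture (or Beta-Binomial) model absorbs that extra spread instead of treating it as evidence of differential expression.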

13. Polychotomization of continuous variables in regression models based on the overall C index

Directory of Open Access Journals (Sweden)

Bax Leon

2006-12-01

Full Text Available Abstract Background When developing multivariable regression models for diagnosis or prognosis, continuous independent variables can be categorized to make a prediction table instead of a prediction formula. Although many methods have been proposed to dichotomize prognostic variables, to date there has been no integrated method for polychotomization. The latter is necessary when dichotomization results in too much loss of information or when central values refer to normal states and more dispersed values refer to less preferable states, a situation that is not unusual in medical settings (e.g. body temperature, blood pressure). The goal of our study was to develop a theoretical and practical method for polychotomization. Methods We used the overall discrimination index C, introduced by Harrell, as a measure of the predictive ability of an independent regressor variable and derived a method for polychotomization mathematically. Since the naïve application of our method, like some existing methods, gives rise to positive bias, we developed a parametric method that minimizes this bias and assessed its performance by the use of Monte Carlo simulation. Results The overall C is closely related to the area under the ROC curve and the produced di(poly)chotomized variable's predictive performance is comparable to the original continuous variable. The simulation shows that the parametric method is essentially unbiased for both the estimates of performance and the cutoff points. Application of our method to the predictor variables of a previous study on rhabdomyolysis shows that it can be used to make probability profile tables that are applicable to the diagnosis or prognosis of individual patient status. Conclusion We propose a polychotomization (including dichotomization) method for independent continuous variables in regression models based on the overall discrimination index C and clarified its meaning mathematically. To avoid positive bias in
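The dichotomization case can be sketched directly: for a binary outcome, the C index of a dichotomized variable equals its AUC, so one can search cutoffs for the value that maximizes C. This naïve exhaustive search is exactly the kind of procedure the abstract notes is positively biased; the data below are invented.

```python
def c_index_binary(pred, outcome):
    """Harrell's C for a binary outcome: concordance over case-control pairs."""
    pairs = concordant = ties = 0
    for i in range(len(pred)):
        for j in range(len(pred)):
            if outcome[i] == 1 and outcome[j] == 0:
                pairs += 1
                if pred[i] > pred[j]:
                    concordant += 1
                elif pred[i] == pred[j]:
                    ties += 1
    return (concordant + 0.5 * ties) / pairs

def best_cutoff(x, outcome):
    """Naive search: the cutoff whose dichotomized variable maximizes C.

    NOTE: selecting the cutoff on the same data overestimates C (positive
    bias), which is what the paper's parametric method corrects.
    """
    best = None
    for cut in sorted(set(x)):
        dich = [1 if v >= cut else 0 for v in x]
        c = c_index_binary(dich, outcome)
        if best is None or c > best[1]:
            best = (cut, c)
    return best

x = [36.5, 37.0, 37.2, 38.0, 38.5, 39.1, 39.5, 40.0]  # toy temperatures
y = [0, 0, 0, 0, 1, 1, 1, 1]                          # toy binary outcome
cut, c = best_cutoff(x, y)
```

Polychotomization generalizes this to several cutoffs at once, trading table size against the information lost by categorization.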

14. Rapid Estimation Method for State of Charge of Lithium-Ion Battery Based on Fractional Continual Variable Order Model

Directory of Open Access Journals (Sweden)

Xin Lu

2018-03-01

Full Text Available In recent years, the fractional order model has been employed for state of charge (SOC) estimation. The non-integer differentiation order is expressed as a function of recursive factors defining the fractality of charge distribution on porous electrodes. The battery SOC affects the fractal dimension of charge distribution; therefore the order of the fractional order model varies with the SOC under the same conditions. This paper proposes a new method to estimate the SOC. A fractional continuous variable order model is used to characterize the fractal morphology of charge distribution. The order identification results showed that there is a stable monotonic relationship between the fractional order and the SOC after the battery's internal electrochemical reaction reaches equilibrium. This feature makes the proposed model particularly suitable for SOC estimation when the battery is in the resting state. Moreover, a fast iterative method based on the proposed model is introduced for SOC estimation. The experimental results showed that the proposed iterative method can quickly estimate the SOC within a few iterations while maintaining high estimation accuracy.

15. Computation of geographic variables for air pollution prediction models in South Korea.

Science.gov (United States)

Eum, Youngseob; Song, Insang; Kim, Hwan-Cheol; Leem, Jong-Han; Kim, Sun-Young

2015-01-01

Recent cohort studies have relied on exposure prediction models to estimate individual-level air pollution concentrations because individual air pollution measurements are not available for cohort locations. For such prediction models, geographic variables related to pollution sources are important inputs. We demonstrated the computation process of geographic variables mostly recorded in 2010 at regulatory air pollution monitoring sites in South Korea. On the basis of previous studies, we finalized a list of 313 geographic variables related to air pollution sources in eight categories including traffic, demographic characteristics, land use, transportation facilities, physical geography, emissions, vegetation, and altitude. We then obtained data from different sources such as the Statistics Geographic Information Service and Korean Transport Database. After integrating all available data to a single database by matching coordinate systems and converting non-spatial data to spatial data, we computed geographic variables at 294 regulatory monitoring sites in South Korea. The data integration and variable computation were performed by using ArcGIS version 10.2 (ESRI Inc., Redlands, CA, USA). For traffic, we computed the distances to the nearest roads and the sums of road lengths within different sizes of circular buffers. In addition, we calculated the numbers of residents, households, housing buildings, companies, and employees within the buffers. The percentages of areas for different types of land use compared to total areas were calculated within the buffers. For transportation facilities and physical geography, we computed the distances to the closest public transportation depots and the boundary lines. The vegetation index and altitude were estimated at a given location by using satellite data. The summary statistics of geographic variables in Seoul across monitoring sites showed different patterns between urban background and urban roadside sites. This study
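One of the buffer-based covariates described here, the sum of road lengths within circular buffers around a site, can be sketched without GIS software on a flat toy geometry. Roads are modelled as straight segments in metre coordinates and segment length inside a buffer is approximated by sampling; all coordinates are invented.

```python
import math

def seg_length_inside(p, r, a, b, steps=1000):
    """Approximate length of segment a-b within radius r of point p."""
    (ax, ay), (bx, by) = a, b
    total = math.hypot(bx - ax, by - ay)
    inside = sum(
        1 for k in range(steps)
        if math.hypot(ax + (bx - ax) * (k + 0.5) / steps - p[0],
                      ay + (by - ay) * (k + 0.5) / steps - p[1]) <= r
    )
    return total * inside / steps

site = (0.0, 0.0)  # a hypothetical monitoring site
roads = [((-500.0, 100.0), (500.0, 100.0)),    # east-west road, 100 m north
         ((-50.0, -400.0), (-50.0, 400.0))]    # north-south road, 50 m west
# Road-length sums within 100 m, 300 m, and 500 m circular buffers
buffer_sums = {r: sum(seg_length_inside(site, r, a, b) for a, b in roads)
               for r in (100, 300, 500)}
```

Production pipelines compute the same quantity with exact geometric intersection (e.g. buffer-and-clip in a GIS), but the covariate, road length per buffer radius, is identical.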

16. Computation of geographic variables for air pollution prediction models in South Korea

Directory of Open Access Journals (Sweden)

Youngseob Eum

2015-10-01

Full Text Available Recent cohort studies have relied on exposure prediction models to estimate individual-level air pollution concentrations because individual air pollution measurements are not available for cohort locations. For such prediction models, geographic variables related to pollution sources are important inputs. We demonstrated the computation process of geographic variables mostly recorded in 2010 at regulatory air pollution monitoring sites in South Korea. On the basis of previous studies, we finalized a list of 313 geographic variables related to air pollution sources in eight categories including traffic, demographic characteristics, land use, transportation facilities, physical geography, emissions, vegetation, and altitude. We then obtained data from different sources such as the Statistics Geographic Information Service and Korean Transport Database. After integrating all available data to a single database by matching coordinate systems and converting non-spatial data to spatial data, we computed geographic variables at 294 regulatory monitoring sites in South Korea. The data integration and variable computation were performed by using ArcGIS version 10.2 (ESRI Inc., Redlands, CA, USA). For traffic, we computed the distances to the nearest roads and the sums of road lengths within different sizes of circular buffers. In addition, we calculated the numbers of residents, households, housing buildings, companies, and employees within the buffers. The percentages of areas for different types of land use compared to total areas were calculated within the buffers. For transportation facilities and physical geography, we computed the distances to the closest public transportation depots and the boundary lines. The vegetation index and altitude were estimated at a given location by using satellite data. The summary statistics of geographic variables in Seoul across monitoring sites showed different patterns between urban background and urban roadside

17. Variable-density numerical modeling of seawater intrusion in coastal aquifer with well-developed conduits

Science.gov (United States)

Xu, Z.; Hu, B. X.

2015-12-01

Karst aquifers are an important drinking water supply for nearly 25% of the world's population. Well-developed subsurface conduit systems are usually found in such aquifers, forming a dual-permeability system. The hydraulic characteristics of non-laminar flow in conduits can differ significantly from Darcian flow in the porous medium; therefore, a hybrid model with different governing equations is necessary in numerical modeling of karst hydrogeology. On the other hand, seawater intrusion has been observed and studied for several decades, and has become a worldwide problem due to groundwater over-pumping and rising sea levels. The density difference between freshwater and seawater is recognized as the major factor governing the movement of the two fluids in coastal aquifers. Several models have been developed to simulate groundwater flow in karst aquifers, but they can hardly describe seawater intrusion through conduits without coupling variable-density flow and solute transport. In this study, a numerical SEAWAT model has been developed to simulate variable-density flow and transport in a heterogeneous karst aquifer. High-density seawater is verified to intrude further inland through the high-permeability conduit network than through the porous medium. The numerical model also predicts the effect of different scenarios on seawater intrusion in the coastal karst aquifer, such as rising sea level, tidal stages, and freshwater discharge. A series of local and global uncertainty analyses were performed to evaluate the sensitivity to hydraulic conductivity, porosity, groundwater pumping, sea level, salinity, and dispersivity. Heterogeneous conduit and porous-medium hydraulic characteristics play an important role in groundwater flow and solute transport simulation. Meanwhile, another hybrid model, the VDFST-CFP model, is currently under development to couple turbulent conduit flow and variable-density groundwater flow in porous media, which provides a new method and better description in

18. Statistical modeling methods to analyze the impacts of multiunit process variability on critical quality attributes of Chinese herbal medicine tablets

Directory of Open Access Journals (Sweden)

Sun F

2016-11-01

Full Text Available Fei Sun,1 Bing Xu,1,2 Yi Zhang,1 Shengyun Dai,1 Chan Yang,1 Xianglong Cui,1 Xinyuan Shi,1,2 Yanjiang Qiao1,2 1Research Center of Traditional Chinese Medicine Information Engineering, School of Chinese Materia Medica, Beijing University of Chinese Medicine, 2Key Laboratory of Manufacture Process Control and Quality Evaluation of Chinese Medicine, Beijing, People’s Republic of China Abstract: The quality of Chinese herbal medicine tablets suffers from batch-to-batch variability due to a lack of manufacturing process understanding. In this paper, the Panax notoginseng saponins (PNS) immediate release tablet was taken as the research subject. By defining the dissolution of five active pharmaceutical ingredients and the tablet tensile strength as critical quality attributes (CQAs), the influences of both the manipulated process parameters, introduced by an orthogonal experimental design, and the intermediate granules’ properties on the CQAs were fully investigated using different chemometric methods, such as partial least squares, orthogonal projection to latent structures, and multiblock partial least squares (MBPLS). By analyzing the loadings plots and the variable importance in the projection (VIP) indexes, the granule particle sizes and the minimal punch tip separation distance in tableting were identified as critical process parameters. Additionally, the MBPLS model suggested that the lubrication time in the final blending was also important in predicting tablet quality attributes. From the calculated block importance in the projection indexes, the tableting unit was confirmed to be the critical process unit of the manufacturing line. The results demonstrated that the combinatorial use of different multivariate modeling methods could help in understanding the complex process relationships as a whole. The output of this study can then be used to define a control strategy to improve the quality of the PNS immediate release tablet. Keywords: Panax

19. Modelling and control of variable speed wind turbines for power system studies

DEFF Research Database (Denmark)

Michalke, Gabriele; Hansen, Anca Daniela

2010-01-01

Modern wind turbines are predominantly variable speed wind turbines with a power electronic interface. Emphasis in this paper is therefore on the modelling and control issues of these wind turbine concepts and especially on their impact on the power system. The models and control are developed and implemented in the power system simulation tool DIgSILENT. Important issues like the fault ride-through and grid support capabilities of these wind turbine concepts are addressed. The paper reveals that advanced control of variable speed wind turbines can improve power system stability. Finally, it is shown that wind parks consisting of variable speed wind turbines can help nearby connected fixed speed wind turbines to ride through grid faults. Copyright © 2009 John Wiley & Sons, Ltd.

20. Uncertainty and variability in computational and mathematical models of cardiac physiology.

Science.gov (United States)

Mirams, Gary R; Pathmanathan, Pras; Gray, Richard A; Challenor, Peter; Clayton, Richard H

2016-12-01

Mathematical and computational models of cardiac physiology have been an integral component of cardiac electrophysiology since its inception, and are collectively known as the Cardiac Physiome. We identify and classify the numerous sources of variability and uncertainty in model formulation, parameters and other inputs that arise from both natural variation in experimental data and lack of knowledge. The impact of uncertainty on the outputs of Cardiac Physiome models is not well understood, and this limits their utility as clinical tools. We argue that incorporating variability and uncertainty should be a high priority for the future of the Cardiac Physiome. We suggest investigating the adoption of approaches developed in other areas of science and engineering while recognising unique challenges for the Cardiac Physiome; it is likely that novel methods will be necessary that require engagement with the mathematics and statistics community. The Cardiac Physiome effort is one of the most mature and successful applications of mathematical and computational modelling for describing and advancing the understanding of physiology. After five decades of development, physiological cardiac models are poised to realise the promise of translational research via clinical applications such as drug development and patient-specific approaches as well as ablation, cardiac resynchronisation and contractility modulation therapies. For models to be included as a vital component of the decision process in safety-critical applications, rigorous assessment of model credibility will be required. This White Paper describes one aspect of this process by identifying and classifying sources of variability and uncertainty in models as well as their implications for the application and development of cardiac models. We stress the need to understand and quantify the sources of variability and uncertainty in model inputs, and the impact of model structure and complexity and their consequences for

1. Rainfall variability over southern Africa: an overview of current research using satellite and climate model data

Science.gov (United States)

Williams, C.; Kniveton, D.; Layberry, R.

2009-04-01

It is increasingly accepted that any possible climate change will not only have an influence on mean climate but may also significantly alter climatic variability. A change in the distribution and magnitude of extreme rainfall events (associated with changing variability), such as droughts or flooding, may have a far greater impact on human and natural systems than a changing mean. This issue is of particular importance for environmentally vulnerable regions such as southern Africa. The subcontinent is considered especially vulnerable to and ill-equipped (in terms of adaptation) for extreme events, due to a number of factors including extensive poverty, famine, disease and political instability. Rainfall variability is a function of scale, so high spatial and temporal resolution data are preferred to identify extreme events and accurately predict future variability. In this research, satellite-derived rainfall data are used as a basis for undertaking model experiments using a state-of-the-art climate model, run at both high and low spatial resolution. Once the model's ability to reproduce extremes has been assessed, idealised regions of sea surface temperature (SST) anomalies are used to force the model, with the overall aim of investigating the ways in which SST anomalies influence rainfall extremes over southern Africa. In this paper, a brief overview is given of the authors' research to date, pertaining to southern African rainfall. This covers (i) a description of present-day rainfall variability over southern Africa; (ii) a comparison of model simulated daily rainfall with the satellite-derived dataset; (iii) results from sensitivity testing of the model's domain size; and (iv) results from the idealised SST experiments.

2. Separation of variables in anisotropic models and non-skew-symmetric elliptic r-matrix

Science.gov (United States)

Skrypnyk, Taras

2017-05-01

We solve the problem of separation of variables for classical integrable Hamiltonian systems possessing Lax matrices satisfying linear Poisson brackets with the non-skew-symmetric, non-dynamical elliptic so(3)⊗so(3)-valued classical r-matrix. Using the corresponding Lax matrices, we present a general form of the "separating functions" B(u) and A(u) that generate the coordinates and the momenta of separation for the associated models. We consider several examples and perform the separation of variables for the classical anisotropic Euler top, the Steklov-Lyapunov model of the motion of an anisotropic rigid body in liquid, the two-spin generalized Gaudin model and the "spin" generalization of the Steklov-Lyapunov model.
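The role of such separating functions can be sketched with the generic Sklyanin-type scheme (a textbook outline, not the specific elliptic r-matrix formulas of this paper): the zeros of B(u) define the coordinates of separation, and A(u) evaluated at those zeros supplies the conjugate momenta,

```latex
% Generic separation-of-variables scheme (illustrative sketch):
% coordinates x_i are the zeros of B(u), momenta the values of A(u) there.
\[
  B(x_i) = 0, \qquad p_i = A(x_i), \qquad
  \{x_i, x_j\} = \{p_i, p_j\} = 0, \quad \{p_i, x_j\} = \delta_{ij}.
\]
% Each canonical pair then satisfies a separation equation obtained from the
% spectral curve of the Lax matrix L(u):
\[
  \det\bigl( L(x_i) - p_i \, \mathbb{1} \bigr) = 0, \qquad i = 1, \dots, n.
\]
```

Each separation equation involves only one pair (x_i, p_i), which is what reduces the multidimensional integration problem to a collection of one-dimensional ones.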

3. Variable length and context-dependent HMM letter form models for Arabic handwritten word recognition

Science.gov (United States)

Bianne-Bernard, Anne-Laure; Menasri, Fares; Likforman-Sulem, Laurence; Mokbel, Chafic; Kermorvant, Christopher

2012-01-01

We present in this paper an HMM-based recognizer for unconstrained Arabic handwritten words. The recognizer is a context-dependent HMM which considers variable topology and contextual information for better modeling of writing units. We propose an algorithm to adapt the topology of each HMM to the character to be modeled. For modeling the contextual units, a state-tying process based on decision tree clustering is introduced, which significantly reduces the number of parameters. Decision trees are built according to a set of expert-based questions on how characters are written. Questions are divided into global questions, yielding larger clusters, and precise questions, yielding smaller ones. We apply this modeling to the recognition of Arabic handwritten words. Experiments conducted on the OpenHaRT2010 database show that variable length topology and contextual information significantly improve the recognition rate.

4. Influence of main variables modifications on accident transient based on AP1000-like MELCOR model

Science.gov (United States)

Malicki, M.; Pieńkowski, L.

2016-09-01

Analysis of Severe Accidents (SA) is one of the most important parts of nuclear safety research. MELCOR is a validated system code for severe accident analysis, and as such it was used to obtain the presented results. The analysed AP1000 model is based on publicly available data only. A sensitivity analysis was performed for the main variables of the primary reactor coolant system to find their influence on the accident transient. This kind of analysis helps to find weak points of the reactor design and of the model itself. The performed analysis is a basis for the creation of a generic Small Modular Reactor (SMR) model, which will be the next step of the investigation, aiming to estimate the safety level of different reactors. The results clearly help to establish a range of boundary conditions for the main variables in the future SMR model.

5. An introduction to latent variable growth curve modeling concepts, issues, and application

CERN Document Server

Duncan, Terry E; Strycker, Lisa A

2013-01-01

This book provides a comprehensive introduction to latent variable growth curve modeling (LGM) for analyzing repeated measures. It presents the statistical basis for LGM and its various methodological extensions, including a number of practical examples of its use. It is designed to take advantage of the reader's familiarity with analysis of variance and structural equation modeling (SEM) in introducing LGM techniques. Sample data, syntax, input and output are provided for EQS, Amos, LISREL, and Mplus on the book's CD. Throughout the book, the authors present a variety of LGM techniques that are useful for many different research designs, and numerous figures provide helpful diagrams of the examples. Updated throughout, the second edition features three new chapters: growth modeling with ordered categorical variables, growth mixture modeling, and pooled interrupted time series LGM approaches. Following a new organization, the book now covers the development of the LGM, followed by chapters on multiple-group is...
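The core LGM idea, that each subject carries a latent intercept and slope across repeated measures, can be illustrated outside SEM software. The sketch below simulates four measurement waves and recovers the latent means with per-subject least squares (a crude stand-in for the latent-factor estimation that EQS, Amos, LISREL, or Mplus would perform; all numbers are invented):

```python
import numpy as np

# Simulate a linear growth curve: each of n subjects has a latent intercept
# (mean 10, SD 2) and latent slope (mean 1.5, SD 0.5) over four waves,
# observed with measurement noise.
rng = np.random.default_rng(2)
n, waves = 300, 4
t = np.arange(waves)                       # time scores 0..3
intercepts = rng.normal(10.0, 2.0, n)
slopes = rng.normal(1.5, 0.5, n)
Y = intercepts[:, None] + slopes[:, None] * t + rng.normal(0.0, 1.0, (n, waves))

# Per-subject OLS on the common design [1, t]; averaging the per-subject
# coefficients approximates the latent intercept and slope means.
D = np.column_stack([np.ones(waves), t])
coefs = np.linalg.lstsq(D, Y.T, rcond=None)[0]   # shape (2, n)
print(np.round(coefs.mean(axis=1), 1))           # approximately [10.0, 1.5]
```

A full LGM additionally models the variances and covariance of the latent intercept and slope as factor (co)variances, which this two-stage shortcut estimates only crudely.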

6. Joint Bayesian variable and graph selection for regression models with network-structured predictors

Science.gov (United States)

Peterson, C. B.; Stingo, F. C.; Vannucci, M.

2015-01-01

In this work, we develop a Bayesian approach to perform selection of predictors that are linked within a network. We achieve this by combining a sparse regression model relating the predictors to a response variable with a graphical model describing conditional dependencies among the predictors. The proposed method is well-suited for genomic applications since it allows the identification of pathways of functionally related genes or proteins which impact an outcome of interest. In contrast to previous approaches for network-guided variable selection, we infer the network among predictors using a Gaussian graphical model and do not assume that network information is available a priori. We demonstrate that our method outperforms existing methods in identifying network-structured predictors in simulation settings, and illustrate our proposed model with an application to inference of proteins relevant to glioblastoma survival. PMID:26514925
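The link between variable selection and the predictor network can be illustrated with the kind of Markov random field prior commonly used for network-guided selection (the functional form and the parameters a, b below are a generic sketch, not necessarily the exact prior of the paper):

```python
import numpy as np

# MRF prior on inclusion indicators gamma in {0,1}^p, given a predictor
# network with adjacency matrix adj: selecting connected predictors together
# is rewarded by the b term. Parameters a (sparsity) and b (smoothness)
# are illustrative.

def mrf_log_prior(gamma: np.ndarray, adj: np.ndarray, a: float, b: float) -> float:
    """Unnormalised log prior: a * (#selected) + b * (#selected neighbour pairs)."""
    g = gamma.astype(float)
    return a * g.sum() + b * (g @ adj @ g) / 2.0

# 4 predictors with edges 0-1 and 2-3
adj = np.zeros((4, 4))
adj[0, 1] = adj[1, 0] = 1.0
adj[2, 3] = adj[3, 2] = 1.0

linked = np.array([1, 1, 0, 0])     # two connected predictors selected
isolated = np.array([1, 0, 1, 0])   # two unconnected predictors selected
print(mrf_log_prior(linked, adj, a=-1.0, b=0.5))    # favoured by the prior
print(mrf_log_prior(isolated, adj, a=-1.0, b=0.5))
```

In the joint approach described above the graph itself is also inferred (via a Gaussian graphical model) rather than fixed a priori, so the adjacency matrix would be updated within the same MCMC sampler.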

7. Identification of Dynamic Simulation Models for Variable Speed Pumped Storage Power Plants

Science.gov (United States)

Moreira, C.; Fulgêncio, N.; Silva, B.; Nicolet, C.; Béguin, A.

2017-04-01

This paper addresses the identification of reduced order models for variable speed pump-turbine plants, including the representation of the dynamic behaviour of the main components: hydraulic system, turbine governors, electromechanical equipment and power converters. A methodology for the identification of appropriate reduced order models for both turbine and pump operating modes is presented and discussed. The methodological approach consists of three main steps: 1) detailed pumped-storage power plant modelling in SIMSEN; 2) identification of reduced order models; and 3) specification of test conditions for performance evaluation.

8. Robust estimation of errors-in-variables models using M-estimators

Science.gov (United States)

Guo, Cuiping; Peng, Junhuan

2017-07-01

Traditional errors-in-variables (EIV) models are widely adopted in the applied sciences. EIV model estimators, however, can be highly biased by gross errors. This paper focuses on robust estimation in EIV models. A new class of robust estimators, called robust weighted total least squares (RWTLS) estimators, is introduced. Robust estimators of the parameters of the EIV models are derived from M-estimators and the Lagrange multiplier method. A simulated example is carried out to demonstrate the performance of the presented RWTLS. The result shows that the RWTLS algorithm can indeed resist gross errors to achieve a reliable solution.
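The M-estimation idea can be sketched as iteratively reweighted total least squares with Huber weights on the orthogonal residuals. This is a simplified stand-in for the RWTLS algorithm, which derives its estimator from Lagrange multipliers and a full EIV weight model; the data and tuning constant here are illustrative:

```python
import numpy as np

def huber_weight(r: np.ndarray, c: float = 1.345) -> np.ndarray:
    """Huber M-estimator weights: 1 inside the threshold c, decaying outside."""
    r = np.abs(r)
    return np.where(r <= c, 1.0, c / np.maximum(r, 1e-12))

def robust_tls_line(x, y, iters=20):
    """Robustly fit the line a*x + b*y = d (unit normal [a, b]) to noisy (x, y)."""
    w = np.ones_like(x)
    for _ in range(iters):
        mx, my = np.average(x, weights=w), np.average(y, weights=w)
        A = np.column_stack([x - mx, y - my]) * np.sqrt(w)[:, None]
        _, _, Vt = np.linalg.svd(A, full_matrices=False)
        a, b = Vt[-1]                      # normal direction of the TLS line
        r = a * (x - mx) + b * (y - my)    # orthogonal residuals
        scale = 1.4826 * np.median(np.abs(r)) + 1e-12   # robust scale (MAD)
        w = huber_weight(r / scale)
    return a, b, a * mx + b * my

# True line y = 2x + 1, with gross errors injected into 5 of 100 points.
rng = np.random.default_rng(1)
x = rng.uniform(0, 10, 100)
y = 2.0 * x + 1.0 + rng.normal(scale=0.05, size=100)
y[:5] += 30.0
a, b, d = robust_tls_line(x, y)
slope = -a / b                             # convert normal form to slope
print(round(slope, 2))
```

Unlike ordinary least squares, the TLS step treats both x and y as noisy (the EIV setting), while the Huber reweighting caps the influence of the five contaminated points.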

9. Uncertainty and Variability in Physiologically-Based Pharmacokinetic (PBPK) Models: Key Issues and Case Studies (Final Report)

Science.gov (United States)

EPA announced the availability of the final report, Uncertainty and Variability in Physiologically-Based Pharmacokinetic (PBPK) Models: Key Issues and Case Studies. This report summarizes some of the recent progress in characterizing uncertainty and variability in physi...

10. 20180311 - Variability of LD50 Values from Rat Oral Acute Toxicity Studies: Implications for Alternative Model Development (SOT)

Science.gov (United States)

Alternative models developed for estimating acute systemic toxicity are generally evaluated using in vivo LD50 values. However, in vivo acute systemic toxicity studies can produce variable results, even when conducted according to accepted test guidelines. This variability can ma...

11. Time-dependent excitation and ionization modelling of absorption-line variability due to GRB080310

DEFF Research Database (Denmark)

Vreeswijk, P.M.; De Cia, A.; Jakobsson, P.

2013-01-01

We model the time-variable absorption of Fe II, Fe III, Si II, C II and Cr II detected in Ultraviolet and Visual Echelle Spectrograph (UVES) spectra of gamma-ray burst (GRB) 080310, with the afterglow radiation exciting and ionizing the interstellar medium in the host galaxy at a redshift of z = 2.427...

12. Global Convergence of the EM Algorithm for Unconstrained Latent Variable Models with Categorical Indicators

Science.gov (United States)

Weissman, Alexander

2013-01-01

Convergence of the expectation-maximization (EM) algorithm to a global optimum of the marginal log likelihood function for unconstrained latent variable models with categorical indicators is presented. The sufficient conditions under which global convergence of the EM algorithm is attainable are provided in an information-theoretic context by…
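The monotone ascent of the marginal log likelihood that underlies any such convergence result is easy to demonstrate on a toy latent variable model with categorical indicators. The two-class latent class model with four binary indicators below is illustrative only:

```python
import numpy as np

# Simulate 500 respondents: class 0 endorses each item with prob 0.8,
# class 1 with prob 0.2; class 1 has mixing weight 0.4.
rng = np.random.default_rng(0)
z = rng.random(500) < 0.4
p_true = np.where(z[:, None], 0.2, 0.8)
X = (rng.random((500, 4)) < p_true).astype(float)

pi = 0.5                               # class-1 mixing weight (initial guess)
p = np.array([[0.6] * 4, [0.3] * 4])   # item response probs per class

def loglik(pi, p):
    like = np.stack([(p[k] ** X * (1 - p[k]) ** (1 - X)).prod(axis=1)
                     for k in (0, 1)], axis=1)
    return np.log(like @ np.array([1 - pi, pi])).sum(), like

lls = []
for _ in range(50):
    ll, like = loglik(pi, p)
    lls.append(ll)
    # E-step: posterior class responsibilities
    post = like * np.array([1 - pi, pi])
    post /= post.sum(axis=1, keepdims=True)
    # M-step: update mixing weight and item response probabilities
    pi = post[:, 1].mean()
    p = (post.T @ X) / post.sum(axis=0)[:, None]

assert all(b >= a - 1e-9 for a, b in zip(lls, lls[1:]))  # monotone ascent
print(round(lls[-1] - lls[0], 3), "log-likelihood gain")
```

Monotone ascent alone guarantees only convergence to a stationary point; the result summarized above concerns the stronger property of reaching a global optimum under sufficient conditions.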

13. Prognostic modeling of oral cancer by gene profiles and clinicopathological co-variables

NARCIS (Netherlands)

Mes, Steven W.; te Beest, Dennis; Poli, Tito; Rossi, Silvia; Scheckenbach, Kathrin; Van Wieringen, Wessel N.; Brink, Arjen; Bertani, Nicoletta; Lanfranco, Davide; Silini, Enrico M.; van Diest, Paul J.; Bloemena, Elisabeth; René Leemans, C.; Van De Wiel, Mark A.; Brakenhoff, Ruud H

2017-01-01

Accurate staging and outcome prediction is a major problem in clinical management of oral cancer patients, hampering high precision treatment and adjuvant therapy planning. Here, we have built and validated multivariable models that integrate gene signatures with clinical and pathological variables

14. Bayesian modeling of measurement error in predictor variables using item response theory

NARCIS (Netherlands)

Fox, Gerardus J.A.; Glas, Cornelis A.W.

2000-01-01

This paper focuses on handling measurement error in predictor variables using item response theory (IRT). Measurement error is of great importance in the assessment of theoretical constructs, such as intelligence or school climate. Measurement error is modeled by treating the predictors as unobserved

15. Natural conjugate priors for the instrumental variables regression model applied to the Angrist-Krueger data

NARCIS (Netherlands)

L.F. Hoogerheide (Lennart); F.R. Kleibergen (Frank); H.K. van Dijk (Herman)

2006-01-01

We propose a natural conjugate prior for the instrumental variables regression model. The prior is a natural conjugate one since the marginal prior and posterior of the structural parameter have the same functional expressions, which directly reveal the update from prior to posterior. The

16. A Mixed-Methodological Examination of Investment Model Variables among Abused and Nonabused College Women

Science.gov (United States)

Dardis, Christina M.; Kelley, Erika L.; Edwards, Katie M.; Gidycz, Christine A.

2013-01-01

Objective: This study assessed abused and nonabused women's perceptions of Investment Model (IM) variables (i.e., relationship investment, satisfaction, commitment, quality of alternatives) utilizing a mixed-methods design. Participants: Participants included 102 college women, approximately half of whom were in abusive dating relationships.…

17. Semantic Model of Variability and Capabilities of IoT Applications for Embedded Software Ecosystems

DEFF Research Database (Denmark)

Tomlein, Matus; Grønbæk, Kaj

2016-01-01

Applications in embedded open software ecosystems for Internet of Things devices open new challenges regarding how their variability and capabilities should be modeled. In collaboration with an industrial partner, we have recognized that such applications have complex constraints on the context. We...

18. Estimating forest variables from top-of-atmosphere radiance satellite measurements using coupled radiative transfer models

NARCIS (Netherlands)

Laurent, V.C.E.; Verhoef, W.; Clevers, J.G.P.W.; Schaepman, M.E.

2011-01-01

Traditionally, it is necessary to pre-process remote sensing data to obtain top of canopy (TOC) reflectances before applying physically-based model inversion techniques to estimate forest variables. Corrections for atmospheric, adjacency, topography, and surface directional effects are applied

19. A New Integrated Weighted Model in SNOW-V10: Verification of Categorical Variables

Science.gov (United States)

Huang, Laura X.; Isaac, George A.; Sheng, Grant

2014-01-01

This paper presents the verification results for nowcasts of seven categorical variables from an integrated weighted model (INTW) and the underlying numerical weather prediction (NWP) models. Nowcasting, or short range forecasting (0-6 h), over complex terrain with sufficient accuracy is highly desirable but a very challenging task. A weighting, evaluation, bias correction and integration system (WEBIS) for generating nowcasts by integrating NWP forecasts and high frequency observations was used during the Vancouver 2010 Olympic and Paralympic Winter Games as part of the Science of Nowcasting Olympic Weather for Vancouver 2010 (SNOW-V10) project. Forecast data from the Canadian high-resolution deterministic NWP system with three nested grids (at 15-, 2.5- and 1-km horizontal grid-spacing) were selected as background gridded data for generating the integrated nowcasts. The seven forecast variables of temperature, relative humidity, wind speed, wind gust, visibility, ceiling and precipitation rate are treated as categorical variables for verifying the integrated weighted forecasts. Verification of forecasts from INTW and the NWP models at 15 sites showed that the integrated weighted model produced more accurate forecasts for the seven selected variables, regardless of location. This is based on the multi-categorical Heidke skill scores for the test period 12 February to 21 March 2010.
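The multi-categorical Heidke skill score used in this verification measures forecast accuracy relative to chance agreement, computed from a forecast-observation contingency table. A minimal sketch with an invented three-category table:

```python
import numpy as np

def heidke_skill_score(table: np.ndarray) -> float:
    """HSS from a contingency table: table[i, j] = count of
    (forecast category i, observed category j)."""
    n = table.sum()
    correct = np.trace(table) / n                              # proportion correct
    expected = (table.sum(axis=1) @ table.sum(axis=0)) / n**2  # chance agreement
    return (correct - expected) / (1.0 - expected)

# Invented 3-category example (e.g. light / moderate / heavy precipitation)
table = np.array([[30,  5,  2],
                  [ 6, 25,  4],
                  [ 1,  3, 24]])
print(round(heidke_skill_score(table), 3))  # -> 0.684
```

HSS is 1 for a perfect forecast, 0 for forecasts no better than chance, and negative when the forecast is worse than chance, which makes it a natural yardstick for comparing the integrated model against the individual NWP models.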

20. Novel Modeling Tools for Propagating Climate Change Variability and Uncertainty into Hydrodynamic Forecasts

Science.gov (United States)

Understanding impacts of climate change on hydrodynamic processes and ecosystem response within the Great Lakes is an important and challenging task. Variability in future climate conditions, uncertainty in rainfall-runoff model forecasts, the potential for land use change, and t...