#### Sample records for model complex variable

1. Complex variables

CERN Document Server

Fisher, Stephen D

1999-01-01

The most important topics in the theory and application of complex variables receive a thorough, coherent treatment in this introductory text. Intended for undergraduates or graduate students in science, mathematics, and engineering, this volume features hundreds of solved examples, exercises, and applications designed to foster a complete understanding of complex variables as well as an appreciation of their mathematical beauty and elegance. Prerequisites are minimal; a three-semester course in calculus will suffice to prepare students for discussions of these topics: the complex plane, basic

2. Complex variables

CERN Document Server

Flanigan, Francis J

2010-01-01

A caution to mathematics professors: Complex Variables does not follow conventional outlines of course material. One reviewer, noting its originality, wrote: "A standard text is often preferred [to a superior text like this] because the professor knows the order of topics and the problems, and doesn't really have to pay attention to the text. He can go to class without preparation." Not so here: Dr. Flanigan treats this most important field of contemporary mathematics in a most unusual way. While all the material for an advanced undergraduate or first-year graduate course is covered, discussion

3. Coupled variable selection for regression modeling of complex treatment patterns in a clinical cancer registry.

Science.gov (United States)

Schmidtmann, I; Elsäßer, A; Weinmann, A; Binder, H

2014-12-30

For determining a manageable set of covariates potentially influential with respect to a time-to-event endpoint, Cox proportional hazards models can be combined with variable selection techniques, such as stepwise forward selection or backward elimination based on p-values, or regularized regression techniques such as component-wise boosting. Cox regression models have also been adapted for dealing with more complex event patterns, for example, for competing risks settings with separate, cause-specific hazard models for each event type, or for determining the prognostic effect pattern of a variable over different landmark times, with one conditional survival model for each landmark. Motivated by a clinical cancer registry application, where complex event patterns have to be dealt with and variable selection is needed at the same time, we propose a general approach for linking variable selection between several Cox models. Specifically, we combine score statistics for each covariate across models by Fisher's method as a basis for variable selection. This principle is implemented for a stepwise forward selection approach as well as for a regularized regression technique. In an application to data from hepatocellular carcinoma patients, the coupled stepwise approach is seen to facilitate joint interpretation of the different cause-specific Cox models. In conditional survival models at landmark times, which address updates of prediction as time progresses and both treatment and other potential explanatory variables may change, the coupled regularized regression approach identifies potentially important, stably selected covariates together with their effect time pattern, despite having only a small number of events. These results highlight the promise of the proposed approach for coupling variable selection between Cox models, which is particularly relevant for modeling clinical cancer registries with their complex event patterns.
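
The coupling principle described in this record, combining per-model score statistics via Fisher's method, can be sketched in a few lines. The p-values below are hypothetical stand-ins for the score-test results of three cause-specific Cox models; this is a minimal illustration of the combination rule, not the paper's implementation.

```python
# Sketch of Fisher's method for coupling variable selection across Cox
# models: combine the k per-model score-test p-values for one covariate
# into X = -2 * sum(log p_i), which is chi-squared with 2k degrees of
# freedom under the global null hypothesis.
import numpy as np
from scipy import stats

def fisher_combine(pvalues):
    pvalues = np.asarray(pvalues, dtype=float)
    statistic = -2.0 * np.sum(np.log(pvalues))
    combined_p = stats.chi2.sf(statistic, df=2 * len(pvalues))
    return statistic, combined_p

# Hypothetical score-test p-values for one covariate in three
# cause-specific Cox models (one per competing event type).
p_per_model = [0.04, 0.20, 0.09]
stat, p_combined = fisher_combine(p_per_model)
print(f"Fisher statistic = {stat:.2f}, combined p = {p_combined:.4f}")
```

A covariate would then be ranked or selected across all models by this single combined p-value, mirroring the coupled stepwise idea of the abstract.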

4. Several complex variables

International Nuclear Information System (INIS)

Field, M.J.

1976-01-01

Topics discussed include the elementary theory of holomorphic functions of several complex variables; the Weierstrass preparation theorem; meromorphic functions, holomorphic line bundles, and divisors; elliptic operators on compact manifolds; Hermitian connections; and the Hodge decomposition theorem. (author)

5. Variable speed limit strategies analysis with mesoscopic traffic flow model based on complex networks

Science.gov (United States)

Li, Shu-Bin; Cao, Dan-Ni; Dang, Wen-Xiu; Zhang, Lin

As a new cross-discipline, complexity science has penetrated every field of economy and society, and with the arrival of big data its research has reached a new summit. In recent years it has offered a new perspective on traffic control through complex networks theory: the interactions of many kinds of information in a traffic system form a huge complex system. A mesoscopic traffic flow model is improved with variable speed limits (VSL), and a simulation process is designed based on complex networks theory combined with the proposed model. This paper studies the effect of VSL on dynamic traffic flow and then analyzes the optimal VSL control strategy in different network topologies. The conclusions of this research are useful for putting forward reasonable transportation plans and developing effective traffic management and control measures.

6. On Complex Random Variables

Directory of Open Access Journals (Sweden)

Anwer Khurshid

2012-07-01

In this paper, it is shown that a complex multivariate random variable is a complex multivariate normal random variable of a given dimensionality if and only if all nondegenerate complex linear combinations of it have a complex univariate normal distribution. The characteristic function has been derived, and simpler forms of some theorems have been given using this characterization theorem without assuming that the variance-covariance matrix of the vector is Hermitian positive definite. Marginal distributions have been given. In addition, a complex multivariate t-distribution has been defined and its density derived. A characterization of the complex multivariate t-distribution is given, and a few possible uses of this distribution are suggested.
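
A quick numerical illustration of the property this record characterizes (all names, dimensions, and coefficients below are arbitrary choices for the demo, not from the paper): draw a circularly-symmetric complex multivariate normal vector and check the first two moments of a fixed complex linear combination.

```python
# Empirical check: a complex linear combination of a complex multivariate
# normal vector behaves as a univariate complex normal, here verified
# through its variance and (vanishing) pseudo-variance.
import numpy as np

rng = np.random.default_rng(0)
p, n = 3, 200_000

# Circularly-symmetric complex normal Z = (X + iY)/sqrt(2), X, Y ~ N(0, I),
# so that E[Z Z^H] = I.
Z = (rng.standard_normal((n, p)) + 1j * rng.standard_normal((n, p))) / np.sqrt(2)

a = np.array([1.0 - 2.0j, 0.5j, 2.0])   # fixed complex coefficients
W = Z @ a                                # nondegenerate linear combination

var_W = np.mean(np.abs(W) ** 2)          # should approach a^H a = 9.25
pseudo_var_W = np.mean(W ** 2)           # should approach 0 (circularity)
print("empirical variance :", round(float(var_W), 3))
print("pseudo-variance mag:", round(float(np.abs(pseudo_var_W)), 4))
```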

7. A Variable Stiffness Analysis Model for Large Complex Thin-Walled Guide Rail

Directory of Open Access Journals (Sweden)

Wang Xiaolong

2016-01-01

A large complex thin-walled guide rail has a complicated structure and nonuniform, low rigidity. Traditional cutting simulations are time consuming because of the huge computation involved, especially for a large workpiece. To solve these problems, a more efficient variable stiffness analysis model is proposed, which can obtain quantitative stiffness values for the machining surface. By applying simulated cutting forces at sampling points in the finite element analysis software ABAQUS, the single-direction variable stiffness rule can be obtained. A variable stiffness matrix is proposed by analyzing the multi-directional coupled variable stiffness rule. Combined with the cutting force values in three directions, the reasonableness of existing processing parameters can be verified and optimized cutting parameters can be designed.

8. Assessing multiscale complexity of short heart rate variability series through a model-based linear approach

Science.gov (United States)

Porta, Alberto; Bari, Vlasta; Ranuzzi, Giovanni; De Maria, Beatrice; Baselli, Giuseppe

2017-09-01

We propose a multiscale complexity (MSC) method that assesses irregularity in assigned frequency bands and is appropriate for analyzing short time series. It is grounded in the identification of the coefficients of an autoregressive model, the computation of the mean position of the poles generating the components of the power spectral density in an assigned frequency band, and the assessment of their distance from the unit circle in the complex plane. The MSC method was tested on simulations and applied to the short heart period (HP) variability series recorded during graded head-up tilt in 17 subjects (age from 21 to 54 years, median = 28 years, 7 females) and during paced breathing protocols in 19 subjects (age from 27 to 35 years, median = 31 years, 11 females) to assess the contribution of time scales typical of cardiac autonomic control, namely the low frequency (LF, from 0.04 to 0.15 Hz) and high frequency (HF, from 0.15 to 0.5 Hz) bands, to the complexity of cardiac regulation. The proposed MSC technique was compared to a traditional model-free multiscale method grounded in information theory, i.e., multiscale entropy (MSE). The approach suggests that the reduction of HP variability complexity observed during graded head-up tilt is due to a regularization of the HP fluctuations in the LF band via a possible intervention of sympathetic control, and that the decrement of HP variability complexity observed during slow breathing is the result of the regularization of the HP variations in both LF and HF bands, thus implying the action of physiological mechanisms working at time scales even different from that of respiration. MSE did not distinguish experimental conditions at time scales larger than 1. Over short time series, MSC allows a more insightful association between cardiac control complexity and the physiological mechanisms modulating cardiac rhythm compared to a more traditional tool such as MSE.
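
The pole-based idea in this record can be sketched as follows: fit an AR model, locate the poles of the spectral components, and use the distance of the in-band poles from the unit circle as a regularity index. The AR order, simulated series, sampling convention (one sample per beat), and band edges are illustrative assumptions, not the authors' protocol.

```python
# Minimal sketch of a pole-based band-limited complexity index:
# 1) simulate an AR(2) series with an oscillatory pole pair near 0.1 Hz,
# 2) re-identify the AR coefficients by least squares,
# 3) compute 1 - |pole| for poles whose frequency falls in the LF band.
import numpy as np

rng = np.random.default_rng(1)
fs = 1.0                       # assumed: 1 sample per heartbeat

r, f0 = 0.9, 0.10              # true pole modulus and frequency (Hz)
a1, a2 = 2 * r * np.cos(2 * np.pi * f0 / fs), -r * r
x = np.zeros(5000)
for t in range(2, x.size):
    x[t] = a1 * x[t - 1] + a2 * x[t - 2] + rng.standard_normal()

# Least-squares AR(2) identification from the lagged design matrix.
Y = x[2:]
X = np.column_stack([x[1:-1], x[0:-2]])
coef, *_ = np.linalg.lstsq(X, Y, rcond=None)
poles = np.roots([1.0, -coef[0], -coef[1]])

f_pole = np.abs(np.angle(poles)) / (2 * np.pi) * fs
in_lf = (f_pole >= 0.04) & (f_pole <= 0.15)          # LF band membership
msc_lf = 1.0 - np.mean(np.abs(poles[in_lf]))         # closer to 0 => more regular
print("pole moduli:", np.round(np.abs(poles), 3))
print("LF complexity index:", round(float(msc_lf), 3))
```

Since the true pole modulus is 0.9, the recovered LF index should sit near 0.1; a more regular (more predictable) band component drives the index toward zero.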

9. Improved variable reduction in partial least squares modelling based on predictive-property-ranked variables and adaptation of partial least squares complexity.

Science.gov (United States)

Andries, Jan P M; Vander Heyden, Yvan; Buydens, Lutgarde M C

2011-10-31

The calibration performance of partial least squares for one response variable (PLS1) can be improved by elimination of uninformative variables. Many methods are based on so-called predictive variable properties, which are functions of various PLS-model parameters and which may change during the variable reduction process. In these methods, variable reduction is performed on the variables ranked in descending order for a given variable property. The methods start with full-spectrum modelling. Iteratively, until a specified number of remaining variables is reached, the variable with the smallest property value is eliminated; a new PLS model is calculated, followed by a renewed ranking of the variables. The Stepwise Variable Reduction methods using Predictive-Property-Ranked Variables are denoted as SVR-PPRV. In the existing SVR-PPRV methods the PLS model complexity is kept constant during the variable reduction process. In this study, three new SVR-PPRV methods are proposed, in which a possibility for decreasing the PLS model complexity during the variable reduction process is built in. We therefore denote our methods as PPRVR-CAM methods (Predictive-Property-Ranked Variable Reduction with Complexity Adapted Models). The selective and predictive abilities of the new methods are investigated and tested, using the absolute PLS regression coefficients as predictive property. They were compared with two modifications of existing SVR-PPRV methods (with constant PLS model complexity) and with two reference methods: uninformative variable elimination followed by either a genetic algorithm for PLS (UVE-GA-PLS) or an interval PLS (UVE-iPLS). The performance of the methods is investigated in conjunction with two data sets from near-infrared sources (NIR) and one simulated set. The selective and predictive performances of the variable reduction methods are compared statistically using the Wilcoxon signed rank test. The three newly developed PPRVR-CAM methods were able to retain…

10. Are Model Transferability And Complexity Antithetical? Insights From Validation of a Variable-Complexity Empirical Snow Model in Space and Time

Science.gov (United States)

Lute, A. C.; Luce, Charles H.

2017-11-01

The related challenges of predictions in ungauged basins and predictions in ungauged climates point to the need to develop environmental models that are transferable across both space and time. Hydrologic modeling has historically focused on modeling one or only a few basins using highly parameterized conceptual or physically based models. However, model parameters and structures have been shown to change significantly when calibrated to new basins or time periods, suggesting that model complexity and model transferability may be antithetical. Empirical space-for-time models provide a framework within which to assess model transferability and any tradeoff with model complexity. Using 497 SNOTEL sites in the western U.S., we develop space-for-time models of April 1 SWE and Snow Residence Time based on mean winter temperature and cumulative winter precipitation. The transferability of the models to new conditions (in both space and time) is assessed using non-random cross-validation tests with consideration of the influence of model complexity on transferability. As others have noted, the algorithmic empirical models transfer best when minimal extrapolation in input variables is required. Temporal split-sample validations use pseudoreplicated samples, resulting in the selection of overly complex models, which has implications for the design of hydrologic model validation tests. Finally, we show that low to moderate complexity models transfer most successfully to new conditions in space and time, providing empirical confirmation of the parsimony principle.
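
The mechanics of the non-random validation this record argues for can be sketched with a toy space-for-time regression: a snow metric regressed on winter temperature and precipitation, evaluated with both a random k-fold split and a grouped (leave-region-out) split. All data, coefficients, and groupings below are synthetic assumptions, not the SNOTEL analysis.

```python
# Toy space-for-time model: compare random k-fold CV against a grouped
# split that withholds whole (hypothetical) regions, the kind of
# non-random test needed to probe spatial transferability.
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import GroupKFold, KFold, cross_val_score

rng = np.random.default_rng(3)
n_sites = 200
temp = rng.normal(-5, 3, n_sites)       # mean winter temperature (degC)
precip = rng.normal(600, 150, n_sites)  # cumulative winter precipitation (mm)
swe = 0.8 * precip - 25 * temp + rng.normal(0, 40, n_sites)  # April 1 SWE (mm)

X = np.column_stack([temp, precip])
groups = rng.integers(0, 5, n_sites)    # 5 hypothetical regions

r2_random = cross_val_score(LinearRegression(), X, swe,
                            cv=KFold(5, shuffle=True, random_state=0))
r2_spatial = cross_val_score(LinearRegression(), X, swe,
                             cv=GroupKFold(5), groups=groups)
print("random-split R^2 :", round(float(r2_random.mean()), 3))
print("spatial-split R^2:", round(float(r2_spatial.mean()), 3))
```

In this synthetic case the two scores agree because the regions are statistically identical; with real, spatially clustered data the grouped split typically scores lower, exposing the optimism that pseudoreplicated random splits introduce.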

11. Holocene glacier variability: three case studies using an intermediate-complexity climate model

NARCIS (Netherlands)

Weber, S.L.; Oerlemans, J.

2003-01-01

Synthetic glacier length records are generated for the Holocene epoch using a process-based glacier model coupled to the intermediate-complexity climate model ECBilt. The glacier model consists of a mass-balance component and an ice-flow component. The climate model is forced by the insolation change…

12. Simulation of groundwater flow in the glacial aquifer system of northeastern Wisconsin with variable model complexity

Science.gov (United States)

Juckem, Paul F.; Clark, Brian R.; Feinstein, Daniel T.

2017-05-04

The U.S. Geological Survey, National Water-Quality Assessment seeks to map estimated intrinsic susceptibility of the glacial aquifer system of the conterminous United States. Improved understanding of the hydrogeologic characteristics that explain spatial patterns of intrinsic susceptibility, commonly inferred from estimates of groundwater age distributions, is sought so that the methods used for the estimation process are properly equipped. An important step beyond identifying relevant hydrogeologic datasets, such as glacial geology maps, is to evaluate how incorporation of these resources into process-based models using differing levels of detail could affect resulting simulations of groundwater age distributions and, thus, estimates of intrinsic susceptibility. This report describes the construction and calibration of three groundwater-flow models of northeastern Wisconsin that were developed with differing levels of complexity to provide a framework for subsequent evaluations of the effects of process-based model complexity on estimations of groundwater age distributions for withdrawal wells and streams. Preliminary assessments, which focused on the effects of model complexity on simulated water levels and base flows in the glacial aquifer system, illustrate that simulation of vertical gradients using multiple model layers improves simulated heads more in low-permeability units than in high-permeability units. Moreover, simulation of heterogeneous hydraulic conductivity fields in coarse-grained and some fine-grained glacial materials produced a larger improvement in simulated water levels in the glacial aquifer system compared with simulation of uniform hydraulic conductivity within zones. The relation between base flows and model complexity was less clear; however, it generally seemed to follow a similar pattern as water levels. Although increased model complexity resulted in improved calibrations, future application of the models using simulated particle…

13. The mechanism behind internally generated centennial-to-millennial scale climate variability in an earth system model of intermediate complexity

NARCIS (Netherlands)

Friedrich, T.; Timmermann, A.; Menviel, L.; Elison Timm, O.; Mouchet, A.; Roche, D.M.V.A.P.

2010-01-01

The mechanism triggering centennial-to-millennial-scale variability of the Atlantic Meridional Overturning Circulation (AMOC) in the earth system model of intermediate complexity LOVECLIM is investigated. It is found that for several climate boundary conditions such as low obliquity values (∼22.1°)…

14. Surface Complexation Modeling in Variable Charge Soils: Charge Characterization by Potentiometric Titration

Directory of Open Access Journals (Sweden)

Giuliano Marchi

2015-10-01

Intrinsic equilibrium constants of 17 representative Brazilian Oxisols were estimated from potentiometric titration measuring the adsorption of H+ and OH− on amphoteric surfaces in suspensions of varying ionic strength. Equilibrium constants were fitted to two surface complexation models: diffuse layer and constant capacitance. The former was fitted by calculating total site concentration from curve-fitting estimates and pH-extrapolation of the intrinsic equilibrium constants to the PZNPC (hand calculation), considering one and two reactive sites, and by the FITEQL software. The latter was fitted only by FITEQL, with one reactive site. Soil chemical and physical properties were correlated to the intrinsic equilibrium constants. Both surface complexation models satisfactorily fit our experimental data, but for results at low ionic strength, optimization did not converge in FITEQL. The data were incorporated into Visual MINTEQ, providing a modeling system that can predict protonation-dissociation reactions at the soil surface under changing environmental conditions.

15. Complex Variables throughout the Curriculum

Science.gov (United States)

D'Angelo, John P.

2017-01-01

We offer many specific detailed examples, several of which are new, that instructors can use (in lecture or as student projects) to revitalize the role of complex variables throughout the curriculum. We conclude with three primary recommendations: revise the syllabus of Calculus II to allow early introductions of complex numbers and linear…

16. Simulating Complex, Cold-region Process Interactions Using a Multi-scale, Variable-complexity Hydrological Model

Science.gov (United States)

Marsh, C.; Pomeroy, J. W.; Wheater, H. S.

2017-12-01

Accurate management of water resources is necessary for social, economic, and environmental sustainability worldwide. In locations with seasonal snowcovers, the accurate prediction of these water resources is further complicated by frozen soils, solid-phase precipitation, blowing snow transport, and snowcover-vegetation-atmosphere interactions. Complex process interactions and feedbacks are a key feature of hydrological systems and may result in emergent phenomena, i.e., the arising of novel and unexpected properties within a complex system. One example is the feedback associated with blowing snow redistribution, which can lead to drifts that cause locally increased soil moisture, thus increasing plant growth that in turn impacts snow redistribution, creating larger drifts. Attempting to simulate these emergent behaviours is a significant challenge, however, and there is concern that process conceptualizations within current models are too incomplete to represent the needed interactions. An improved understanding of the role of emergence in hydrological systems often requires high-resolution distributed numerical hydrological models that incorporate the relevant process dynamics. The Canadian Hydrological Model (CHM) provides a novel tool for examining cold-region hydrological systems. Key features include efficient terrain representation, allowing simulations at various spatial scales, reduced computational overhead, and a modular process representation allowing for an alternative-hypothesis framework. Using both physics-based and conceptual process representations, sourced from long-term process studies and the current cold-regions literature, allows for comparison of process representations and, importantly, of their ability to produce emergent behaviours. Examining the system in a holistic, process-based manner can, we hope, yield important insights and aid in the development of improved process representations.

17. Surface Complexation Modeling in Variable Charge Soils: Prediction of Cadmium Adsorption

Directory of Open Access Journals (Sweden)

Giuliano Marchi

2015-10-01

Intrinsic equilibrium constants for 22 representative Brazilian Oxisols were estimated from a cadmium adsorption experiment. Equilibrium constants were fitted to two surface complexation models: diffuse layer and constant capacitance. Intrinsic equilibrium constants were optimized by FITEQL and by hand calculation using Visual MINTEQ in sweep mode and Excel spreadsheets. Data from both models were incorporated into Visual MINTEQ. Constants estimated by FITEQL and incorporated in Visual MINTEQ failed to predict the observed data accurately. However, the FITEQL raw output rendered good results when predicted values were compared directly with observed values, instead of incorporating the estimated constants into Visual MINTEQ. Intrinsic equilibrium constants optimized by hand calculation and incorporated in Visual MINTEQ reliably predicted Cd adsorption reactions on soil surfaces under changing environmental conditions.

18. Incorporating soil variability in continental soil water modelling: a trade-off between data availability and model complexity

Science.gov (United States)

Peeters, L.; Crosbie, R. S.; Doble, R.; van Dijk, A. I. J. M.

2012-04-01

Developing a continental land surface model implies finding a balance between the complexity in representing the system processes and the availability of reliable data to drive, parameterise and calibrate the model. While a high level of process understanding at plot or catchment scales may warrant a complex model, such data are not available at the continental scale. This data sparsity is especially an issue for the Australian Water Resources Assessment system, AWRA-L, a land-surface model designed to estimate the components of the water balance for the Australian continent. This study focuses on the conceptualization and parametrization of the soil drainage process in AWRA-L. Traditionally, soil drainage is simulated with Richards' equation, which is highly non-linear. As general analytic solutions are not available, this equation is usually solved numerically. In AWRA-L, however, we introduce a simpler function based on simulation experiments that solve Richards' equation. In the simplified function, the soil drainage rate, the ratio of drainage (D) over storage (S), decreases exponentially with relative water content. This function is controlled by three parameters, the soil water storage at field capacity (SFC), the drainage fraction at field capacity (KFC) and a drainage function exponent (β): D/S = KFC · exp(−β (1 − S/SFC)). To obtain spatially variable estimates of these three parameters, the Atlas of Australian Soils is used, which lists soil hydraulic properties for each soil profile type. For each soil profile type in the Atlas, 10 days of draining an initially fully saturated, freely draining soil is simulated using HYDRUS-1D. With field capacity defined as the volume of water in the soil after 1 day, the remaining parameters can be obtained by fitting the AWRA-L soil drainage function to the HYDRUS-1D results. This model conceptualisation fully exploits the data available in the Atlas of Australian Soils, without the need to solve the non…
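
The fitting step described in this record, matching the exponential drainage function D/S = KFC · exp(−β (1 − S/SFC)) to drainage-simulation output, is a standard nonlinear least-squares problem. The synthetic "HYDRUS-like" data and all parameter values below are assumptions for the demo; SFC is treated as known, consistent with its definition via 1-day drainage.

```python
# Fit the simplified AWRA-L-style drainage function to synthetic
# drainage-simulation output with scipy's nonlinear least squares.
import numpy as np
from scipy.optimize import curve_fit

S_fc = 0.30  # storage at field capacity, treated as known (1-day definition)

def drainage_ratio(S, K_fc, beta):
    # D/S = K_FC * exp(-beta * (1 - S/S_FC))
    return K_fc * np.exp(-beta * (1.0 - S / S_fc))

# Synthetic draining-soil data generated from known parameters plus
# 2% multiplicative noise (stand-in for HYDRUS-1D output).
rng = np.random.default_rng(4)
S_grid = np.linspace(0.15, 0.40, 25)
D_over_S = drainage_ratio(S_grid, 0.05, 6.0) * (1 + 0.02 * rng.standard_normal(25))

popt, _ = curve_fit(drainage_ratio, S_grid, D_over_S, p0=[0.1, 5.0])
K_fc, beta = popt
print(f"fitted K_FC = {K_fc:.3f}, beta = {beta:.2f}")
```

Fixing SFC before the fit also avoids a practical identifiability problem: with all three parameters free, KFC, β, and SFC only enter through two independent combinations, so the fit would be degenerate.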

19. Solution Strategies and Achievement in Dutch Complex Arithmetic: Latent Variable Modeling of Change

Science.gov (United States)

Hickendorff, Marian; Heiser, Willem J.; van Putten, Cornelis M.; Verhelst, Norman D.

2009-01-01

In the Netherlands, national assessments at the end of primary school (Grade 6) show a decline of achievement on problems of complex or written arithmetic over the last two decades. The present study aims at contributing to an explanation of the large achievement decrease on complex division, by investigating the strategies students used in…

20. Predictive-property-ranked variable reduction in partial least squares modelling with final complexity adapted models: comparison of properties for ranking.

Science.gov (United States)

Andries, Jan P M; Vander Heyden, Yvan; Buydens, Lutgarde M C

2013-01-14

The calibration performance of partial least squares regression for one response (PLS1) can be improved by eliminating uninformative variables. Many variable-reduction methods are based on so-called predictor-variable properties or predictive properties, which are functions of various PLS-model parameters and which may change during the steps of the variable-reduction process. Recently, a new predictive-property-ranked variable reduction method with final complexity adapted models, denoted as PPRVR-FCAM or simply FCAM, was introduced. It is a backward variable elimination method applied to the predictive-property-ranked variables. The variable number is first reduced, with constant PLS1 model complexity A, until A variables remain, followed by a further decrease in PLS complexity, allowing the final selection of small numbers of variables. In this study, the utility and effectiveness of six individual and nine combined predictor-variable properties are investigated for three data sets, when used in the FCAM method. The individual properties include the absolute value of the PLS1 regression coefficient (REG), the significance of the PLS1 regression coefficient (SIG), the norm of the loading weight (NLW) vector, the variable importance in the projection (VIP), the selectivity ratio (SR), and the squared correlation coefficient of a predictor variable with the response y (COR). The selective and predictive performances of the models resulting from the use of these properties are statistically compared using the one-tailed Wilcoxon signed rank test. The results indicate that the models resulting from variable reduction with the FCAM method, using individual or combined properties, have similar or better predictive abilities than the full-spectrum models. After mean-centring of the data, REG and SIG provide low numbers of informative variables, with a meaning relevant to the response, lower than the other individual properties, while the predictive abilities are…

1. In silico, experimental, mechanistic model for extended-release felodipine disposition exhibiting complex absorption and a highly variable food interaction.

Directory of Open Access Journals (Sweden)

Sean H J Kim

The objective of this study was to develop and explore new, in silico experimental methods for deciphering the complex, highly variable absorption and food interaction pharmacokinetics observed for a modified-release drug product. Toward that aim, we constructed an executable software analog of the study participants to whom the product was administered orally. The analog is an object- and agent-oriented, discrete event system, which consists of grid spaces and event mechanisms that map abstractly to different physiological features and processes. Analog mechanisms were made sufficiently complicated to achieve prespecified similarity criteria. An equation-based gastrointestinal transit model with nonlinear mixed effects analysis provided a standard for comparison. Subject-specific parameterizations enabled each executed analog's plasma profile to mimic features of the corresponding six individual pairs of subject plasma profiles. All achieved the prespecified, quantitative similarity criteria and outperformed the gastrointestinal transit model estimations. We observed important subject-specific interactions within the simulation and mechanistic differences between the two models. We hypothesize that mechanisms, events, and their causes occurring during simulations had counterparts within the food interaction study: they are working, evolvable, concrete theories of the dynamic interactions occurring within individual subjects. The approach presented provides new experimental strategies for unraveling the mechanistic basis of complex pharmacological interactions and observed variability.

2. The mechanism behind internally generated centennial-to-millennial scale climate variability in an earth system model of intermediate complexity

Directory of Open Access Journals (Sweden)

T. Friedrich

2010-08-01

The mechanism triggering centennial-to-millennial-scale variability of the Atlantic Meridional Overturning Circulation (AMOC) in the earth system model of intermediate complexity LOVECLIM is investigated. It is found that for several climate boundary conditions, such as low obliquity values (~22.1°) or LGM albedo, internally generated centennial-to-millennial-scale variability occurs in the North Atlantic region. Stochastic excitations of the density-driven overturning circulation in the Nordic Seas can create regional sea-ice anomalies and a subsequent reorganization of the atmospheric circulation. The resulting remote atmospheric anomalies over Hudson Bay can release freshwater pulses into the Labrador Sea and significantly increase snowfall in this region, leading to a subsequent reduction of convective activity. The millennial-scale AMOC oscillations disappear if LGM bathymetry (with a closed Hudson Bay) is prescribed or if the freshwater pulses are suppressed artificially. Furthermore, our study documents the process of AMOC recovery as well as the global marine and terrestrial carbon cycle response to centennial-to-millennial-scale AMOC variability.

3. Complex matrix model duality

International Nuclear Information System (INIS)

Brown, T.W.

2010-11-01

The same complex matrix model calculates both tachyon scattering for the c=1 non-critical string at the self-dual radius and certain correlation functions of half-BPS operators in N=4 super-Yang-Mills. It is dual to another complex matrix model where the couplings of the first model are encoded in the Kontsevich-like variables of the second. The duality between the theories is mirrored by the duality of their Feynman diagrams. Analogously to the Hermitian Kontsevich-Penner model, the correlation functions of the second model can be written as sums over discrete points in subspaces of the moduli space of punctured Riemann surfaces. (orig.)

4. Complex matrix model duality

Energy Technology Data Exchange (ETDEWEB)

Brown, T.W.

2010-11-15

The same complex matrix model calculates both tachyon scattering for the c=1 non-critical string at the self-dual radius and certain correlation functions of half-BPS operators in N=4 super-Yang-Mills. It is dual to another complex matrix model where the couplings of the first model are encoded in the Kontsevich-like variables of the second. The duality between the theories is mirrored by the duality of their Feynman diagrams. Analogously to the Hermitian Kontsevich-Penner model, the correlation functions of the second model can be written as sums over discrete points in subspaces of the moduli space of punctured Riemann surfaces. (orig.)

5. Analytic functions of several complex variables

CERN Document Server

Gunning, Robert C

2009-01-01

The theory of analytic functions of several complex variables enjoyed a period of remarkable development in the middle part of the twentieth century. After initial successes by Poincaré and others in the late 19th and early 20th centuries, the theory encountered obstacles that prevented it from growing quickly into an analogue of the theory for functions of one complex variable. Beginning in the 1930s, initially through the work of Oka, then H. Cartan, and continuing with the work of Grauert, Remmert, and others, new tools were introduced into the theory of several complex variables that resol

6. Harmonic and complex analysis in several variables

CERN Document Server

Krantz, Steven G

2017-01-01

Authored by a ranking authority in the harmonic analysis of several complex variables, this book embodies a state-of-the-art entrée at the intersection of two important fields of research: complex analysis and harmonic analysis. Written with the graduate student in mind, it assumes that the reader has familiarity with the basics of complex analysis of one and several complex variables as well as with real and functional analysis. The monograph is largely self-contained and develops the harmonic analysis of several complex variables from first principles. The text includes copious examples, explanations, an exhaustive bibliography for further reading, and figures that illustrate the geometric nature of the subject. Each chapter ends with an exercise set. Additionally, each chapter begins with a prologue, introducing the reader to the subject matter that follows; capsules presented in each section give perspective and a spirited launch to the segment; preludes help put ideas into context. Mathematicians and...

7. Korean Conference on Several Complex Variables

CERN Document Server

Byun, Jisoo; Gaussier, Hervé; Hirachi, Kengo; Kim, Kang-Tae; Shcherbina, Nikolay

2015-01-01

This volume includes 28 chapters by authors who are leading researchers in the field worldwide, describing many up-to-date aspects of several complex variables (SCV). These contributions are based upon their presentations at the 10th Korean Conference on Several Complex Variables (KSCV10), held as a satellite conference to the International Congress of Mathematicians (ICM) 2014 in Seoul, Korea. SCV is the term for multidimensional complex analysis, one of the central research areas in mathematics. Studies over time have revealed a variety of rich, intriguing, new knowledge in complex analysis and the geometry of analytic spaces and holomorphic functions which was "hidden" in the case of complex dimension one. These new theories have significant intersections with algebraic geometry, differential geometry, partial differential equations, dynamics, functional analysis and operator theory, and sheaves and cohomology, as well as the traditional analysis of holomorphic functions in all dimensions. This...

8. Variable importance in latent variable regression models

NARCIS (Netherlands)

Kvalheim, O.M.; Arneberg, R.; Bleie, O.; Rajalahti, T.; Smilde, A.K.; Westerhuis, J.A.

2014-01-01

The quality and practical usefulness of a regression model are a function of both interpretability and prediction performance. This work presents some new graphical tools for improved interpretation of latent variable regression models that can also assist in improved algorithms for variable...

9. Lectures on counterexamples in several complex variables

CERN Document Server

Fornæss, John Erik

2007-01-01

Counterexamples are remarkably effective for understanding the meaning, and the limitations, of mathematical results. Fornæss and Stensønes look at some of the major ideas of several complex variables by considering counterexamples to what might seem like reasonable variations or generalizations. The first part of the book reviews some of the basics of the theory, in a self-contained introduction to several complex variables. The counterexamples cover a variety of important topics: the Levi problem, plurisubharmonic functions, Monge-Ampère equations, CR geometry, function theory, and the \bar\partial problem...

10. Function theory of several complex variables

CERN Document Server

Krantz, Steven G

2001-01-01

The theory of several complex variables can be studied from several different perspectives. In this book, Steven Krantz approaches the subject from the point of view of a classical analyst, emphasizing its function-theoretic aspects. He has taken particular care to write the book with the student in mind, with uniformly extensive and helpful explanations, numerous examples, and plentiful exercises of varying difficulty. In the spirit of a student-oriented text, Krantz begins with an introduction to the subject, including an insightful comparison of the analysis of several complex variables with the theory of one complex variable...

11. Applied complex variables for scientists and engineers

CERN Document Server

Kwok, Yue Kuen

2010-01-01

This introduction to complex variable methods begins by carefully defining complex numbers and analytic functions, and proceeds to give accounts of complex integration, Taylor series, singularities, residues and mappings. Both algebraic and geometric tools are employed to provide the greatest understanding, with many diagrams illustrating the concepts introduced. The emphasis is laid on understanding the use of methods, rather than on rigorous proofs. Throughout the text, many of the important theoretical results in complex function theory are followed by relevant and vivid examples in physical sciences. This second edition now contains 350 stimulating exercises of high quality, with solutions given to many of them. Material has been updated and additional proofs on some of the important theorems in complex function theory are now included, e.g. the Weierstrass–Casorati theorem. The book is highly suitable for students wishing to learn the elements of complex analysis in an applied context.

12. Several complex variables and Banach algebras

International Nuclear Information System (INIS)

Allan, G.R.

1976-01-01

This paper aims to present certain applications of the theory of holomorphic functions of several complex variables to the study of commutative Banach algebras. The material falls into the following sections: (A) Introduction to Banach algebras (this will not presuppose any knowledge of the subject); (B) Groups of differential forms (mainly concerned with setting up a useful language); (C) Polynomially convex domains; (D) Holomorphic functional calculus for Banach algebras; (E) Some applications of the functional calculus. (author)

13. Partial differential equations in several complex variables

CERN Document Server

Chen, So-Chin

2001-01-01

This book is intended both as an introductory text and as a reference book for those interested in studying several complex variables in the context of partial differential equations. In the last few decades, significant progress has been made in the fields of Cauchy-Riemann and tangential Cauchy-Riemann operators. This book gives an up-to-date account of the theories for these equations and their applications. The background material in several complex variables is developed in the first three chapters, leading to the Levi problem. The next three chapters are devoted to the solvability and regularity of the Cauchy-Riemann equations using Hilbert space techniques. The authors provide a systematic study of the Cauchy-Riemann equations and the \bar\partial-Neumann problem, including L^2 existence theorems on pseudoconvex domains, 1/2-subelliptic estimates for the \bar\partial-Neumann problems on strongly pseudoconvex domains, global regularity of \bar\partial on more general pseudoconvex domains, boundary ...

14. Equation-free and variable-free modeling for complex/multiscale systems. Coarse-grained computation in science and engineering using fine-grained models

Energy Technology Data Exchange (ETDEWEB)

Kevrekidis, Ioannis G. [Princeton Univ., NJ (United States)

2017-02-01

The work explored the linking of modern developing machine learning techniques (manifold learning and in particular diffusion maps) with traditional PDE modeling/discretization/scientific computation techniques via the equation-free methodology developed by the PI. The result (in addition to several PhD degrees, two of them by CSGF Fellows) was a sequence of strong developments - in part on the algorithmic side, linking data mining with scientific computing, and in part on applications, ranging from PDE discretizations to molecular dynamics and complex network dynamics.

15. Modeling Complex Time Limits

Directory of Open Access Journals (Sweden)

Oleg Svatos

2013-01-01

Full Text Available In this paper we analyze the complexity of time limits found especially in the regulated processes of public administration. First we review the most popular process modeling languages. An example scenario based on current Czech legislation is defined and then captured in the discussed process modeling languages. The analysis shows that contemporary process modeling languages support the capture of time limits only partially. This causes trouble for analysts and unnecessary complexity in the models. Given these unsatisfactory results, we analyze the complexity of time limits in greater detail and outline the lifecycle of a time limit using the multiple dynamic generalizations pattern. As an alternative to the popular process modeling languages, we present the PSD process modeling language, which supports the defined time-limit lifecycles natively and therefore keeps the models simple and easy to understand.

16. Classification of complex polynomial vector fields in one complex variable

DEFF Research Database (Denmark)

Branner, Bodil; Dias, Kealey

2010-01-01

This paper classifies the global structure of monic and centred one-variable complex polynomial vector fields. The classification is achieved by means of combinatorial and analytic data. More specifically, given a polynomial vector field, we construct a combinatorial invariant, describing the topology, and a set of analytic invariants, describing the geometry. Conversely, given admissible combinatorial and analytic data sets, we show using surgery the existence of a unique monic and centred polynomial vector field realizing the given invariants. This is the content of the Structure Theorem, the main result of the paper. This result is an extension and refinement of the classification by Douady et al. (Champs de vecteurs polynomiaux sur C, unpublished manuscript) of the structurally stable polynomial vector fields. We further review some general concepts for completeness and show that vector fields...
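As a minimal numerical companion to the objects in this classification (the cubic below is an illustrative choice, not data from the paper): the equilibria of a monic, centred polynomial vector field ż = P(z) are the zeros of P, and the sign of Re P′ at each zero indicates whether nearby trajectories are attracted or repelled.

```python
import numpy as np

# Monic, centred cubic vector field z' = P(z) with P(z) = z^3 - z;
# "centred" means the z^2 coefficient vanishes. Illustrative example.
P = np.poly1d([1.0, 0.0, -1.0, 0.0])

equilibria = P.roots          # equilibrium points = zeros of P
dP = P.deriv()

# Linearisation: Re P'(z0) < 0 means nearby trajectories are attracted
# to z0; Re P'(z0) > 0 means they are repelled.
stability = [float(np.real(dP(z0))) for z0 in equilibria]
```

For P(z) = z³ − z the equilibria are −1, 0, 1, with 0 attracting and ±1 repelling.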

17. Simulation in Complex Modelling

DEFF Research Database (Denmark)

Nicholas, Paul; Ramsgaard Thomsen, Mette; Tamke, Martin

2017-01-01

This paper will discuss the role of simulation in extended architectural design modelling. As a framing paper, the aim is to present and discuss the role of integrated design simulation and feedback between design and simulation in a series of projects under the Complex Modelling framework. Complex ... performance, engage with high degrees of interdependency and allow the emergence of design agency and feedback between the multiple scales of architectural construction. This paper presents examples of integrated design simulation from a series of projects including Lace Wall, A Bridge Too Far and Inflated Restraint, developed for the research exhibition Complex Modelling, Meldahls Smedie Gallery, Copenhagen, in 2016. Where the direct project aims and outcomes have been reported elsewhere, the aim of this paper is to discuss overarching strategies for working with design-integrated simulation.

18. Modeling Complex Systems

CERN Document Server

Boccara, Nino

2010-01-01

Modeling Complex Systems, 2nd Edition, explores the process of modeling complex systems, providing examples from such diverse fields as ecology, epidemiology, sociology, seismology, and economics. It illustrates how models of complex systems are built and provides indispensable mathematical tools for studying their dynamics. This vital introductory text is useful for advanced undergraduate students in various scientific disciplines, and serves as an important reference book for graduate students and young researchers. This enhanced second edition includes: . -recent research results and bibliographic references -extra footnotes which provide biographical information on cited scientists who have made significant contributions to the field -new and improved worked-out examples to aid a student’s comprehension of the content -exercises to challenge the reader and complement the material Nino Boccara is also the author of Essentials of Mathematica: With Applications to Mathematics and Physics (Springer, 2007).

19. Modeling Complex Systems

International Nuclear Information System (INIS)

Schreckenberg, M

2004-01-01

This book by Nino Boccara presents a compilation of model systems commonly termed 'complex'. It starts with a definition of the systems under consideration and how to build up a model to describe the complex dynamics. The subsequent chapters are devoted to various categories of mean-field type models (differential and recurrence equations, chaos) and of agent-based models (cellular automata, networks and power-law distributions). Each chapter is supplemented by a number of exercises and their solutions. The table of contents looks a little arbitrary, but the author took the most prominent model systems investigated over the years (and up until now there has been no unified theory covering the various aspects of complex dynamics). The model systems are explained by looking at a number of applications in various fields. The book is written as a textbook for interested students as well as serving as a comprehensive reference for experts. It is an ideal source for topics to be presented in a lecture on the dynamics of complex systems. This is the first book on this 'wide' topic and I have long awaited such a book (in fact I planned to write it myself but this is much better than I could ever have written it!). Only section 6 on cellular automata is a little too limited to the author's point of view, and one would have expected more about the famous Domany-Kinzel model (and more accurate citations!). In my opinion this is one of the best textbooks published during the last decade and even experts can learn a lot from it. Hopefully there will be an updated edition after, say, five years, since this field is growing so quickly. The price is too high for students but this, unfortunately, is the normal case today. Nevertheless I think it will be a great success! (book review)

20. Eutrophication Modeling Using Variable Chlorophyll Approach

International Nuclear Information System (INIS)

Abdolabadi, H.; Sarang, A.; Ardestani, M.; Mahjoobi, E.

2016-01-01

In this study, eutrophication was investigated in Lake Ontario to identify the interactions among effective drivers. The complexity of this phenomenon was modeled using a system dynamics approach based on a consideration of constant and variable stoichiometric ratios. The system dynamics approach is a powerful tool for developing object-oriented models to simulate complex phenomena that involve feedback effects. Utilizing stoichiometric ratios is a method for converting the concentrations of state variables. During the physical segmentation of the model, Lake Ontario was divided into two layers, i.e., the epilimnion and hypolimnion, and differential equations were developed for each layer. The model structure included 16 state variables related to phytoplankton, herbivorous zooplankton, carnivorous zooplankton, ammonium, nitrate, dissolved phosphorus, and particulate and dissolved carbon in the epilimnion and hypolimnion over a time horizon of one year. Several verification tests, a Nash-Sutcliffe coefficient close to 1 (0.98), a data correlation coefficient of 0.98, and low standard errors (0.96), indicated that the model performs well. The results revealed that there were significant differences in the concentrations of the state variables between the constant- and variable-stoichiometry simulations. Consequently, the consideration of variable stoichiometric ratios in algae and nutrient concentration simulations may be applied in future modeling studies to enhance the accuracy of the results and reduce the likelihood of inefficient control policies.
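The Nash-Sutcliffe efficiency used above to verify the model can be computed directly from observed and simulated series; the sketch below uses made-up data, and the function name is illustrative.

```python
import numpy as np

def nash_sutcliffe(observed, simulated):
    """Nash-Sutcliffe efficiency: 1 is a perfect fit; 0 means the model
    predicts no better than the mean of the observations."""
    observed = np.asarray(observed, dtype=float)
    simulated = np.asarray(simulated, dtype=float)
    return 1.0 - np.sum((observed - simulated) ** 2) \
               / np.sum((observed - observed.mean()) ** 2)

obs = np.array([2.1, 3.4, 4.0, 5.2, 6.1])               # made-up observations
nse_perfect = nash_sutcliffe(obs, obs)                   # perfect model: 1.0
nse_mean = nash_sutcliffe(obs, np.full(5, obs.mean()))   # mean-only model: 0.0
```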

1. Modeling complexes of modeled proteins.

Science.gov (United States)

Anishchenko, Ivan; Kundrotas, Petras J; Vakser, Ilya A

2017-03-01

Structural characterization of proteins is essential for understanding life processes at the molecular level. However, only a fraction of known proteins have experimentally determined structures. This fraction is even smaller for protein-protein complexes. Thus, structural modeling of protein-protein interactions (docking) primarily has to rely on modeled structures of the individual proteins, which typically are less accurate than the experimentally determined ones. Such "double" modeling is the Grand Challenge of structural reconstruction of the interactome. Yet it remains so far largely untested in a systematic way. We present a comprehensive validation of template-based and free docking on a set of 165 complexes, where each protein model has six levels of structural accuracy, from 1 to 6 Å Cα RMSD. Many template-based docking predictions fall into the acceptable quality category, according to the CAPRI criteria, even for highly inaccurate proteins (5-6 Å RMSD), although the number of such models (and, consequently, the docking success rate) drops significantly for models with RMSD > 4 Å. The results show that the existing docking methodologies can be successfully applied to protein models with a broad range of structural accuracy, and that template-based docking is much less sensitive to inaccuracies of protein models than free docking. Proteins 2017; 85:470-478. © 2016 Wiley Periodicals, Inc.
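The Cα RMSD used above as the accuracy measure is computed after optimal rigid-body superposition of the two structures. The sketch below is a generic Kabsch-algorithm implementation under that assumption, not the CAPRI evaluation pipeline.

```python
import numpy as np

def rmsd_after_superposition(P, Q):
    """RMSD between two (n, 3) coordinate sets (e.g. Ca atoms) after
    optimal rigid-body superposition (Kabsch algorithm)."""
    P = P - P.mean(axis=0)                      # remove translation
    Q = Q - Q.mean(axis=0)
    U, _, Vt = np.linalg.svd(P.T @ Q)
    d = np.sign(np.linalg.det(Vt.T @ U.T))      # guard against reflection
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T     # optimal rotation
    return float(np.sqrt(np.mean(np.sum((P @ R.T - Q) ** 2, axis=1))))

# A rotated and shifted copy of a structure should give RMSD ~ 0.
rng = np.random.default_rng(2)
P = rng.standard_normal((10, 3))
theta = 0.7
Rz = np.array([[np.cos(theta), -np.sin(theta), 0.0],
               [np.sin(theta),  np.cos(theta), 0.0],
               [0.0,            0.0,           1.0]])
Q = P @ Rz.T + np.array([1.0, -2.0, 3.0])
rmsd_same = rmsd_after_superposition(P, Q)
```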

2. Predictive Surface Complexation Modeling

Energy Technology Data Exchange (ETDEWEB)

Sverjensky, Dimitri A. [Johns Hopkins Univ., Baltimore, MD (United States). Dept. of Earth and Planetary Sciences

2016-11-29

Surface complexation plays an important role in the equilibria and kinetics of processes controlling the compositions of soilwaters and groundwaters, the fate of contaminants in groundwaters, and the subsurface storage of CO2 and nuclear waste. Over the last several decades, many dozens of individual experimental studies have addressed aspects of surface complexation that have contributed to an increased understanding of its role in natural systems. However, there has been no previous attempt to develop a model of surface complexation that can be used to link all the experimental studies in order to place them on a predictive basis. Overall, my research has successfully integrated the results of the work of many experimentalists published over several decades. For the first time in studies of the geochemistry of the mineral-water interface, a practical predictive capability for modeling has become available. The predictive correlations developed in my research now enable extrapolations of experimental studies to provide estimates of surface chemistry for systems not yet studied experimentally and for natural and anthropogenically perturbed systems.

3. Polystochastic Models for Complexity

CERN Document Server

Iordache, Octavian

2010-01-01

This book is devoted to complexity understanding and management, considered as the main source of efficiency and prosperity for the next decades. Divided into six chapters, the book begins with a presentation of basic concepts such as complexity, emergence and closure. The second chapter looks at methods and introduces polystochastic models, the wave equation, possibilities and entropy. The third chapter, focusing on physical and chemical systems, analyzes flow-sheet synthesis, cyclic operations of separation, drug delivery systems and entropy production. Biomimetic systems represent the main objective of the fourth chapter. Case studies refer to bio-inspired calculation methods, to the role of artificial genetic codes, neural networks and neural codes for evolutionary calculus and for evolvable circuits as biomimetic devices. The fifth chapter, taking its inspiration from systems sciences and cognitive sciences, looks at engineering design, case-based reasoning methods, failure analysis, and multi-agent manufacturing...

4. Loop quantum cosmology with complex Ashtekar variables

International Nuclear Information System (INIS)

Achour, Jibril Ben; Grain, Julien; Noui, Karim

2015-01-01

We construct and study loop quantum cosmology (LQC) when the Barbero–Immirzi parameter takes the complex value γ=±i. We refer to this new approach to quantum cosmology as complex LQC. This formulation is obtained via an analytic continuation of the Hamiltonian constraint (with no inverse volume corrections) from real γ to γ=±i, in the simple case of a flat FLRW Universe coupled to a massless scalar field with no cosmological constant. For this, we first compute the non-local curvature operator (defined by the trace of the holonomy of the connection around a fundamental plaquette) evaluated in an arbitrary spin j representation, and find a new closed formula for its expression. This allows us to define explicitly a one-parameter family of regularizations of the Hamiltonian constraint in LQC, parametrized by the spin j. It is immediate to see that any spin j regularization leads to a bouncing scenario. Then, motivated in particular by previous results on black hole thermodynamics, we perform the analytic continuation of the Hamiltonian constraint to values of the Barbero–Immirzi parameter given by γ=±i and to spins j=(1/2)(−1+is) where s is real. Even though the area spectrum then becomes continuous, we show that the complex LQC defined in this way also replaces the initial big-bang singularity by a big-bounce. In addition to this, the maximal density and the minimal volume of the Universe are obviously independent of γ. Furthermore, the dynamics before and after the bounce is no longer symmetrical, which makes a clear distinction between these two phases of the evolution of the Universe. (paper)

5. Modeling the Variable Heliopause Location

Science.gov (United States)

Hensley, Kerry

2018-03-01

In 2012, Voyager 1 zipped across the heliopause. Five and a half years later, Voyager 2 still hasn't followed its twin into interstellar space. Can models of the heliopause location help determine why? How Far to the Heliopause? Artist's conception of the heliosphere with the important structures and boundaries labeled. [NASA/Goddard/Walt Feimer] As our solar system travels through the galaxy, the solar outflow pushes against the surrounding interstellar medium, forming a bubble called the heliosphere. The edge of this bubble, the heliopause, is the outermost boundary of our solar system, where the solar wind and the interstellar medium meet. Since the solar outflow is highly variable, the heliopause is constantly moving, with the motion driven by changes in the Sun. NASA's twin Voyager spacecraft were poised to cross the heliopause after completing their tour of the outer planets in the 1980s. In 2012, Voyager 1 registered a sharp increase in the density of interstellar particles, indicating that the spacecraft had passed out of the heliosphere and into the interstellar medium. The slower-moving Voyager 2 was set to pierce the heliopause along a different trajectory, but so far no measurements have shown that the spacecraft has bid farewell to our solar system. In a recent study, a team of scientists led by Haruichi Washimi (Kyushu University, Japan and CSPAR, University of Alabama-Huntsville) argues that models of the heliosphere can help explain this behavior. Because the heliopause location is controlled by factors that vary on many spatial and temporal scales, Washimi and collaborators turn to three-dimensional, time-dependent magnetohydrodynamics simulations of the heliosphere. In particular, they investigate how the position of the heliopause along the trajectories of Voyager 1 and Voyager 2 changes over time. Modeled location of the heliopause along the paths of Voyagers 1 (blue) and 2 (orange). The red star indicates the location at which Voyager...

6. Functions of a complex variable and some of their applications

CERN Document Server

Fuchs, B A; Sneddon, I N; Ulam, S

1961-01-01

Functions of a Complex Variable and Some of Their Applications, Volume 1, discusses the fundamental ideas of the theory of functions of a complex variable. The book is the result of a complete rewriting and revision of a translation of the second (1957) Russian edition. Numerous changes and additions have been made, both in the text and in the solutions of the Exercises. The book begins with a review of arithmetical operations with complex numbers. Separate chapters discuss the fundamentals of complex analysis; the concept of conformal transformations; the most important of the elementary functions...

7. Inferring topologies of complex networks with hidden variables.

Science.gov (United States)

Wu, Xiaoqun; Wang, Weihan; Zheng, Wei Xing

2012-10-01

Network topology plays a crucial role in determining a network's intrinsic dynamics and function, thus understanding and modeling the topology of a complex network will lead to greater knowledge of its evolutionary mechanisms and to a better understanding of its behaviors. In the past few years, topology identification of complex networks has received increasing interest and wide attention. Many approaches have been developed for this purpose, including synchronization-based identification, information-theoretic methods, and intelligent optimization algorithms. However, inferring interaction patterns from observed dynamical time series is still challenging, especially in the absence of knowledge of nodal dynamics and in the presence of system noise. The purpose of this work is to present a simple and efficient approach to inferring the topologies of such complex networks. The proposed approach is called "piecewise partial Granger causality." It measures the cause-effect connections of nonlinear time series influenced by hidden variables. One commonly used testing network, two regular networks with a few additional links, and small-world networks are used to evaluate the performance and illustrate the influence of network parameters on the proposed approach. Application to experimental data further demonstrates the validity and robustness of our method.
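As a rough illustration of the Granger-causality idea underlying the method above (this is plain pairwise Granger causality, without the partialisation against hidden variables that the paper's "piecewise partial Granger causality" adds, and without a significance test), one can compare the prediction-error variance of an autoregressive model of x with and without lagged values of y:

```python
import numpy as np

def _residual_var(target, design):
    # Ordinary least-squares fit, then the variance of the residuals.
    beta, *_ = np.linalg.lstsq(design, target, rcond=None)
    return float((target - design @ beta).var())

def granger_gain(x, y, lag=2):
    # How much adding lagged y reduces the prediction-error variance of
    # an autoregressive model of x; a positive gain suggests that y
    # "Granger-causes" x.
    n = len(x)
    target = x[lag:]
    own = np.column_stack([x[lag - k : n - k] for k in range(1, lag + 1)])
    both = np.column_stack([s[lag - k : n - k]
                            for s in (x, y) for k in range(1, lag + 1)])
    return _residual_var(target, own) - _residual_var(target, both)

rng = np.random.default_rng(0)
y = rng.standard_normal(500)
x = np.roll(y, 1) + 0.1 * rng.standard_normal(500)   # x is driven by y's past

gain_y_to_x = granger_gain(x, y)   # large: y's past predicts x
gain_x_to_y = granger_gain(y, x)   # near zero: x's past does not predict y
```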

8. A One-Layer Recurrent Neural Network for Constrained Complex-Variable Convex Optimization.

Science.gov (United States)

Qin, Sitian; Feng, Jiqiang; Song, Jiahui; Wen, Xingnan; Xu, Chen

2018-03-01

In this paper, based on calculus and the penalty method, a one-layer recurrent neural network is proposed for solving constrained complex-variable convex optimization. It is proved that for any initial point from a given domain, the state of the proposed neural network reaches the feasible region in finite time and finally converges to an optimal solution of the constrained complex-variable convex optimization problem. In contrast to existing neural networks for complex-variable convex optimization, the proposed neural network has lower model complexity and better convergence. Some numerical examples and an application are presented to substantiate the effectiveness of the proposed neural network.
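A minimal sketch of the penalty idea behind such networks (this is plain Wirtinger gradient descent, not the paper's network dynamics, and the target point, step size, and penalty weight are illustrative assumptions): minimise f(z) = |z − a|² over the unit disc |z| ≤ 1 by descending on f plus a quadratic penalty for leaving the feasible set.

```python
import numpy as np

a = 2.0 + 0.0j      # target lies outside the disc (illustrative)
rho = 50.0          # penalty weight
eta = 0.002         # descent step size
z = 0.1 + 0.3j      # arbitrary feasible start

for _ in range(5000):
    # Wirtinger gradient (w.r.t. conj(z)) of the penalised objective:
    # |z - a|^2 contributes (z - a);
    # max(0, |z|^2 - 1)^2 contributes 2 * max(0, |z|^2 - 1) * z.
    grad = (z - a) + rho * 2.0 * max(0.0, abs(z) ** 2 - 1.0) * z
    z -= eta * grad

# The true constrained minimiser is the boundary point nearest a, i.e.
# z = 1; the penalty solution settles just outside the boundary, near 1.
```

With a finite penalty weight the iterate converges close to, but not exactly onto, the constrained optimum; increasing rho tightens the approximation.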

9. Gait variability: methods, modeling and meaning

Directory of Open Access Journals (Sweden)

Hausdorff Jeffrey M

2005-07-01

Full Text Available The study of gait variability, the stride-to-stride fluctuations in walking, offers a complementary way of quantifying locomotion and its changes with aging and disease as well as a means of monitoring the effects of therapeutic interventions and rehabilitation. Previous work has suggested that measures of gait variability may be more closely related to falls, a serious consequence of many gait disorders, than are measures based on the mean values of other walking parameters. The current JNER series presents nine reports on the results of recent investigations into gait variability. One novel method for collecting unconstrained, ambulatory data is reviewed, and a primer on analysis methods is presented along with a heuristic approach to summarizing variability measures. In addition, the first studies of gait variability in animal models of neurodegenerative disease are described, as is a mathematical model of human walking that characterizes certain complex (multifractal) features of the motor control's pattern generator. Another investigation demonstrates that, whereas both healthy older controls and patients with a higher-level gait disorder walk more slowly in reduced lighting, only the latter's stride variability increases. Studies of the effects of dual tasks suggest that the regulation of the stride-to-stride fluctuations in stride width and stride time may be influenced by attention loading and may require cognitive input. Finally, a report of gait variability in over 500 subjects, probably the largest study of this kind, suggests how step width variability may relate to fall risk. Together, these studies provide new insights into the factors that regulate the stride-to-stride fluctuations in walking and pave the way for expanded research into the control of gait and the practical application of measures of gait variability in the clinical setting.
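One of the simplest gait-variability summaries used in such studies is the coefficient of variation of stride time; the stride-time values below are made up for illustration.

```python
import numpy as np

# Illustrative stride-time series (seconds); the values are invented.
stride_times = np.array([1.02, 0.98, 1.05, 1.01, 0.97, 1.03, 0.99, 1.00])

# Coefficient of variation: the stride-to-stride standard deviation as
# a percentage of the mean stride time. A larger CV means a more
# variable gait.
cv_percent = 100.0 * stride_times.std(ddof=1) / stride_times.mean()
```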

10. Complexity Variability Assessment of Nonlinear Time-Varying Cardiovascular Control

Science.gov (United States)

Valenza, Gaetano; Citi, Luca; Garcia, Ronald G.; Taylor, Jessica Noggle; Toschi, Nicola; Barbieri, Riccardo

2017-02-01

The application of complex systems theory to physiology and medicine has provided meaningful information about the nonlinear aspects underlying the dynamics of a wide range of biological processes and their disease-related aberrations. However, no studies have investigated whether meaningful information can be extracted by quantifying second-order moments of time-varying cardiovascular complexity. To this extent, we introduce a novel mathematical framework termed complexity variability, in which the variance of instantaneous Lyapunov spectra estimated over time serves as a reference quantifier. We apply the proposed methodology to four exemplary studies involving disorders which stem from cardiology, neurology and psychiatry: Congestive Heart Failure (CHF), Major Depression Disorder (MDD), Parkinson’s Disease (PD), and Post-Traumatic Stress Disorder (PTSD) patients with insomnia under a yoga training regime. We show that complexity assessments derived from simple time-averaging are not able to discern pathology-related changes in autonomic control, and we demonstrate that between-group differences in measures of complexity variability are consistent across pathologies. Pathological states such as CHF, MDD, and PD are associated with an increased complexity variability when compared to healthy controls, whereas wellbeing derived from yoga in PTSD is associated with lower time-variance of complexity.

11. Statistical screening of input variables in a complex computer code

International Nuclear Information System (INIS)

Krieger, T.J.

1982-01-01

A method is presented for ''statistical screening'' of input variables in a complex computer code. The object is to determine the ''effective'' or important input variables by estimating the relative magnitudes of their associated sensitivity coefficients. This is accomplished by performing a numerical experiment consisting of a relatively small number of computer runs with the code, followed by a statistical analysis of the results. A formula for estimating the sensitivity coefficients is derived. Reference is made to an earlier work in which the method was applied to a complex reactor code with good results.
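The screening idea can be sketched as a small random-sampling experiment followed by a linear regression of the code's output on its inputs, with the fitted coefficients estimating the sensitivities (the "code" below is an illustrative stand-in, not the reactor code from the paper, and this is not the paper's exact estimation formula):

```python
import numpy as np

rng = np.random.default_rng(1)

# Stand-in for the complex computer code: a black box whose three
# inputs have very different influence on the output.
def code(x):
    return 5.0 * x[:, 0] + 0.5 * x[:, 1] + 0.01 * x[:, 2] ** 2

# A relatively small number of random runs...
n_runs = 40
X = rng.uniform(-1.0, 1.0, size=(n_runs, 3))
y = code(X)

# ...followed by a statistical analysis: regress the output on the
# inputs; |fitted coefficient| estimates each sensitivity coefficient.
design = np.column_stack([np.ones(n_runs), X])
beta, *_ = np.linalg.lstsq(design, y, rcond=None)
sensitivities = np.abs(beta[1:])
ranking = np.argsort(sensitivities)[::-1]   # most influential input first
```

Inputs whose estimated sensitivity is negligible (here the third one) can be screened out before more expensive analyses.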

12. Coevolution of variability models and related software artifacts

DEFF Research Database (Denmark)

Passos, Leonardo; Teixeira, Leopoldo; Dinztner, Nicolas

2015-01-01

... models coevolve with other artifact types, we study a large and complex real-world variant-rich software system: the Linux kernel. Specifically, we extract variability-coevolution patterns capturing changes in the variability model of the Linux kernel with subsequent changes in Makefiles and C source...

13. Several Complex Variables are Better than Just One

formally analogous to the definition of a differentiable function of one real variable … We are now in a position to define one of the central concepts of complex … idea would be to read the two expository articles [10, 7], and then proceed to the …

14. Appropriate complexity landscape modeling

NARCIS (Netherlands)

Larsen, Laurel G.; Eppinga, Maarten B.; Passalacqua, Paola; Getz, Wayne M.; Rose, Kenneth A.; Liang, Man

Advances in computing technology, new and ongoing restoration initiatives, concerns about climate change's effects, and the increasing interdisciplinarity of research have encouraged the development of landscape-scale mechanistic models of coupled ecological-geophysical systems. However,

15. New complex variable meshless method for advection-diffusion problems

International Nuclear Information System (INIS)

Wang Jian-Fei; Cheng Yu-Min

2013-01-01

In this paper, an improved complex variable meshless method (ICVMM) for two-dimensional advection-diffusion problems is developed based on the improved complex variable moving least-squares (ICVMLS) approximation. The equivalent functional of two-dimensional advection-diffusion problems is formed, the variational method is used to obtain the equation system, and the penalty method is employed to impose the essential boundary conditions. The difference method for two-point boundary value problems is used to obtain the discrete equations. The corresponding formulas of the ICVMM for advection-diffusion problems are then presented. Two numerical examples with different node distributions are used to validate and investigate the accuracy and efficiency of the new method. It is shown that the ICVMM is very effective for advection-diffusion problems and exhibits good convergence, accuracy, and computational efficiency.

16. Variable structure control of complex systems analysis and design

CERN Document Server

Yan, Xing-Gang; Edwards, Christopher

2017-01-01

This book systematizes recent research work on variable-structure control. It is self-contained, presenting necessary mathematical preliminaries so that the theoretical developments can be easily understood by a broad readership. The text begins with an introduction to the fundamental ideas of variable-structure control pertinent to their application in complex nonlinear systems. In the core of the book, the authors lay out an approach, suitable for a large class of systems, that deals with system uncertainties with nonlinear bounds. Its treatment of complex systems in which limited measurement information is available makes the results developed convenient to implement. Various case-study applications are described, from aerospace, through power systems to river pollution control with supporting simulations to aid the transition from mathematical theory to engineering practicalities. The book addresses systems with nonlinearities, time delays and interconnections and considers issues such as stabilization, o...

17. Linear latent variable models: the lava-package

DEFF Research Database (Denmark)

Holst, Klaus Kähler; Budtz-Jørgensen, Esben

2013-01-01

An R package for specifying and estimating linear latent variable models is presented. The philosophy of the implementation is to separate the model specification from the actual data, which leads to a dynamic and easy way of modeling complex hierarchical structures. Several advanced features are implemented, including robust standard errors for clustered correlated data, multigroup analyses, non-linear parameter constraints, inference with incomplete data, maximum likelihood estimation with censored and binary observations, and instrumental variable estimators. In addition, an extensive simulation …

18. Complexation of Plutonium (IV) With Sulfate At Variable Temperatures

International Nuclear Information System (INIS)

Y. Xia; J.I. Friese; D.A. Moore; P.P. Bachelor; L. Rao

2006-01-01

The complexation of plutonium(IV) with sulfate at variable temperatures has been investigated by a solvent extraction method. A NaBrO3 solution was used as a holding oxidant to maintain the plutonium(IV) oxidation state throughout the experiments. The distribution ratio of Pu(IV) between the organic and aqueous phases was found to decrease as the concentration of sulfate was increased. Stability constants of the 1:1 and 1:2 Pu(IV)-HSO4- complexes, dominant in the aqueous phase, were calculated from the effect of [HSO4-] on the distribution ratio. The enthalpy and entropy of complexation were calculated from the stability constants at different temperatures using the van 't Hoff equation.
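The last step in the record, extracting the enthalpy and entropy from stability constants at several temperatures, is a linear fit of ln K against 1/T via the van 't Hoff relation ln K = -ΔH/(RT) + ΔS/R. A minimal sketch with invented thermodynamic values, not the paper's Pu(IV)-sulfate data:

```python
R = 8.314  # gas constant, J/(mol*K)

# Hypothetical stability constants: ln K generated from invented
# dH = +20 kJ/mol and dS = +150 J/(mol*K), exactly following van 't Hoff.
def ln_K(T, dH=20000.0, dS=150.0):
    return -dH / (R * T) + dS / R

temps = [298.15, 313.15, 328.15, 343.15]
x = [1.0 / T for T in temps]   # 1/T
y = [ln_K(T) for T in temps]   # ln K

# Least-squares line through (1/T, ln K): slope = -dH/R, intercept = dS/R.
n = len(x)
mx, my = sum(x) / n, sum(y) / n
slope = (sum((a - mx) * (b - my) for a, b in zip(x, y))
         / sum((a - mx) ** 2 for a in x))
intercept = my - slope * mx

dH_fit = -slope * R      # enthalpy of complexation, J/mol
dS_fit = intercept * R   # entropy of complexation, J/(mol*K)
```

With real measurements the same fit applies; scatter in ln K then propagates into uncertainties on the slope and intercept.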

19. Independent variable complexity for regional regression of the flow duration curve in ungauged basins

Science.gov (United States)

Fouad, Geoffrey; Skupin, André; Hope, Allen

2016-04-01

The flow duration curve (FDC) is one of the most widely used tools to quantify streamflow. Its percentile flows are often required for water resource applications, but these values must be predicted for ungauged basins with insufficient or no streamflow data. Regional regression is a commonly used approach for predicting percentile flows that involves identifying hydrologic regions and calibrating regression models to each region. The independent variables used to describe the physiographic and climatic setting of the basins are a critical component of regional regression, yet few studies have investigated their effect on resulting predictions. In this study, the complexity of the independent variables needed for regional regression is investigated. Different levels of variable complexity are applied for a regional regression consisting of 918 basins in the US. Both the hydrologic regions and regression models are determined according to the different sets of variables, and the accuracy of resulting predictions is assessed. The different sets of variables include (1) a simple set of three variables strongly tied to the FDC (mean annual precipitation, potential evapotranspiration, and baseflow index), (2) a traditional set of variables describing the average physiographic and climatic conditions of the basins, and (3) a more complex set of variables extending the traditional variables to include statistics describing the distribution of physiographic data and temporal components of climatic data. The latter set of variables is not typically used in regional regression, and is evaluated for its potential to predict percentile flows. The simplest set of only three variables performed similarly to the other more complex sets of variables. Traditional variables used to describe climate, topography, and soil offered little more to the predictions, and the experimental set of variables describing the distribution of basin data in more detail did not improve predictions

20. Epidemic modeling in complex realities.

Science.gov (United States)

Colizza, Vittoria; Barthélemy, Marc; Barrat, Alain; Vespignani, Alessandro

2007-04-01

In our global world, the increasing complexity of social relations and transport infrastructures are key factors in the spread of epidemics. In recent years, the increasing availability of computer power has made it possible both to obtain reliable data quantifying the complexity of the networks on which epidemics may propagate and to develop computational tools able to tackle the analysis of such propagation phenomena. These advances have exposed the limits of homogeneous assumptions and simple spatial diffusion approaches, and stimulated the inclusion of complex features and heterogeneities relevant to the description of epidemic diffusion. In this paper, we review recent progress in integrating complex systems and network analysis with epidemic modelling, and focus on the impact of the various complex features of real systems on the dynamics of epidemic spreading.

1. The Possibility Using the Power Production Function of Complex Variable for Economic Forecasting

Directory of Open Access Journals (Sweden)

2016-09-01

Full Text Available The possibility of dynamic analysis and forecasting of production results using power production functions of complex variables with real coefficients is considered. This model expands the arsenal of instrumental methods and permits multivariate production forecasts that are unattainable with real-variable methods, because functions of complex variables simulate production differently from models of real variables. The coefficients of a power production function of complex variables can be calculated for each statistical observation. This makes it possible to track the change of the coefficients over time, analyze the trend, and predict the coefficient values for a given term, thereby predicting the form of the production function and, with it, the operating results. Thus a model of the production function with variable coefficients is introduced. With this model the inverse forecasting problem can also be solved, such as determining the quantities of labor and capital needed to achieve a desired operating result. The study is based on the principles of the modern methodology of complex-valued economics, one section of which concerns complex-valued production functions. The possibility of economic forecasting is tested on the example of the UK economy, and the resulting predictions are compared with forecasts obtained by other methods, leading to the conclusion that the proposed approach is effective for forecasting at the macro level of production systems. A complex-valued power model of the production function is recommended for the multivariate prediction of sustainable production systems: the global economy, the economies of individual countries, major industries and regions.
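As a toy illustration of the complex-variable idea (not the authors' calibrated model), capital and labour can be packed into a single complex resource K + iL and raised to a complex exponent; the function form and all coefficient values below are invented for demonstration.

```python
import cmath

# Hypothetical complex-valued power production function: capital K and
# labour L combined into one complex resource K + iL, raised to a complex
# exponent a + ib. A, a and b are invented illustrative coefficients.
def production(K, L, A=1.0, a=0.9, b=0.1):
    return A * complex(K, L) ** complex(a, b)

q = production(100.0, 50.0)        # complex-valued "output"
modulus, phase = cmath.polar(q)    # scale of output and its phase
```

In the complex-valued framework, coefficients such as a and b would be re-estimated for each observation period and their trend extrapolated to forecast the function's form.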

2. The complex, variable structure of stationary lines in SS433

International Nuclear Information System (INIS)

Falomo, R.; Boksenberg, A.; Tanzi, E.G.; Tarenghi, M.; Treves, A.

1987-01-01

On 1979 June 3-6, a number of spectra of SS433 were obtained using the UCL Image Photon Counting System on the 3.6-m telescope of the European Southern Observatory, La Silla, Chile. The stationary Hα and He I λλ5875, 6678 and 7065 lines have a complex structure which on June 4-5 exhibited a central feature accompanied by two equally displaced (±1000 km s⁻¹) side components. Variability of the line profile and equivalent width is observed on time-scales as short as a quarter of an hour. (author)

3. Complex, variable structure of stationary lines in SS433

Energy Technology Data Exchange (ETDEWEB)

Falomo, R.; Boksenberg, A.; Tanzi, E.G.; Tarenghi, M.; Treves, A.

1987-01-15

On 1979 June 3-6, a number of spectra of SS433 were obtained using the UCL Image Photon Counting System on the 3.6-m telescope of the European Southern Observatory, La Silla, Chile. The stationary Hα and He I λλ5875, 6678 and 7065 lines have a complex structure which on June 4-5 exhibited a central feature accompanied by two equally displaced (±1000 km s⁻¹) side components. Variability of the line profile and equivalent width is observed on time-scales as short as a quarter of an hour.

4. Complex variables and the Laplace transform for engineers

CERN Document Server

LePage, Wilbur R

2010-01-01

""An excellent text; the best I have found on the subject."" - J. B. Sevart, Department of Mechanical Engineering, University of Wichita""An extremely useful textbook for both formal classes and for self-study."" - Society for Industrial and Applied MathematicsEngineers often do not have time to take a course in complex variable theory as undergraduates, yet is is one of the most important and useful branches of mathematics, with many applications in engineering. This text is designed to remedy that need by supplying graduate engineering students (especially electrical engineering) with a cou

5. Handbook of latent variable and related models

CERN Document Server

Lee, Sik-Yum

2011-01-01

This Handbook covers latent variable models, which are a flexible class of models for modeling multivariate data to explore relationships among observed and latent variables.- Covers a wide class of important models- Models and statistical methods described provide tools for analyzing a wide spectrum of complicated data- Includes illustrative examples with real data sets from business, education, medicine, public health and sociology.- Demonstrates the use of a wide variety of statistical, computational, and mathematical techniques.

6. Comparing flood loss models of different complexity

Science.gov (United States)

Schröter, Kai; Kreibich, Heidi; Vogel, Kristin; Riggelsen, Carsten; Scherbaum, Frank; Merz, Bruno

2013-04-01

Any deliberation on flood risk requires the consideration of potential flood losses. In particular, reliable flood loss models are needed to evaluate the cost-effectiveness of mitigation measures, to assess vulnerability, and to support comparative risk analysis and financial appraisal during and after floods. In recent years, considerable improvements have been made both in the data basis and in the methodological approaches used for the development of flood loss models. Despite this, flood loss models remain an important source of uncertainty, and their temporal and spatial transferability is still limited. This contribution investigates the predictive capability of flood loss models of different complexity in a split-sample, cross-regional validation approach. For this purpose, flood loss models of different complexity, i.e. based on different numbers of explanatory variables, are learned from a set of damage records obtained from a survey after the Elbe flood in 2002. Model predictions are validated against damage records available from surveys after different flood events in the Elbe and Danube river basins in 2002, 2005 and 2006. The models investigated are a stage-damage model, the rule-based model FLEMOps+r, and novel approaches derived using the data-mining techniques of regression trees and Bayesian networks. The Bayesian network approach to flood loss modelling provides attractive additional information concerning the probability distribution of both model predictions and explanatory variables.

7. On sampling and modeling complex systems

International Nuclear Information System (INIS)

Marsili, Matteo; Mastromatteo, Iacopo; Roudi, Yasser

2013-01-01

The study of complex systems is limited by the fact that only a few variables are accessible for modeling and sampling, which are not necessarily the most relevant ones to explain the system behavior. In addition, empirical data typically undersample the space of possible states. We study a generic framework where a complex system is seen as a system of many interacting degrees of freedom, which are known only in part, that optimize a given function. We show that the underlying distribution with respect to the known variables has the Boltzmann form, with a temperature that depends on the number of unknown variables. In particular, when the influence of the unknown degrees of freedom on the known variables is not too irregular, the temperature decreases as the number of variables increases. This suggests that models can be predictable only when the number of relevant variables is less than a critical threshold. Concerning sampling, we argue that the information that a sample contains on the behavior of the system is quantified by the entropy of the frequency with which different states occur. This allows us to characterize the properties of maximally informative samples: within a simple approximation, the most informative frequency size distributions have power law behavior, and Zipf's law emerges at the crossover between the undersampled regime and the regime where the sample contains enough statistics to make inferences about the behavior of the system. These ideas are illustrated in some applications, showing that they can be used to identify relevant variables or to select the most informative representations of data, e.g. in data clustering. (paper)
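The sampling criterion described above, the entropy of the frequency with which different states occur, is straightforward to compute for a discrete sample. A minimal sketch (the toy samples are invented):

```python
import math
from collections import Counter

def state_entropy(sample):
    """Entropy (in nats) of the empirical frequency of states in a sample."""
    n = len(sample)
    counts = Counter(sample)
    return -sum((c / n) * math.log(c / n) for c in counts.values())

# Invented toy samples: a sample dominated by one state carries less
# information about the state space than an evenly spread one.
peaked = ['a'] * 9 + ['b']
spread = ['a', 'b', 'c', 'd', 'e'] * 2
```

Here `state_entropy(spread)` equals log 5, the maximum for five observed states, while the peaked sample scores much lower.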

8. Generalized latent variable modeling multilevel, longitudinal, and structural equation models

CERN Document Server

Skrondal, Anders; Rabe-Hesketh, Sophia

2004-01-01

This book unifies and extends latent variable models, including multilevel or generalized linear mixed models, longitudinal or panel models, item response or factor models, latent class or finite mixture models, and structural equation models.

9. A Core Language for Separate Variability Modeling

DEFF Research Database (Denmark)

Iosif-Lazăr, Alexandru Florin; Wasowski, Andrzej; Schaefer, Ina

2014-01-01

Separate variability modeling adds variability to a modeling language without requiring modifications of the language or the supporting tools. We define a core language for separate variability modeling using a single kind of variation point to define transformations of software artifacts in object… hierarchical dependencies between variation points via copying and flattening. Thus, we reduce a model with intricate dependencies to a flat executable model transformation consisting of simple unconditional local variation points. The core semantics is extremely concise: it boils down to two operational rules…

10. Latent variable models are network models.

Science.gov (United States)

Molenaar, Peter C M

2010-06-01

Cramer et al. present an original and interesting network perspective on comorbidity and contrast this perspective with a more traditional interpretation of comorbidity in terms of latent variable theory. My commentary focuses on the relationship between the two perspectives; that is, it aims to qualify the presumed contrast between interpretations in terms of networks and latent variables.

11. Computational models of complex systems

CERN Document Server

Dabbaghian, Vahid

2014-01-01

Computational and mathematical models provide us with opportunities to investigate the complexities of real-world problems. They allow us to apply our best analytical methods to define problems in a clearly mathematical manner and exhaustively test our solutions before committing expensive resources. This is made possible by assuming parameters in a bounded environment, allowing for controllable experimentation that is not always possible in live scenarios. For example, simulation of computational models allows the testing of theories in a manner that is both fundamentally deductive and experimental in nature. The main ingredients for such research ideas come from multiple disciplines, and the importance of interdisciplinary research is well recognized by the scientific community. This book provides a window onto the novel endeavours of the research communities, highlighting the value of computational modelling as a research tool when investigating complex systems. We hope that the reader...

12. Complexity-aware simple modeling.

Science.gov (United States)

2018-02-26

Mathematical models continue to be essential for deepening our understanding of biology. At one extreme, simple or small-scale models help delineate general biological principles. However, the parsimony of detail in these models, as well as their assumption of modularity and insulation, makes them inaccurate for describing quantitative features. At the other extreme, large-scale and detailed models can quantitatively recapitulate a phenotype of interest, but have to rely on many unknown parameters, making them often difficult to parse mechanistically and to use for extracting general principles. We discuss some examples of a new approach, complexity-aware simple modeling, that can bridge the gap between the small-scale and large-scale approaches. Copyright © 2018 Elsevier Ltd. All rights reserved.

13. Complex Networks in Psychological Models

Science.gov (United States)

Wedemann, R. S.; Carvalho, L. S. A. V. D.; Donangelo, R.

We develop schematic, self-organizing, neural-network models to describe mechanisms associated with mental processes, by a neurocomputational substrate. These models are examples of real world complex networks with interesting general topological structures. Considering dopaminergic signal-to-noise neuronal modulation in the central nervous system, we propose neural network models to explain development of cortical map structure and dynamics of memory access, and unify different mental processes into a single neurocomputational substrate. Based on our neural network models, neurotic behavior may be understood as an associative memory process in the brain, and the linguistic, symbolic associative process involved in psychoanalytic working-through can be mapped onto a corresponding process of reconfiguration of the neural network. The models are illustrated through computer simulations, where we varied dopaminergic modulation and observed the self-organizing emergent patterns at the resulting semantic map, interpreting them as different manifestations of mental functioning, from psychotic through to normal and neurotic behavior, and creativity.

14. Squeezed states and Hermite polynomials in a complex variable

International Nuclear Information System (INIS)

Ali, S. Twareque; Górska, K.; Horzela, A.; Szafraniec, F. H.

2014-01-01

Following the lines of the recent paper of J.-P. Gazeau and F. H. Szafraniec [J. Phys. A: Math. Theor. 44, 495201 (2011)], we construct here three types of coherent states, related to the Hermite polynomials in a complex variable which are orthogonal with respect to a non-rotationally invariant measure. We investigate relations between these coherent states and obtain the relationship between them and the squeezed states of quantum optics. We also obtain a second realization of the canonical coherent states in the Bargmann space of analytic functions, in terms of a squeezed basis. All this is done in the flavor of the classical approach of V. Bargmann [Commun. Pure Appl. Math. 14, 187 (1961)].

15. Geometric theory of functions of a complex variable

CERN Document Server

Goluzin, G M

1969-01-01

This book is based on lectures on geometric function theory given by the author at Leningrad State University. It studies univalent conformal mapping of simply and multiply connected domains, conformal mapping of multiply connected domains onto a disk, applications of conformal mapping to the study of interior and boundary properties of analytic functions, and general questions of a geometric nature dealing with analytic functions. The second Russian edition upon which this English translation is based differs from the first mainly in the expansion of two chapters and in the addition of a long survey of more recent developments. The book is intended for readers who are already familiar with the basics of the theory of functions of one complex variable.

16. Complex fluids modeling and algorithms

CERN Document Server

Saramito, Pierre

2016-01-01

This book presents a comprehensive overview of the modeling of complex fluids, including many common substances, such as toothpaste, hair gel, mayonnaise, liquid foam, cement and blood, which cannot be described by Navier-Stokes equations. It also offers an up-to-date mathematical and numerical analysis of the corresponding equations, as well as several practical numerical algorithms and software solutions for the approximation of the solutions. It discusses industrial (molten plastics, forming process), geophysical (mud flows, volcanic lava, glaciers and snow avalanches), and biological (blood flows, tissues) modeling applications. This book is a valuable resource for undergraduate students and researchers in applied mathematics, mechanical engineering and physics.

17. Modeling Coast Redwood Variable Retention Management Regimes

Science.gov (United States)

John-Pascal Berrill; Kevin O' Hara

2007-01-01

Variable retention is a flexible silvicultural system that provides forest managers with an alternative to clearcutting. While much of the standing volume is removed in one harvesting operation, residual stems are retained to provide structural complexity and wildlife habitat functions, or to accrue volume before removal during subsequent stand entries. The residual...

18. Model complexity control for hydrologic prediction

NARCIS (Netherlands)

Schoups, G.; Van de Giesen, N.C.; Savenije, H.H.G.

2008-01-01

A common concern in hydrologic modeling is overparameterization of complex models given limited and noisy data. This leads to problems of parameter nonuniqueness and equifinality, which may negatively affect prediction uncertainties. A systematic way of controlling model complexity is therefore

19. Stress Intensity Factor for Interface Cracks in Bimaterials Using Complex Variable Meshless Manifold Method

Directory of Open Access Journals (Sweden)

Hongfen Gao

2014-01-01

This paper describes the application of the complex variable meshless manifold method (CVMMM) to stress intensity factor analyses of structures containing interface cracks between dissimilar materials. A discontinuous function and the near-tip asymptotic displacement functions are added to the CVMMM approximation using the framework of complex variable moving least-squares (CVMLS) approximation. This enables the domain to be modeled by CVMMM without explicitly meshing the crack surfaces. The enriched crack-tip functions are chosen as those that span the asymptotic displacement fields for an interfacial crack. The complex stress intensity factors for bimaterial interfacial cracks were numerically evaluated using the method. Good agreement between the numerical results and the reference solutions for benchmark interfacial crack problems is realized.

20. Spatial variability and parametric uncertainty in performance assessment models

International Nuclear Information System (INIS)

Pensado, Osvaldo; Mancillas, James; Painter, Scott; Tomishima, Yasuo

2011-01-01

The problem of defining an appropriate treatment of distribution functions (which could represent spatial variability or parametric uncertainty) is examined based on a generic performance assessment model for a high-level waste repository. The generic model incorporated source term models available in GoldSim®, the TDRW code for contaminant transport in sparse fracture networks with a complex fracture-matrix interaction process, and a biosphere dose model known as BDOSE™. Using the GoldSim framework, several Monte Carlo sampling approaches and transport conceptualizations were evaluated to explore the effect of various treatments of spatial variability and parametric uncertainty on dose estimates. Results from a model employing a representative source and ensemble-averaged pathway properties were compared to results from a model allowing for stochastic variation of transport properties along streamline segments (i.e., explicit representation of spatial variability within a Monte Carlo realization). We concluded that the sampling approach and the definition of an ensemble representative do influence consequence estimates. In the examples analyzed in this paper, approaches considering limited variability of a transport resistance parameter along a streamline increased the frequency of fast pathways resulting in relatively high dose estimates, while those allowing for broad variability along streamlines increased the frequency of 'bottlenecks' reducing dose estimates. On this basis, simplified approaches with limited consideration of variability may suffice for intended uses of the performance assessment model, such as evaluation of site safety. (author)

1. A binary logistic regression model with complex sampling design of ...

African Journals Online (AJOL)

2017-09-03

Sep 3, 2017 … Bi-variable and multi-variable binary logistic regression models with a complex sampling design were fitted. … Data were entered into STATA-12 and analyzed using SPSS-21. …

2. Galactic models with variable spiral structure

International Nuclear Information System (INIS)

James, R.A.; Sellwood, J.A.

1978-01-01

A series of three-dimensional computer simulations of disc galaxies has been run in which the self-consistent potential of the disc stars is supplemented by that arising from a small uniform Population II sphere. The models show variable spiral structure, which is more pronounced for thin discs. In addition, the thin discs form weak bars. In one case variable spiral structure associated with this bar has been seen. The relaxed discs are cool outside resonance regions. (author)

3. Gaussian Mixture Model of Heart Rate Variability

Science.gov (United States)

Costa, Tommaso; Boccignone, Giuseppe; Ferraro, Mario

2012-01-01

Heart rate variability (HRV) is an important measure of sympathetic and parasympathetic functions of the autonomic nervous system and a key indicator of cardiovascular condition. This paper proposes a novel method to investigate HRV, namely by modelling it as a linear combination of Gaussians. Results show that three Gaussians are enough to describe the stationary statistics of heart variability and to provide a straightforward interpretation of the HRV power spectrum. Comparisons have also been made with synthetic data generated from different physiologically based models, showing the plausibility of the Gaussian mixture parameters. PMID:22666386
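The modelling step, fitting a linear combination of Gaussians to interbeat-interval data, can be sketched with a small expectation-maximization loop. This is a generic 1D EM sketch, not the authors' procedure: it fits two invented, well-separated synthetic modes (the paper itself finds three Gaussians suffice for real HRV).

```python
import math
import random

def gauss_pdf(x, mu, sigma):
    return math.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * math.sqrt(2 * math.pi))

def fit_gmm(data, k, iters=100):
    # crude initialisation: spread the means across the data range
    lo, hi = min(data), max(data)
    mus = [lo + (j + 0.5) * (hi - lo) / k for j in range(k)]
    sigmas = [(hi - lo) / (2 * k)] * k
    weights = [1.0 / k] * k
    for _ in range(iters):
        # E-step: responsibility of each component for each point
        resp = []
        for x in data:
            p = [w * gauss_pdf(x, m, s) for w, m, s in zip(weights, mus, sigmas)]
            tot = sum(p)
            resp.append([pi / tot for pi in p])
        # M-step: re-estimate weights, means and standard deviations
        for j in range(k):
            nj = sum(r[j] for r in resp)
            weights[j] = nj / len(data)
            mus[j] = sum(r[j] * x for r, x in zip(resp, data)) / nj
            var = sum(r[j] * (x - mus[j]) ** 2 for r, x in zip(resp, data)) / nj
            sigmas[j] = max(math.sqrt(var), 1e-6)
    return weights, mus, sigmas

# Invented RR-interval-like data (seconds) with two well-separated modes.
random.seed(1)
data = ([random.gauss(0.8, 0.02) for _ in range(300)] +
        [random.gauss(1.1, 0.03) for _ in range(300)])
weights, mus, sigmas = fit_gmm(data, k=2)
```

The fitted means recover the two synthetic modes; on real RR-interval series one would set k=3, as the paper reports.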

4. Nonparametric Bayesian Modeling of Complex Networks

DEFF Research Database (Denmark)

Schmidt, Mikkel Nørgaard; Mørup, Morten

2013-01-01

Modeling structure in complex networks using Bayesian nonparametrics makes it possible to specify flexible model structures and infer the adequate model complexity from the observed data. This article provides a gentle introduction to nonparametric Bayesian modeling of complex networks: using an infinite mixture model as a running example, we go through the steps of deriving the model as an infinite limit of a finite parametric model, inferring the model parameters by Markov chain Monte Carlo, and checking the model's fit and predictive performance. We explain how advanced nonparametric models …

5. Confounding of three binary-variables counterfactual model

OpenAIRE

Liu, Jingwei; Hu, Shuang

2011-01-01

Confounding in a three-binary-variable counterfactual model is discussed in this paper. Depending on the relationship between the control variable and the covariate, we investigate three counterfactual models: the control variable is independent of the covariate, the control variable affects the covariate, and the covariate affects the control variable. Using ancillary information based on conditional independence hypotheses, the sufficient conditions...

6. Modeling Complex Nesting Structures in International Business Research

DEFF Research Database (Denmark)

Nielsen, Bo Bernhard; Nielsen, Sabina

2013-01-01

While hierarchical random coefficient models (RCM) are often used for the analysis of multilevel phenomena, IB issues often result in more complex nested structures. This paper illustrates how cross-nested multilevel modeling allowing for predictor variables and cross-level interactions at multiple (crossed) levels...

7. On the growth estimates of entire functions of double complex variables

Directory of Open Access Journals (Sweden)

Sanjib Datta

2017-08-01

Recently, Datta et al. (2016) introduced the idea of the relative type and relative weak type of entire functions of two complex variables with respect to another entire function of two complex variables, and proved some related growth properties. In this paper, we further study some growth properties of entire functions of two complex variables on the basis of their relative types and relative weak types as introduced by Datta et al. (2016).

8. Variable selection and model choice in geoadditive regression models.

Science.gov (United States)

Kneib, Thomas; Hothorn, Torsten; Tutz, Gerhard

2009-06-01

Model choice and variable selection are issues of major concern in practical regression analyses, arising in many biometric applications such as habitat suitability analyses, where the aim is to identify the influence of potentially many environmental conditions on certain species. We describe regression models for breeding bird communities that facilitate both model choice and variable selection, by a boosting algorithm that works within a class of geoadditive regression models comprising spatial effects, nonparametric effects of continuous covariates, interaction surfaces, and varying coefficients. The major modeling components are penalized splines and their bivariate tensor product extensions. All smooth model terms are represented as the sum of a parametric component and a smooth component with one degree of freedom to obtain a fair comparison between the model terms. A generic representation of the geoadditive model allows us to devise a general boosting algorithm that automatically performs model choice and variable selection.
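The boosting-based selection this record describes can be illustrated, at a much-reduced scale, by component-wise L2-boosting with plain linear base-learners. The paper's actual base-learners are penalized splines and tensor-product surfaces; the function below and its data are an illustrative sketch, not the authors' implementation:

```python
import numpy as np

def componentwise_boosting(X, y, n_steps=200, nu=0.1):
    """Component-wise L2-boosting sketch: at each step, fit every single
    covariate to the current residuals and update only the best one.
    Covariates that are never selected keep a zero coefficient, so the
    procedure performs variable selection as a by-product."""
    n, p = X.shape
    intercept = y.mean()
    coef = np.zeros(p)
    resid = y - intercept
    for _ in range(n_steps):
        # least-squares slope of each covariate alone against the residuals
        betas = (X.T @ resid) / (X ** 2).sum(axis=0)
        # residual sum of squares of each one-component fit
        sse = ((resid[:, None] - X * betas) ** 2).sum(axis=0)
        j = np.argmin(sse)                  # best-fitting component
        coef[j] += nu * betas[j]            # shrunken update
        resid = resid - nu * X[:, j] * betas[j]
    return intercept, coef
```

Covariates whose coefficient stays near zero after boosting are effectively deselected, which is how this family of algorithms combines model fitting with variable selection.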

9. Natural climate variability in a coupled model

International Nuclear Information System (INIS)

Zebiak, S.E.; Cane, M.A.

1990-01-01

Multi-century simulations with a simplified coupled ocean-atmosphere model are described. These simulations reveal an impressive range of variability on decadal and longer time scales, in addition to the dominant interannual El Niño/Southern Oscillation signal that the model originally was designed to simulate. Based on a very large sample of century-long simulations, it is nonetheless possible to identify distinct model parameter sensitivities that are described here in terms of selected indices. Preliminary experiments motivated by general circulation model results for increasing greenhouse gases suggest a definite sensitivity to model global warming. While these results are not definitive, they strongly suggest that coupled air-sea dynamics figure prominently in global change and must be included in models for reliable predictions.

10. Assessing the accuracy and stability of variable selection methods for random forest modeling in ecology

Science.gov (United States)

Random forest (RF) modeling has emerged as an important statistical learning method in ecology due to its exceptional predictive performance. However, for large and complex ecological datasets there is limited guidance on variable selection methods for RF modeling. Typically, e...

11. Estimating Catchment-Scale Snowpack Variability in Complex Forested Terrain, Valles Caldera National Preserve, NM

Science.gov (United States)

Harpold, A. A.; Brooks, P. D.; Biederman, J. A.; Swetnam, T.

2011-12-01

Difficulty estimating snowpack variability across complex forested terrain currently hinders the prediction of water resources in the semi-arid Southwestern U.S. Catchment-scale estimates of snowpack variability are necessary for addressing ecological, hydrological, and water resources issues, but are often interpolated from a small number of point-scale observations. In this study, we used LiDAR-derived distributed datasets to investigate how elevation, aspect, topography, and vegetation interact to control catchment-scale snowpack variability. The study area is the Redondo massif in the Valles Caldera National Preserve, NM, a resurgent dome that varies from 2500 to 3430 m and drains from all aspects. Mean LiDAR-derived snow depths from four catchments (2.2 to 3.4 km^2) draining different aspects of the Redondo massif varied by 30%, despite similar mean elevations and mixed conifer forest cover. To better quantify this variability in snow depths we performed a multiple linear regression (MLR) at a 7.3 by 7.3 km study area (5 × 10^6 snow depth measurements) comprising the four catchments. The MLR showed that elevation explained 45% of the variability in snow depths across the study area, aspect explained 18% (dominated by N-S aspect), and vegetation 2% (canopy density and height). This linear relationship was not transferable to the catchment scale, however, where additional MLR analyses showed the influence of aspect and elevation differed between the catchments. The strong influence of North-South aspect in most catchments indicated that solar radiation is an important control on snow depth variability. To explore the role of solar radiation, a model was used to generate winter solar forcing index (SFI) values based on the local and remote topography. The SFI was able to explain a large amount of snow depth variability in areas with similar elevation and aspect. Finally, the SFI was modified to include the effects of shading from vegetation (in and out of
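The kind of variance partitioning reported above (elevation 45%, aspect 18%, vegetation 2%) comes from comparing the R² of regressions on different predictor sets. Everything below, including the variable names and synthetic data, is a hypothetical illustration rather than the study's LiDAR data:

```python
import numpy as np

def r_squared(X, y):
    """R^2 of an ordinary least-squares fit (intercept included).
    X: (n, p) predictor matrix, y: (n,) response."""
    A = np.column_stack([np.ones(len(y)), X])
    beta, *_ = np.linalg.lstsq(A, y, rcond=None)
    resid = y - A @ beta
    return 1.0 - resid.var() / y.var()

# Synthetic stand-ins (invented for illustration): snow depth driven
# mostly by elevation, partly by aspect, barely by canopy density.
rng = np.random.default_rng(0)
n = 2000
elevation = rng.uniform(2500, 3430, n)
aspect = rng.uniform(-1, 1, n)          # crude proxy for N-S exposure
canopy = rng.uniform(0, 1, n)
depth = (0.002 * (elevation - 2500) + 0.4 * aspect
         + 0.05 * canopy + 0.3 * rng.standard_normal(n))

r2_elev = r_squared(elevation[:, None], depth)
r2_full = r_squared(np.column_stack([elevation, aspect, canopy]), depth)
```

The increment from `r2_elev` to `r2_full` is the additional variance explained once aspect and vegetation enter the model.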

12. A Model for Positively Correlated Count Variables

DEFF Research Database (Denmark)

Møller, Jesper; Rubak, Ege Holger

2010-01-01

An α-permanental random field is briefly speaking a model for a collection of non-negative integer valued random variables with positive associations. Though such models possess many appealing probabilistic properties, many statisticians seem unaware of α-permanental random fields...... and their potential applications. The purpose of this paper is to summarize useful probabilistic results, study stochastic constructions and simulation techniques, and discuss some examples of α-permanental random fields. This should provide a useful basis for discussing the statistical aspects in future work....

13. Spatiotemporal modes of climatic variability: building blocks of complex networks?

Czech Academy of Sciences Publication Activity Database

Vejmelka, Martin; Hlinka, Jaroslav; Hartman, David; Paluš, Milan

2012-01-01

Roč. 14, - (2012), s. 14275 ISSN 1607-7962. [European Geosciences Union General Assembly 2012. 22.04.2012-27.04.2012, Vienna] R&D Projects: GA ČR GCP103/11/J068 Institutional support: RVO:67985807 Keywords : climate variability * dimensionality reduction * principal component analysis * surrogate data * climate network Subject RIV: BB - Applied Statistics, Operational Research

14. Fluid Mechanics and Complex Variable Theory: Getting Past the 19th Century

Science.gov (United States)

Newton, Paul K.

2017-01-01

The subject of fluid mechanics is a rich, vibrant, and rapidly developing branch of applied mathematics. Historically, it has developed hand-in-hand with the elegant subject of complex variable theory. The Westmont College NSF-sponsored workshop on the revitalization of complex variable theory in the undergraduate curriculum focused partly on…

15. Environmental versus demographic variability in stochastic predator–prey models

International Nuclear Information System (INIS)

Dobramysl, U; Täuber, U C

2013-01-01

In contrast to the neutral population cycles of the deterministic mean-field Lotka–Volterra rate equations, including spatial structure and stochastic noise in models for predator–prey interactions yields complex spatio-temporal structures associated with long-lived erratic population oscillations. Environmental variability in the form of quenched spatial randomness in the predation rates results in more localized activity patches. Our previous study showed that population fluctuations in rare favorable regions in turn cause a remarkable increase in the asymptotic densities of both predators and prey. Very intriguing features are found when variable interaction rates are affixed to individual particles rather than lattice sites. Stochastic dynamics with demographic variability in conjunction with inheritable predation efficiencies generate non-trivial time evolution for the predation rate distributions, yet with overall essentially neutral optimization. (paper)

16. Hafnium(IV) complexation with oxalate at variable temperatures

Energy Technology Data Exchange (ETDEWEB)

Friend, Mitchell T.; Wall, Nathalie A. [Washington State Univ., Pullman, WA (United States). Dept. of Chemistry

2017-08-01

Appropriate management of fission products in the reprocessing of spent nuclear fuel (SNF) is crucial in developing advanced reprocessing schemes. The addition of aqueous phase complexing agents can prevent the co-extraction of these fission products. A solvent extraction technique was used to study the complexation of Hf(IV) - an analog to fission product Zr(IV) - with oxalate at 15, 25, and 35 °C in 1 M HClO{sub 4} utilizing a {sup 175+181}Hf radiotracer. The mechanism of the solvent extraction system of 10{sup -5} M Hf(IV) in 1 M HClO{sub 4} to thenoyltrifluoroacetone (TTA) in toluene demonstrated a 4{sup th}-power dependence in both TTA and H{sup +}, with Hf(TTA){sub 4} the only extractable species. The equilibrium constant for the extraction of Hf(TTA){sub 4} was determined to be log K{sub ex}=7.67±0.07 (25±1 °C, 1 M HClO{sub 4}). The addition of oxalate to the aqueous phase decreased the distribution ratio, indicating aqueous Hf(IV)-oxalate complex formation. Polynomial fits to the distribution data identified the formation of Hf(ox){sup 2+} and Hf(ox){sub 2(aq)} and their stability constants were measured at 15, 25, and 35 °C in 1 M HClO{sub 4}. van't Hoff analysis was used to calculate Δ{sub r}G, Δ{sub r}H, and Δ{sub r}S for these species. Stability constants were observed to increase at higher temperature, an indication that Hf(IV)-oxalate complexation is endothermic and driven by entropy.

17. Complex variables a physical approach with applications and Matlab

CERN Document Server

Krantz, Steven G

2007-01-01

PREFACEBASIC IDEAS Complex ArithmeticAlgebraic and Geometric PropertiesThe Exponential and ApplicationsHOLOMORPHIC AND HARMONIC FUNCTIONS Holomorphic FunctionsHolomorphic and Harmonic Functions Real and Complex Line Integrals Complex DifferentiabilityThe LogarithmTHE CAUCHY THEORY The Cauchy Integral TheoremVariants of the Cauchy Formula The Limitations of the Cauchy FormulaAPPLICATIONS OF THE CAUCHY THEORY The Derivatives of a Holomorphic FunctionThe Zeros of a Holomorphic FunctionISOLATED SINGULARITIES Behavior near an Isolated SingularityExpansion around Singular PointsExamples of Laurent ExpansionsThe Calculus of ResiduesApplications to the Calculation of IntegralsMeromorphic FunctionsTHE ARGUMENT PRINCIPLE Counting Zeros and PolesLocal Geometry of Functions Further Results on Zeros The Maximum PrincipleThe Schwarz LemmaTHE GEOMETRIC THEORY The Idea of a Conformal Mapping Mappings of the DiscLinear Fractional Transformations The Riemann Mapping Theorem Conformal Mappings of AnnuliA Compendium of Useful Co...

18. A Variable Flow Modelling Approach To Military End Strength Planning

Science.gov (United States)

2016-12-01

function. The MLRPS is more complex than the variable flow model as it has to cater for a force structure that is much larger than just the MT branch...essential positions in a Ship’s complement, or by the biggest current deficit in forecast end strength. The model can be adjusted to cater for any of these...is unlikely that the RAN will be able to cater for such an increase in hires, so this scenario is not likely to solve their problem. Each transition

19. Complex state variable- and disturbance observer-based current controllers for AC drives

DEFF Research Database (Denmark)

Dal, Mehmet; Teodorescu, Remus; Blaabjerg, Frede

2013-01-01

In vector-controlled AC drives, the design of current controller is usually based on a machine model defined in synchronous frame coordinate, where the drive performance may be degraded by both the variation of the machine parameters and the cross-coupling between the d- and q-axes components...... of the stator current. In order to improve the current control performance an alternative current control strategy was proposed previously aiming to avoid the undesired cross-coupling and non-linearities between the state variables. These effects are assumed as disturbances arisen in the closed-loop path...... of the parameter and the cross-coupling effect. Moreover, it provides a better performance, smooth and low noisy operation with respect to the complex variable controller....

20. Variable Denticity in Carboxylate Binding to the Uranyl Coordination Complexes

International Nuclear Information System (INIS)

Groenewold, G.S.; De Jong, Wibe A.; Oomens, Jos; Van Stipdonk, Michael J.

2010-01-01

Tris-carboxylate complexes of the uranyl (UO2)2+ cation with acetate and benzoate were generated using electrospray ionization mass spectrometry, and then isolated in a Fourier transform ion cyclotron resonance mass spectrometer. Wavelength-selective infrared multiple photon dissociation (IRMPD) of the tris-acetatouranyl anion resulted in a redox elimination of an acetate radical, which was used to generate an IR spectrum that consisted of six prominent absorption bands. These were interpreted with the aid of density functional theory calculations in terms of symmetric and antisymmetric -CO2 stretches of both the monodentate and bidentate acetate, CH3 bending and umbrella vibrations, and a uranyl O-U-O asymmetric stretch. The comparison of the calculated and measured IR spectra indicated that the tris-acetate complex contained two acetate ligands bound in a bidentate fashion, while the third acetate was monodentate. In similar fashion, the tris-benzoate uranyl anion was formed and photodissociated by loss of a benzoate radical, enabling measurement of the infrared spectrum that was in close agreement with that calculated for a structure containing one monodentate, and two bidentate benzoate ligands.

1. Framework for Modelling Multiple Input Complex Aggregations for Interactive Installations

DEFF Research Database (Denmark)

2012-01-01

on fuzzy logic and provides a method for variably balancing interaction and user input with the intention of the artist or director. An experimental design is presented, demonstrating an intuitive interface for parametric modelling of a complex aggregation function. The aggregation function unifies...

2. Osteosarcoma models : understanding complex disease

NARCIS (Netherlands)

2012-01-01

A mesenchymal stem cell (MSC) based osteosarcoma model was established. The model provided evidence for an MSC origin of osteosarcoma. Normal MSCs transformed spontaneously into osteosarcoma-like cells, which was always accompanied by genomic instability and loss of the Cdkn2a locus. Accordingly, loss of

3. Variable complexity online sequential extreme learning machine, with applications to streamflow prediction

Science.gov (United States)

Lima, Aranildo R.; Hsieh, William W.; Cannon, Alex J.

2017-12-01

In situations where new data arrive continually, online learning algorithms are computationally much less costly than batch learning ones in maintaining the model up-to-date. The extreme learning machine (ELM), a single hidden layer artificial neural network with random weights in the hidden layer, is solved by linear least squares, and has an online learning version, the online sequential ELM (OSELM). As more data become available during online learning, information on the longer time scale becomes available, so ideally the model complexity should be allowed to change, but the number of hidden nodes (HN) remains fixed in OSELM. A variable complexity VC-OSELM algorithm is proposed to dynamically add or remove HN in the OSELM, allowing the model complexity to vary automatically as online learning proceeds. The performance of VC-OSELM was compared with OSELM in daily streamflow predictions at two hydrological stations in British Columbia, Canada, with VC-OSELM significantly outperforming OSELM in mean absolute error, root mean squared error and Nash-Sutcliffe efficiency at both stations.
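A minimal sketch of the baseline OSELM (fixed hidden-layer size; the VC-OSELM node add/remove logic from the paper is not reproduced here) shows the recursive least-squares update that makes online learning cheap. The class and method names are hypothetical:

```python
import numpy as np

class OSELM:
    """Online sequential extreme learning machine (sketch).
    Random, fixed hidden layer; output weights updated by
    recursive least squares as data chunks arrive."""

    def __init__(self, n_in, n_hidden, seed=0):
        rng = np.random.default_rng(seed)
        self.W = rng.standard_normal((n_in, n_hidden))  # random input weights
        self.b = rng.standard_normal(n_hidden)          # random biases
        self.beta = None   # output weights (solved, not trained)
        self.P = None      # running inverse of H^T H

    def _h(self, X):
        return np.tanh(X @ self.W + self.b)   # hidden-layer activations

    def fit_initial(self, X, y):
        """Batch least squares on the initial chunk (small ridge term
        keeps the inverse well defined)."""
        H = self._h(X)
        self.P = np.linalg.inv(H.T @ H + 1e-6 * np.eye(H.shape[1]))
        self.beta = self.P @ H.T @ y

    def partial_fit(self, X, y):
        """Recursive least-squares update for a newly arrived chunk."""
        H = self._h(X)
        K = np.linalg.inv(np.eye(len(X)) + H @ self.P @ H.T)
        self.P = self.P - self.P @ H.T @ K @ H @ self.P
        self.beta = self.beta + self.P @ H.T @ (y - H @ self.beta)

    def predict(self, X):
        return self._h(X) @ self.beta
```

Because only the output weights are updated, each `partial_fit` costs a small matrix inverse rather than a full retrain; the VC-OSELM extension additionally grows or prunes the hidden layer as more data arrive.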

4. Error-in-variables models in calibration

Science.gov (United States)

Lira, I.; Grientschnig, D.

2017-12-01

In many calibration operations, the stimuli applied to the measuring system or instrument under test are derived from measurement standards whose values may be considered to be perfectly known. In that case, it is assumed that calibration uncertainty arises solely from inexact measurement of the responses, from imperfect control of the calibration process and from the possible inaccuracy of the calibration model. However, the premise that the stimuli are completely known is never strictly fulfilled and in some instances it may be grossly inadequate. Then, error-in-variables (EIV) regression models have to be employed. In metrology, these models have been approached mostly from the frequentist perspective. In contrast, not much guidance is available on their Bayesian analysis. In this paper, we first present a brief summary of the conventional statistical techniques that have been developed to deal with EIV models in calibration. We then proceed to discuss the alternative Bayesian framework under some simplifying assumptions. Through a detailed example about the calibration of an instrument for measuring flow rates, we provide advice on how the user of the calibration function should employ the latter framework for inferring the stimulus acting on the calibrated device when, in use, a certain response is measured.
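As a small frequentist counterpoint to the Bayesian treatment discussed in this record, Deming regression is a classic closed-form EIV estimator for a straight-line calibration function when both stimulus and response carry measurement error. The function name and data are illustrative:

```python
import numpy as np

def deming_fit(x, y, delta=1.0):
    """Deming (errors-in-variables) straight-line fit.
    delta is the assumed ratio var(err_y) / var(err_x); assumes the
    sample covariance of x and y is nonzero."""
    mx, my = x.mean(), y.mean()
    sxx = ((x - mx) ** 2).mean()
    syy = ((y - my) ** 2).mean()
    sxy = ((x - mx) * (y - my)).mean()
    slope = (syy - delta * sxx
             + np.sqrt((syy - delta * sxx) ** 2 + 4 * delta * sxy ** 2)
             ) / (2 * sxy)
    return slope, my - slope * mx
```

With `delta = 1` this is orthogonal regression; as `delta` grows (error-free stimuli) the slope tends to the ordinary least-squares slope `sxy / sxx`.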

5. Thermodynamic modeling of complex systems

DEFF Research Database (Denmark)

Liang, Xiaodong

after an oil spill. Engineering thermodynamics could be applied in the state-of-the-art sonar products through advanced artificial technology, if the speed of sound, solubility and density of oil-seawater systems could be satisfactorily modelled. The addition of methanol or glycols into unprocessed well...... is successfully applied to model the phase behaviour of water, chemical and hydrocarbon (oil) containing systems with newly developed pure component parameters for water and chemicals and characterization procedures for petroleum fluids. The performance of the PCSAFT EOS on liquid-liquid equilibria of water...... with hydrocarbons has been under debate for some years. An interactive step-wise procedure is proposed to fit the model parameters for small associating fluids by taking the liquid-liquid equilibrium data into account. It is still far away from a simple task to apply PC-SAFT in routine PVT simulations and phase...

6. Role models for complex networks

Science.gov (United States)

Reichardt, J.; White, D. R.

2007-11-01

We present a framework for automatically decomposing (“block-modeling”) the functional classes of agents within a complex network. These classes are represented by the nodes of an image graph (“block model”) depicting the main patterns of connectivity and thus functional roles in the network. Using a first principles approach, we derive a measure for the fit of a network to any given image graph allowing objective hypothesis testing. From the properties of an optimal fit, we derive how to find the best fitting image graph directly from the network and present a criterion to avoid overfitting. The method can handle both two-mode and one-mode data, directed and undirected as well as weighted networks and allows for different types of links to be dealt with simultaneously. It is non-parametric and computationally efficient. The concepts of structural equivalence and modularity are found as special cases of our approach. We apply our method to the world trade network and analyze the roles individual countries play in the global economy.

7. Preliminary Multi-Variable Parametric Cost Model for Space Telescopes

Science.gov (United States)

Stahl, H. Philip; Hendrichs, Todd

2010-01-01

This slide presentation reviews the creation of a preliminary multi-variable cost model for the contract costs of making a space telescope. There is discussion of the methodology for collecting the data, definition of the statistical analysis methodology, single-variable model results, testing of historical models, and an introduction of the multi-variable models.

8. Modeling variability in porescale multiphase flow experiments

Science.gov (United States)

Ling, Bowen; Bao, Jie; Oostrom, Mart; Battiato, Ilenia; Tartakovsky, Alexandre M.

2017-07-01

Microfluidic devices and porescale numerical models are commonly used to study multiphase flow in biological, geological, and engineered porous materials. In this work, we perform a set of drainage and imbibition experiments in six identical microfluidic cells to study the reproducibility of multiphase flow experiments. We observe significant variations in the experimental results, which are smaller during the drainage stage and larger during the imbibition stage. We demonstrate that these variations are due to sub-porescale geometry differences in microcells (because of manufacturing defects) and variations in the boundary condition (i.e., fluctuations in the injection rate inherent to syringe pumps). Computational simulations are conducted using commercial software STAR-CCM+, both with constant and randomly varying injection rates. Stochastic simulations are able to capture variability in the experiments associated with the varying pump injection rate.

9. Modelling the structure of complex networks

DEFF Research Database (Denmark)

Herlau, Tue

networks has been independently studied as mathematical objects in their own right. As such, there has been both an increased demand for statistical methods for complex networks as well as a quickly growing mathematical literature on the subject. In this dissertation we explore aspects of modelling complex....... The next chapters will treat some of the various symmetries, representer theorems and probabilistic structures often deployed in the modelling of complex networks, the construction of sampling methods and various network models. The introductory chapters will serve to provide context for the included written...

10. How to get rid of W: a latent variables approach to modelling spatially lagged variables

NARCIS (Netherlands)

Folmer, H.; Oud, J.

2008-01-01

In this paper we propose a structural equation model (SEM) with latent variables to model spatial dependence. Rather than using the spatial weights matrix W, we propose to use latent variables to represent spatial dependence and spillover effects, of which the observed spatially lagged variables are

11. How to get rid of W : a latent variables approach to modelling spatially lagged variables

NARCIS (Netherlands)

Folmer, Henk; Oud, Johan

2008-01-01

In this paper we propose a structural equation model (SEM) with latent variables to model spatial dependence. Rather than using the spatial weights matrix W, we propose to use latent variables to represent spatial dependence and spillover effects, of which the observed spatially lagged variables are

12. Computational Modeling of Complex Protein Activity Networks

NARCIS (Netherlands)

Schivo, Stefano; Leijten, Jeroen; Karperien, Marcel; Post, Janine N.; Prignet, Claude

2017-01-01

Because of the numerous entities interacting, the complexity of the networks that regulate cell fate makes it impossible to analyze and understand them using the human brain alone. Computational modeling is a powerful method to unravel complex systems. We recently described the development of a

13. Models of complex attitude systems

DEFF Research Database (Denmark)

Sørensen, Bjarne Taulo

Existing research on public attitudes towards agricultural production systems is largely descriptive, abstracting from the processes through which members of the general public generate their evaluations of such systems. The present paper adopts a systems perspective on such evaluations...... that evaluative affect propagates through the system in such a way that the system becomes evaluatively consistent and operates as a schema for the generation of evaluative judgments. In the empirical part of the paper, the causal structure of an attitude system from which people derive their evaluations of pork...... search algorithms and structural equation models. The results suggest that evaluative judgments of the importance of production system attributes are generated in a schematic manner, driven by personal value orientations. The effect of personal value orientations was strong and largely unmediated......

14. Relation between task complexity and variability of procedure progression during an emergency operation

International Nuclear Information System (INIS)

Kim, Yochan; Park, Jinkyun; Jung, Wondea

2013-01-01

Highlights: • The relation between task complexity and the variability of procedure progression was investigated. • The two quantitative measures, TACOM and VPP, were applied in this study. • Task complexity was positively related to the operator's procedural variability. • The VPP measure can be useful for explaining the operator's behaviors. - Abstract: In this study, the relation between task complexity and variability of procedure progression during an emergency operation was investigated by comparing two quantitative measures. To this end, the TACOM measure and the VPP measure were applied to evaluate the complexity of tasks and the variability of procedure progression, respectively. The TACOM scores and VPP scores were obtained for 60 tasks in the OPERA database, and a correlation analysis between the two measures and a multiple regression analysis between the sub-measures of the TACOM measure and the VPP measure were conducted. The results showed that the TACOM measure is positively associated with the VPP measure, and that the abstraction hierarchy complexity mainly affected the variability among the sub-measures of TACOM. From these findings, it was concluded that task complexity is related to an operator's procedural variability and that the VPP measure can be useful for explaining the operator's behaviors.
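The correlation analysis between TACOM and VPP scores described above can be sketched with a plain Pearson coefficient; the score arrays below are invented stand-ins, not OPERA data:

```python
import numpy as np

def pearson_r(a, b):
    """Pearson correlation coefficient between two score arrays."""
    a = np.asarray(a, dtype=float)
    b = np.asarray(b, dtype=float)
    az = (a - a.mean()) / a.std()   # standardize each array
    bz = (b - b.mean()) / b.std()
    return float((az * bz).mean())

# Hypothetical TACOM and VPP scores for a handful of tasks:
tacom = [2.1, 3.4, 4.0, 4.8, 5.5]
vpp = [0.10, 0.22, 0.31, 0.35, 0.48]
r = pearson_r(tacom, vpp)
```

A value of `r` near +1 would indicate the positive association between task complexity and procedural variability that the study reports.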

15. Variability in Second Language Learning: The Roles of Individual Differences, Learning Conditions, and Linguistic Complexity

Science.gov (United States)

Tagarelli, Kaitlyn M.; Ruiz, Simón; Vega, José Luis Moreno; Rebuschat, Patrick

2016-01-01

Second language learning outcomes are highly variable, due to a variety of factors, including individual differences, exposure conditions, and linguistic complexity. However, exactly how these factors interact to influence language learning is unknown. This article examines the relationship between these three variables in language learners.…

16. Bayesian modeling of measurement error in predictor variables

NARCIS (Netherlands)

Fox, Gerardus J.A.; Glas, Cornelis A.W.

2003-01-01

It is shown that measurement error in predictor variables can be modeled using item response theory (IRT). The predictor variables, that may be defined at any level of an hierarchical regression model, are treated as latent variables. The normal ogive model is used to describe the relation between

17. Modeling Psychological Attributes in Psychology – An Epistemological Discussion: Network Analysis vs. Latent Variables

Science.gov (United States)

Guyon, Hervé; Falissard, Bruno; Kop, Jean-Luc

2017-01-01

Network Analysis is considered as a new method that challenges Latent Variable models in inferring psychological attributes. With Network Analysis, psychological attributes are derived from a complex system of components without the need to call on any latent variables. But the ontological status of psychological attributes is not adequately defined with Network Analysis, because a psychological attribute is both a complex system and a property emerging from this complex system. The aim of this article is to reappraise the legitimacy of latent variable models by engaging in an ontological and epistemological discussion on psychological attributes. Psychological attributes relate to the mental equilibrium of individuals embedded in their social interactions, as robust attractors within complex dynamic processes with emergent properties, distinct from physical entities located in precise areas of the brain. Latent variables thus possess legitimacy, because the emergent properties can be conceptualized and analyzed on the sole basis of their manifestations, without exploring the upstream complex system. However, in opposition with the usual Latent Variable models, this article is in favor of the integration of a dynamic system of manifestations. Latent Variables models and Network Analysis thus appear as complementary approaches. New approaches combining Latent Network Models and Network Residuals are certainly a promising new way to infer psychological attributes, placing psychological attributes in an inter-subjective dynamic approach. Pragmatism-realism appears as the epistemological framework required if we are to use latent variables as representations of psychological attributes. PMID:28572780

18. Mesoscale spatiotemporal variability in a complex host-parasite system influenced by intermediate host body size

Directory of Open Access Journals (Sweden)

Sara M. Rodríguez

2017-08-01

Full Text Available Background: Parasites are essential components of natural communities, but the factors that generate skewed distributions of parasite occurrences and abundances across host populations are not well understood. Methods: Here, we analyse at a seascape scale the spatiotemporal relationships of parasite exposure and host body-size with the proportion of infected hosts (i.e., prevalence) and aggregation of parasite burden across ca. 150 km of the coast and over 22 months. We predicted that the effects of parasite exposure on prevalence and aggregation are dependent on host body-sizes. We used an indirect host-parasite interaction in which migratory seagulls, sandy-shore molecrabs, and an acanthocephalan worm constitute the definitive hosts, intermediate hosts, and endoparasite, respectively. In such complex systems, increments in the abundance of definitive hosts imply increments in intermediate hosts’ exposure to the parasite’s dispersive stages. Results: Linear mixed-effects models showed a significant, albeit highly variable, positive relationship between seagull density and prevalence. This relationship was stronger for small (cephalothorax length >15 mm) than large molecrabs (<15 mm). Independently of seagull density, large molecrabs carried significantly more parasites than small molecrabs. The analysis of the variance-to-mean ratio of per capita parasite burden showed no relationship between seagull density and mean parasite aggregation across host populations. However, the amount of unexplained variability in aggregation was strikingly higher in larger than smaller intermediate hosts. This unexplained variability was driven by a decrease in the mean-variance scaling in heavily infected large molecrabs. Conclusions: These results show complex interdependencies between extrinsic and intrinsic population attributes on the structure of host-parasite interactions. We suggest that parasite accumulation—a characteristic of indirect host

19. Mesoscale spatiotemporal variability in a complex host-parasite system influenced by intermediate host body size.

Science.gov (United States)

Rodríguez, Sara M; Valdivia, Nelson

2017-01-01

Parasites are essential components of natural communities, but the factors that generate skewed distributions of parasite occurrences and abundances across host populations are not well understood. Here, we analyse at a seascape scale the spatiotemporal relationships of parasite exposure and host body-size with the proportion of infected hosts (i.e., prevalence) and aggregation of parasite burden across ca. 150 km of the coast and over 22 months. We predicted that the effects of parasite exposure on prevalence and aggregation are dependent on host body-sizes. We used an indirect host-parasite interaction in which migratory seagulls, sandy-shore molecrabs, and an acanthocephalan worm constitute the definitive hosts, intermediate hosts, and endoparasite, respectively. In such complex systems, increments in the abundance of definitive hosts imply increments in intermediate hosts' exposure to the parasite's dispersive stages. Linear mixed-effects models showed a significant, albeit highly variable, positive relationship between seagull density and prevalence. This relationship was stronger for small (cephalothorax length >15 mm) than large molecrabs (<15 mm). The analysis of the variance-to-mean ratio of per capita parasite burden showed no relationship between seagull density and mean parasite aggregation across host populations. However, the amount of unexplained variability in aggregation was strikingly higher in larger than smaller intermediate hosts. This unexplained variability was driven by a decrease in the mean-variance scaling in heavily infected large molecrabs. These results show complex interdependencies between extrinsic and intrinsic population attributes on the structure of host-parasite interactions. We suggest that parasite accumulation-a characteristic of indirect host-parasite interactions-and subsequent increasing mortality rates over ontogeny underpin size-dependent host-parasite dynamics.

20. Using a Budyko Derived Index to Evaluate the Internal Hydrological Variability of Catchments in Complex Terrain

Science.gov (United States)

Dominguez, M.

2017-12-01

Headwater catchments in complex terrain typically exhibit significant variations in microclimatic conditions across slopes. This microclimatic variability, in turn, modifies land surface properties, presumably altering the hydrologic dynamics of these catchments. The extent to which differences in microclimate and land cover dictate the partition of water and energy fluxes within a catchment is still poorly understood. In this study, we assess the effects of aspect, elevation and latitude (the principal factors that define microclimate conditions) on the hydrologic behavior of the hillslopes within catchments with complex terrain. Using a distributed hydrologic model on a number of catchments at different latitudes, where data are available for calibration and validation, we estimate the different components of the water balance to obtain the aridity index (AI = PET/P) and the evaporative index (EI = AET/P) of each slope for a number of years. We use Budyko's curve as a framework to characterize the inter-annual variability in the hydrologic response of the hillslopes in the studied catchments, developing a hydrologic sensitivity index (HSi) based on the relative change in Budyko's curve components (HSi = ΔAI/ΔEI). With this method, when the HSi values of a given hillslope are larger than 1 the hydrologic behavior of that part of the catchment is considered sensitive to changes in climatic conditions, while values approaching 0 would indicate the opposite. We use this approach as a diagnostic tool to discern the effect of aspect, elevation, and latitude on the hydrologic regime of the slopes in complex terrain catchments and to try to explain observed patterns of land cover conditions on these types of catchments.
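
The two ratios behind this record's index are simple enough to sketch directly. A minimal illustration (the numbers and variable names are invented, not taken from the study):

```python
# Hedged sketch of the Budyko-based indices: AI = PET/P, EI = AET/P,
# HSi = change in AI divided by change in EI between two periods.
# All numbers below are invented for illustration.

def aridity_index(pet, p):
    """AI = potential evapotranspiration / precipitation."""
    return pet / p

def evaporative_index(aet, p):
    """EI = actual evapotranspiration / precipitation."""
    return aet / p

def hydrologic_sensitivity(ai_1, ei_1, ai_2, ei_2):
    """HSi = dAI / dEI; values above 1 suggest a climate-sensitive hillslope."""
    return (ai_2 - ai_1) / (ei_2 - ei_1)

# A wet year and a dry year on the same hypothetical hillslope (mm/yr).
ai_wet = aridity_index(pet=900.0, p=1200.0)      # 0.75
ei_wet = evaporative_index(aet=600.0, p=1200.0)  # 0.50
ai_dry = aridity_index(pet=950.0, p=800.0)       # 1.1875
ei_dry = evaporative_index(aet=520.0, p=800.0)   # 0.65

hsi = hydrologic_sensitivity(ai_wet, ei_wet, ai_dry, ei_dry)
print(round(hsi, 3))  # about 2.917, i.e. HSi > 1: climate-sensitive
```

By construction, a hillslope whose aridity index shifts much more than its evaporative index between two periods yields HSi > 1 and would be flagged as climate-sensitive under the scheme described above.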

1. Modeling Musical Complexity: Commentary on Eerola (2016)

Directory of Open Access Journals (Sweden)

Joshua Albrecht

2016-07-01

In his paper, "Expectancy violation and information-theoretic models of melodic complexity," Eerola compares a number of models that correlate musical features of monophonic melodies with participant ratings of perceived melodic complexity. He finds that fairly strong results can be achieved using several different approaches to modeling perceived melodic complexity. The data used in this study are gathered from several previously published studies that use widely different types of melodies, including isochronous folk melodies, isochronous 12-tone rows, and rhythmically complex African folk melodies. This commentary first briefly reviews the article's method and main findings, then suggests a rethinking of the theoretical framework of the study. Finally, some of the methodological issues of the study are discussed.

2. Holonomic functions of several complex variables and singularities of anisotropic Ising n-fold integrals

Science.gov (United States)

Boukraa, S.; Hassani, S.; Maillard, J.-M.

2012-12-01

Focusing on examples associated with holonomic functions, we try to bring new ideas on how to look at phase transitions, for which the critical manifolds are not points but curves depending on a spectral variable, or even fill higher dimensional submanifolds. Lattice statistical mechanics often provides a natural (holonomic) framework to perform singularity analysis with several complex variables that would, in the most general mathematical framework, be too complex, or simply could not be defined. In a learn-by-example approach, considering several Picard-Fuchs systems of two-variables ‘above’ Calabi-Yau ODEs, associated with double hypergeometric series, we show that D-finite (holonomic) functions are actually a good framework for finding properly the singular manifolds. The singular manifolds are found to be genus-zero curves. We then analyze the singular algebraic varieties of quite important holonomic functions of lattice statistical mechanics, the n-fold integrals χ(n), corresponding to the n-particle decomposition of the magnetic susceptibility of the anisotropic square Ising model. In this anisotropic case, we revisit a set of so-called Nickelian singularities that turns out to be a two-parameter family of elliptic curves. We then find the first set of non-Nickelian singularities for χ(3) and χ(4), that also turns out to be rational or elliptic curves. We underline the fact that these singular curves depend on the anisotropy of the Ising model, or, equivalently, that they depend on the spectral parameter of the model. This has important consequences on the physical nature of the anisotropic χ(n)s which appear to be highly composite objects. We address, from a birational viewpoint, the emergence of families of elliptic curves, and that of Calabi-Yau manifolds on such problems. We also address the question of singularities of non-holonomic functions with a discussion on the accumulation of these singular curves for the non-holonomic anisotropic full

3. Holonomic functions of several complex variables and singularities of anisotropic Ising n-fold integrals

International Nuclear Information System (INIS)

Boukraa, S; Hassani, S; Maillard, J-M

2012-01-01

Focusing on examples associated with holonomic functions, we try to bring new ideas on how to look at phase transitions, for which the critical manifolds are not points but curves depending on a spectral variable, or even fill higher dimensional submanifolds. Lattice statistical mechanics often provides a natural (holonomic) framework to perform singularity analysis with several complex variables that would, in the most general mathematical framework, be too complex, or simply could not be defined. In a learn-by-example approach, considering several Picard–Fuchs systems of two-variables ‘above’ Calabi–Yau ODEs, associated with double hypergeometric series, we show that D-finite (holonomic) functions are actually a good framework for finding properly the singular manifolds. The singular manifolds are found to be genus-zero curves. We then analyze the singular algebraic varieties of quite important holonomic functions of lattice statistical mechanics, the n-fold integrals χ(n), corresponding to the n-particle decomposition of the magnetic susceptibility of the anisotropic square Ising model. In this anisotropic case, we revisit a set of so-called Nickelian singularities that turns out to be a two-parameter family of elliptic curves. We then find the first set of non-Nickelian singularities for χ(3) and χ(4), that also turns out to be rational or elliptic curves. We underline the fact that these singular curves depend on the anisotropy of the Ising model, or, equivalently, that they depend on the spectral parameter of the model. This has important consequences on the physical nature of the anisotropic χ(n)s which appear to be highly composite objects. We address, from a birational viewpoint, the emergence of families of elliptic curves, and that of Calabi–Yau manifolds on such problems. We also address the question of singularities of non-holonomic functions with a discussion on the accumulation of these singular curves for the non

4. Modeling complex work systems - method meets reality

NARCIS (Netherlands)

van der Veer, Gerrit C.; Hoeve, Machteld; Lenting, Bert

1996-01-01

Modeling an existing task situation is often a first phase in the (re)design of information systems. For complex systems design, this model should consider both the people and the organization involved, the work, and situational aspects. Groupware Task Analysis (GTA) as part of a method for the

5. Fatigue modeling of materials with complex microstructures

DEFF Research Database (Denmark)

Qing, Hai; Mishnaevsky, Leon

2011-01-01

with the phenomenological model of fatigue damage growth. As a result, the fatigue lifetime of materials with complex structures can be determined as a function of the parameters of their structures. As an example, the fatigue lifetimes of wood modeled as a cellular material with multilayered, fiber reinforced walls were...

6. Drag coefficient Variability and Thermospheric models

Science.gov (United States)

Moe, Kenneth

Satellite drag coefficients depend upon a variety of factors: The shape of the satellite, its altitude, the eccentricity of its orbit, the temperature and mean molecular mass of the ambient atmosphere, and the time in the sunspot cycle. At altitudes where the mean free path of the atmospheric molecules is large compared to the dimensions of the satellite, the drag coefficients can be determined from the theory of free-molecule flow. The dependence on altitude is caused by the concentration of atomic oxygen which plays an important role by its ability to adsorb on the satellite surface and thereby affect the energy loss of molecules striking the surface. The eccentricity of the orbit determines the satellite velocity at perigee, and therefore the energy of the incident molecules relative to the energy of adsorption of atomic oxygen atoms on the surface. The temperature of the ambient atmosphere determines the extent to which the random thermal motion of the molecules influences the momentum transfer to the satellite. The time in the sunspot cycle affects the ambient temperature as well as the concentration of atomic oxygen at a particular altitude. Tables and graphs will be used to illustrate the variability of drag coefficients. Before there were any measurements of gas-surface interactions in orbit, Izakov and Cook independently made an excellent estimate that the drag coefficient of satellites of compact shape would be 2.2. That numerical value, independent of altitude, was used by Jacchia to construct his model from the early measurements of satellite drag. Consequently, there is an altitude dependent bias in the model. From the sparse orbital experiments that have been done, we know that the molecules which strike satellite surfaces rebound in a diffuse angular distribution with an energy loss given by the energy accommodation coefficient. As more evidence accumulates on the energy loss, more realistic drag coefficients are being calculated. These improved drag
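
The historical Cd = 2.2 estimate mentioned in this record can be put to work in a back-of-the-envelope drag calculation. A sketch with made-up (but plausible) thermospheric values; none of the numbers come from the record:

```python
# Illustrative only: drag force on a compact satellite using the historical
# Cd = 2.2 estimate discussed in the record. Density, speed and area are
# made-up example values, not measurements.

def drag_force(rho, v, area, cd=2.2):
    """Aerodynamic drag F = 0.5 * rho * Cd * A * v^2, in newtons."""
    return 0.5 * rho * cd * area * v ** 2

rho = 5e-13   # kg/m^3, rough thermospheric density near 400 km altitude
v = 7700.0    # m/s, typical low-Earth-orbit speed
area = 1.0    # m^2, cross-sectional area of a compact satellite

f = drag_force(rho, v, area)
print(f)  # a few tens of micronewtons
```

Tiny as this force is, it acts continuously, which is why an altitude-dependent bias in Cd propagates into density models fit from orbital decay.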

7. Updating the debate on model complexity

Science.gov (United States)

Simmons, Craig T.; Hunt, Randall J.

2012-01-01

As scientists who are trying to understand a complex natural world that cannot be fully characterized in the field, how can we best inform the society in which we live? This founding context was addressed in a special session, “Complexity in Modeling: How Much is Too Much?” convened at the 2011 Geological Society of America Annual Meeting. The session had a variety of thought-provoking presentations—ranging from philosophy to cost-benefit analyses—and provided some areas of broad agreement that were not evident in discussions of the topic in 1998 (Hunt and Zheng, 1999). The session began with a short introduction during which model complexity was framed borrowing from an economic concept, the Law of Diminishing Returns, and an example of enjoyment derived by eating ice cream. Initially, there is increasing satisfaction gained from eating more ice cream, to a point where the gain in satisfaction starts to decrease, ending at a point when the eater sees no value in eating more ice cream. A traditional view of model complexity is similar—understanding gained from modeling can actually decrease if models become unnecessarily complex. However, oversimplified models—those that omit important aspects of the problem needed to make a good prediction—can also limit and confound our understanding. Thus, the goal of all modeling is to find the “sweet spot” of model sophistication—regardless of whether complexity was added sequentially to an overly simple model or collapsed from an initial highly parameterized framework that uses mathematics and statistics to attain an optimum (e.g., Hunt et al., 2007). Thus, holistic parsimony is attained, incorporating “as simple as possible,” as well as the equally important corollary “but no simpler.”

8. Complex models of nodal nuclear data

International Nuclear Information System (INIS)

Dufek, Jan

2011-01-01

During the core simulations, nuclear data are required at various nodal thermal-hydraulic and fuel burnup conditions. The nodal data are also partially affected by thermal-hydraulic and fuel burnup conditions in surrounding nodes as these change the neutron energy spectrum in the node. Therefore, the nodal data are functions of many parameters (state variables), and the more state variables are considered by the nodal data models, the more accurate and flexible the models get. The existing table and polynomial regression models, however, cannot reflect the data dependences on many state variables. As for the table models, the number of mesh points (and necessary lattice calculations) grows exponentially with the number of variables. As for the polynomial regression models, the number of possible multivariate polynomials exceeds the limits of existing selection algorithms that should identify a few dozen of the most important polynomials. Also, the standard scheme of lattice calculations is not convenient for modelling the data dependences on various burnup conditions since it performs only a single or few burnup calculations at fixed nominal conditions. We suggest a new efficient algorithm for selecting the most important multivariate polynomials for the polynomial regression models so that dependences on many state variables can be considered. We also present a new scheme for lattice calculations where a large number of burnup histories are accomplished at varied nodal conditions. The number of lattice calculations being performed and the number of polynomials being analysed are controlled and minimised while building the nodal data models of a required accuracy. (author)
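
The scaling problem the record describes, table models growing exponentially with the number of state variables while the pool of candidate polynomial terms explodes combinatorially, can be illustrated with a quick count (mesh size and maximum degree are arbitrary example choices, not values from the paper):

```python
# Counting illustration of the record's two scaling problems:
# table models need mesh^n_vars points, and the pool of candidate
# monomials up to total degree d in n variables is C(n + d, d).

from math import comb

def table_points(n_vars, mesh_per_var=10):
    """Mesh points (hence lattice calculations) a table model would need."""
    return mesh_per_var ** n_vars

def candidate_monomials(n_vars, max_degree=4):
    """Monomials of n_vars state variables up to total degree max_degree."""
    return comb(n_vars + max_degree, max_degree)

for n in (2, 5, 10):
    print(n, table_points(n), candidate_monomials(n))
# Table points explode exponentially; the candidate pool also grows fast,
# which is why a selection algorithm must pick out a few dominant terms.
```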

9. Complexity, Modeling, and Natural Resource Management

Directory of Open Access Journals (Sweden)

Paul Cilliers

2013-09-01

This paper contends that natural resource management (NRM) issues are, by their very nature, complex and that both scientists and managers in this broad field will benefit from a theoretical understanding of complex systems. It starts off by presenting the core features of a view of complexity that not only deals with the limits to our understanding, but also points toward a responsible and motivating position. Everything we do involves explicit or implicit modeling, and as we can never have comprehensive access to any complex system, we need to be aware both of what we leave out as we model and of the implications of the choice of our modeling framework. One vantage point is never sufficient, as complexity necessarily implies that multiple (independent) conceptualizations are needed to engage the system adequately. We use two South African cases as examples of complex systems - restricting the case narratives mainly to the biophysical domain associated with NRM issues - that make the point that even the behavior of the biophysical subsystems themselves is already complex. From the insights into complex systems discussed in the first part of the paper and the lessons emerging from the way these cases have been dealt with in reality, we extract five interrelated generic principles for practicing science and management in complex NRM environments. These principles are then further elucidated using four further South African case studies - organized as two contrasting pairs - and now focusing on the more difficult organizational and social side, comparing the human organizational endeavors in managing such systems.

10. Multifaceted Modelling of Complex Business Enterprises.

Science.gov (United States)

Chakraborty, Subrata; Mengersen, Kerrie; Fidge, Colin; Ma, Lin; Lassen, David

2015-01-01

We formalise and present a new generic multifaceted complex system approach for modelling complex business enterprises. Our method has a strong focus on integrating the various data types available in an enterprise, which represent the diverse perspectives of various stakeholders. We explain the challenges faced and define a novel approach to converting diverse data types into usable Bayesian probability forms. The data types that can be integrated include historic data, survey data, management planning data, expert knowledge and incomplete data. The structural complexities of the complex system modelling process, based on various decision contexts, are also explained along with a solution. This new application of complex system models as a management tool for decision making is demonstrated using a railway transport case study. The case study demonstrates how the new approach can be utilised to develop a customised decision support model for a specific enterprise. Various decision scenarios are also provided to illustrate the versatility of the decision model at different phases of enterprise operations such as planning and control.

11. Multifaceted Modelling of Complex Business Enterprises

Science.gov (United States)

2015-01-01

We formalise and present a new generic multifaceted complex system approach for modelling complex business enterprises. Our method has a strong focus on integrating the various data types available in an enterprise, which represent the diverse perspectives of various stakeholders. We explain the challenges faced and define a novel approach to converting diverse data types into usable Bayesian probability forms. The data types that can be integrated include historic data, survey data, management planning data, expert knowledge and incomplete data. The structural complexities of the complex system modelling process, based on various decision contexts, are also explained along with a solution. This new application of complex system models as a management tool for decision making is demonstrated using a railway transport case study. The case study demonstrates how the new approach can be utilised to develop a customised decision support model for a specific enterprise. Various decision scenarios are also provided to illustrate the versatility of the decision model at different phases of enterprise operations such as planning and control. PMID:26247591

12. Modeling OPC complexity for design for manufacturability

Science.gov (United States)

Gupta, Puneet; Kahng, Andrew B.; Muddu, Swamy; Nakagawa, Sam; Park, Chul-Hong

2005-11-01

Increasing design complexity in sub-90nm designs results in increased mask complexity and cost. Resolution enhancement techniques (RET) such as assist feature addition, phase shifting (attenuated PSM) and aggressive optical proximity correction (OPC) help in preserving feature fidelity in silicon but increase mask complexity and cost. The increase in data volume with rising mask complexity is becoming prohibitive for manufacturing. Mask cost is determined by mask write time and mask inspection time, which are directly related to the complexity of features printed on the mask. Aggressive RETs increase complexity by adding assist features and by modifying existing features. Passing design intent to OPC has been identified as a solution for reducing mask complexity and cost in several recent works. The goal of design-aware OPC is to relax OPC tolerances of layout features to minimize mask cost, without sacrificing parametric yield. To convey optimal OPC tolerances for manufacturing, design optimization should drive OPC tolerance optimization using models of mask cost for devices and wires. Design optimization should be aware of the impact of OPC correction levels on mask cost and performance of the design. This work introduces mask cost characterization (MCC) that quantifies OPC complexity, measured in terms of fracture count of the mask, for different OPC tolerances. MCC with different OPC tolerances is a critical step in linking design and manufacturing. In this paper, we present a MCC methodology that provides models of fracture count of standard cells and wire patterns for use in design optimization. MCC cannot be performed by designers as they do not have access to foundry OPC recipes and RET tools. To build a fracture count model, we perform OPC and fracturing on a limited set of standard cells and wire configurations with all tolerance combinations. Separately, we identify the characteristics of the layout that impact fracture count. Based on the fracture count (FC) data

13. Sutherland models for complex reflection groups

International Nuclear Information System (INIS)

Crampe, N.; Young, C.A.S.

2008-01-01

There are known to be integrable Sutherland models associated to every real root system, or, which is almost equivalent, to every real reflection group. Real reflection groups are special cases of complex reflection groups. In this paper we associate certain integrable Sutherland models to the classical family of complex reflection groups. Internal degrees of freedom are introduced, defining dynamical spin chains, and the freezing limit taken to obtain static chains of Haldane-Shastry type. By considering the relation of these models to the usual BC_N case, we are led to systems with both real and complex reflection groups as symmetries. We demonstrate their integrability by means of new Dunkl operators, associated to wreath products of dihedral groups

14. Minimum-complexity helicopter simulation math model

Science.gov (United States)

Heffley, Robert K.; Mnich, Marc A.

1988-01-01

An example of a minimal complexity simulation helicopter math model is presented. Motivating factors are the computational delays, cost, and inflexibility of the very sophisticated math models now in common use. A helicopter model form is given which addresses each of these factors and provides better engineering understanding of the specific handling qualities features which are apparent to the simulator pilot. The technical approach begins with specification of features which are to be modeled, followed by a build up of individual vehicle components and definition of equations. Model matching and estimation procedures are given which enable the modeling of specific helicopters from basic data sources such as flight manuals. Checkout procedures are given which provide for total model validation. A number of possible model extensions and refinements are discussed. Math model computer programs are defined and listed.

15. BIM Automation: Advanced Modeling Generative Process for Complex Structures

Science.gov (United States)

Banfi, F.; Fai, S.; Brumana, R.

2017-08-01

The new paradigm of the complexity of modern and historic structures, which are characterised by complex forms, morphological and typological variables, is one of the greatest challenges for building information modelling (BIM). Generation of complex parametric models needs new scientific knowledge concerning new digital technologies. These elements are helpful to store a vast quantity of information during the life cycle of buildings (LCB). The latest developments of parametric applications do not provide advanced tools, resulting in time-consuming work for the generation of models. This paper presents a method capable of processing and creating complex parametric Building Information Models (BIM) with Non-Uniform Rational Basis Splines (NURBS) with multiple levels of details (Mixed and ReverseLoD) based on accurate 3D photogrammetric and laser scanning surveys. Complex 3D elements are converted into parametric BIM software and finite element applications (BIM to FEA) using specific exchange formats and new modelling tools. The proposed approach has been applied to different case studies: the BIM of a modern structure for the courtyard of West Block on Parliament Hill in Ottawa (Ontario) and the BIM of Masegra Castel in Sondrio (Italy), encouraging the dissemination and interaction of scientific results without losing information during the generative process.

16. Generalized Network Psychometrics : Combining Network and Latent Variable Models

NARCIS (Netherlands)

Epskamp, S.; Rhemtulla, M.; Borsboom, D.

2017-01-01

We introduce the network model as a formal psychometric model, conceptualizing the covariance between psychometric indicators as resulting from pairwise interactions between observable variables in a network structure. This contrasts with standard psychometric models, in which the covariance between

17. Adaptive Synchronization of Fractional Order Complex-Variable Dynamical Networks via Pinning Control

Science.gov (United States)

Ding, Da-Wei; Yan, Jie; Wang, Nian; Liang, Dong

2017-09-01

In this paper, the synchronization of fractional order complex-variable dynamical networks is studied using an adaptive pinning control strategy based on close center degree. Some effective criteria for global synchronization of fractional order complex-variable dynamical networks are derived based on the Lyapunov stability theory. From the theoretical analysis, one concludes that under appropriate conditions, the complex-variable dynamical networks can realize the global synchronization by using the proper adaptive pinning control method. Meanwhile, we succeed in solving the problem about how much coupling strength should be applied to ensure the synchronization of the fractional order complex networks. Therefore, compared with the existing results, the synchronization method in this paper is more general and convenient. This result extends the synchronization condition of the real-variable dynamical networks to the complex-valued field, which makes our research more practical. Finally, two simulation examples show that the derived theoretical results are valid and the proposed adaptive pinning method is effective. Supported by National Natural Science Foundation of China under Grant No. 61201227, National Natural Science Foundation of China Guangdong Joint Fund under Grant No. U1201255, the Natural Science Foundation of Anhui Province under Grant No. 1208085MF93, 211 Innovation Team of Anhui University under Grant Nos. KJTD007A and KJTD001B, and also supported by Chinese Scholarship Council

18. Complex Systems and Self-organization Modelling

CERN Document Server

2009-01-01

The concern of this book is the use of emergent computing and self-organization modelling within various applications of complex systems. The authors focus both on the innovative concepts and implementations used to model self-organization and on the relevant application domains in which they can be used efficiently. This book is the outcome of a workshop meeting within ESM 2006 (Eurosis), held in Toulouse, France in October 2006.

19. Geometric Modelling with α-Complexes

NARCIS (Netherlands)

Gerritsen, B.H.M.; Werff, K. van der; Veltkamp, R.C.

2001-01-01

The shape of real objects can be so complicated that only a sampling data point set can accurately represent them. Analytic descriptions are too complicated or impossible. Natural objects, for example, can be vague and rough with many holes. For this kind of modelling, α-complexes offer advantages

20. The Kuramoto model in complex networks

Science.gov (United States)

Rodrigues, Francisco A.; Peron, Thomas K. DM.; Ji, Peng; Kurths, Jürgen

2016-01-01

Synchronization of an ensemble of oscillators is an emergent phenomenon present in several complex systems, ranging from social and physical to biological and technological systems. The most successful approach to describe how coherent behavior emerges in these complex systems is given by the paradigmatic Kuramoto model. This model has been traditionally studied in complete graphs. However, besides being intrinsically dynamical, complex systems present very heterogeneous structure, which can be represented as complex networks. This report is dedicated to review main contributions in the field of synchronization in networks of Kuramoto oscillators. In particular, we provide an overview of the impact of network patterns on the local and global dynamics of coupled phase oscillators. We cover many relevant topics, which encompass a description of the most used analytical approaches and the analysis of several numerical results. Furthermore, we discuss recent developments on variations of the Kuramoto model in networks, including the presence of noise and inertia. The rich potential for applications is discussed for special fields in engineering, neuroscience, physics and Earth science. Finally, we conclude by discussing problems that remain open after the last decade of intensive research on the Kuramoto model and point out some promising directions for future research.
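
As a concrete companion to the review's subject, here is a minimal Kuramoto simulation with the standard order parameter r; the complete-graph coupling (the classical setting the review starts from), network size, coupling strength, and time step are illustrative toy choices:

```python
# Minimal Kuramoto model on a complete graph, integrated with explicit
# Euler steps. N, K, dt and the frequency spread are toy values.

import cmath, math, random

random.seed(1)
N = 20          # oscillators
K = 1.0         # coupling strength (well above the synchronization onset)
dt = 0.05       # Euler time step

omega = [random.gauss(0.0, 0.1) for _ in range(N)]              # natural freqs
theta = [random.uniform(0.0, 2.0 * math.pi) for _ in range(N)]  # initial phases

def order_parameter(phases):
    """r = |mean of exp(i*theta)|: 0 means incoherent, 1 fully locked."""
    return abs(sum(cmath.exp(1j * t) for t in phases)) / len(phases)

r_start = order_parameter(theta)
for _ in range(2000):   # integrate to t = 100
    dtheta = [omega[i] + (K / N) * sum(math.sin(tj - theta[i]) for tj in theta)
              for i in range(N)]
    theta = [t + dt * d for t, d in zip(theta, dtheta)]
r_end = order_parameter(theta)

print(round(r_start, 2), round(r_end, 2))  # r grows toward 1 above onset
```

Replacing the all-to-all sum with a sum over a network's neighbour lists gives the networked variants the review surveys, where topology (and possible twisted states) changes the picture considerably.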

1. A cognitive model for software architecture complexity

NARCIS (Netherlands)

Bouwers, E.; Lilienthal, C.; Visser, J.; Van Deursen, A.

2010-01-01

Evaluating the complexity of the architecture of a software system is a difficult task. Many aspects have to be considered to come to a balanced assessment. Several architecture evaluation methods have been proposed, but very few define a quality model to be used during the evaluation process. In

2. Predictor variable resolution governs modeled soil types

Science.gov (United States)

Soil mapping identifies different soil types by compressing a unique suite of spatial patterns and processes across multiple spatial scales. It can be quite difficult to quantify spatial patterns of soil properties with remotely sensed predictor variables. More specifically, matching the right scale...

3. Quantitative precipitation estimation in complex orography using quasi-vertical profiles of dual polarization radar variables

Science.gov (United States)

Montopoli, Mario; Roberto, Nicoletta; Adirosi, Elisa; Gorgucci, Eugenio; Baldini, Luca

2017-04-01

Weather radars are nowadays a unique tool to estimate quantitatively the rain precipitation near the surface. This is an important task for many applications: for example, to feed hydrological models, to mitigate the impact of severe storms at the ground using radar information in modern warning tools, and to aid the validation studies of satellite-based rain products. With respect to the latter application, several ground validation studies of the Global Precipitation Mission (GPM) products have recently highlighted the importance of accurate QPE from ground-based weather radars. To date, many studies have analyzed the performance of various QPE algorithms making use of actual and synthetic experiments, possibly trained by measurements of particle size distributions and electromagnetic models. Most of these studies support the use of dual polarization variables not only to ensure a good level of radar data quality but also as a direct input in the rain estimation equations. Among others, one of the most important limiting factors in radar QPE accuracy is the vertical variability of particle size distribution, which affects, at different levels, all the acquired radar variables as well as rain rates. This is particularly impactful in mountainous areas, where the altitude of the radar sampling is likely several hundred meters above the surface. In this work, we analyze the impact of the vertical profile variations of rain precipitation on several dual polarization radar QPE algorithms when they are tested in a complex orography scenario. So far, in weather radar studies, more emphasis has been given to the extrapolation strategies that make use of the signature of the vertical profiles in terms of radar co-polar reflectivity. This may limit the use of the radar vertical profiles when dual polarization QPE algorithms are considered because in that case all the radar variables used in the rain estimation process should be consistently extrapolated at the surface
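
The simplest QPE algorithm of the family this record evaluates is a power-law Z-R inversion. A sketch using the classic Marshall-Palmer coefficients (operational and dual-polarization estimators use different, band- and climate-dependent forms):

```python
# Illustrative power-law QPE: invert Z = a * R^b for rain rate R (mm/h)
# from reflectivity in dBZ. a = 200, b = 1.6 are the classic
# Marshall-Palmer values; coefficients in practice depend on climate,
# radar band and, for dual polarization, the estimator form.

def rain_rate_zr(z_dbz, a=200.0, b=1.6):
    """R = (Z/a)^(1/b), with Z converted from dBZ to linear mm^6/m^3."""
    z_lin = 10.0 ** (z_dbz / 10.0)
    return (z_lin / a) ** (1.0 / b)

r = rain_rate_zr(30.0)
print(round(r, 2))  # about 2.73 mm/h for a 30 dBZ echo
```

The vertical-profile problem the record highlights enters exactly here: if the 30 dBZ sample was taken hundreds of meters above a mountain slope, Z (and any dual-polarization variable) must first be extrapolated consistently to the surface before inversion.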

4. Variable Fidelity Aeroelastic Toolkit - Structural Model, Phase I

Data.gov (United States)

National Aeronautics and Space Administration — The proposed innovation is a methodology to incorporate variable fidelity structural models into steady and unsteady aeroelastic and aeroservoelastic analyses in...

5. Multi-wheat-model ensemble responses to interannual climatic variability

DEFF Research Database (Denmark)

Ruane, A C; Hudson, N I; Asseng, S

2016-01-01

We compare 27 wheat models' yield responses to interannual climate variability, analyzed at locations in Argentina, Australia, India, and The Netherlands as part of the Agricultural Model Intercomparison and Improvement Project (AgMIP) Wheat Pilot. Each model simulated 1981–2010 grain yield, and we evaluate results against the interannual variability of growing season temperature, precipitation, and solar radiation. The amount of information used for calibration has only a minor effect on most models' climate response, and even small multi-model ensembles prove beneficial. Wheat model clusters reveal common characteristics of yield response to climate; however, models rarely share the same cluster at all four sites, indicating substantial independence. Only a weak relationship was found between the models' sensitivities to interannual temperature variability and their response to long-term warming, suggesting that additional processes differentiate climate change impacts from observed climate variability analogs and motivating continuing analysis and model development efforts.

6. ABOUT PSYCHOLOGICAL VARIABLES IN APPLICATION SCORING MODELS

Directory of Open Access Journals (Sweden)

Pablo Rogers

2015-01-01

The purpose of this study is to investigate the contribution of psychological variables and scales suggested by Economic Psychology to predicting individuals' default. A sample of 555 individuals completed a self-administered questionnaire composed of psychological variables and scales. Using logistic regression, the following psychological and behavioral characteristics were found to be associated with the group of individuals in default: (a) negative dimensions related to money (suffering, inequality, and conflict); (b) high scores on the self-efficacy scale, probably indicating a greater degree of optimism and over-confidence; (c) buyers classified as compulsive; (d) individuals who consider it necessary to give gifts to children and friends on special dates, even though many people consider this a luxury; and (e) problems of self-control, identified in individuals who drink an average of more than four glasses of alcoholic beverages a day.

7. Complex scaling in the cluster model

International Nuclear Information System (INIS)

Kruppa, A.T.; Lovas, R.G.; Gyarmati, B.

1987-01-01

To find the positions and widths of resonances, a complex scaling of the intercluster relative coordinate is introduced into the resonating-group model. In the generator-coordinate technique used to solve the resonating-group equation, the complex scaling requires only minor changes in the formulae and code. Finding the resonances does not require any preliminary guess or explicit reference to any asymptotic prescription. The procedure is applied to the resonances in the relative motion of two ground-state α clusters in ⁸Be, but is appropriate for any system consisting of two clusters. (author) 23 refs.; 5 figs

8. Modeling of anaerobic digestion of complex substrates

International Nuclear Information System (INIS)

Keshtkar, A. R.; Abolhamd, G.; Meyssami, B.; Ghaforian, H.

2003-01-01

A structured mathematical model of anaerobic conversion of complex organic materials in non-ideally mixed cyclic-batch reactors for biogas production has been developed. The model is based on multiple-reaction stoichiometry (enzymatic hydrolysis, acidogenesis, acetogenesis, and methanogenesis), microbial growth kinetics, conventional material balances in the liquid and gas phases for a cyclic-batch reactor, liquid-gas interactions, liquid-phase equilibrium reactions, and a simple mixing model that divides the reactor volume into two separate sections: the flow-through and the retention regions. The dynamic model describes the effects of reactant distribution resulting from the mixing conditions, the time interval of feeding, the hydraulic retention time, and the mixing parameters on process performance. The model is applied to the simulation of anaerobic digestion of cattle manure under different operating conditions. The model is compared with experimental data, and good correlations are obtained

9. Enumeration of Combinatorial Classes of Single Variable Complex Polynomial Vector Fields

DEFF Research Database (Denmark)

Dias, Kealey

A vector field in the space of degree d monic, centered single variable complex polynomial vector fields has a combinatorial structure which can be fully described by a combinatorial data set consisting of an equivalence relation and a marked subset on the integers mod 2d-2, satisfying certain...

10. Multi-Wheat-Model Ensemble Responses to Interannual Climate Variability

Science.gov (United States)

Ruane, Alex C.; Hudson, Nicholas I.; Asseng, Senthold; Camarrano, Davide; Ewert, Frank; Martre, Pierre; Boote, Kenneth J.; Thorburn, Peter J.; Aggarwal, Pramod K.; Angulo, Carlos

2016-01-01

We compare 27 wheat models' yield responses to interannual climate variability, analyzed at locations in Argentina, Australia, India, and The Netherlands as part of the Agricultural Model Intercomparison and Improvement Project (AgMIP) Wheat Pilot. Each model simulated 1981–2010 grain yield, and we evaluate results against the interannual variability of growing season temperature, precipitation, and solar radiation. The amount of information used for calibration has only a minor effect on most models' climate response, and even small multi-model ensembles prove beneficial. Wheat model clusters reveal common characteristics of yield response to climate; however, models rarely share the same cluster at all four sites, indicating substantial independence. Only a weak relationship (R² = 0.24) was found between the models' sensitivities to interannual temperature variability and their response to long-term warming, suggesting that additional processes differentiate climate change impacts from observed climate variability analogs and motivating continuing analysis and model development efforts.

11. Variable selection in Logistic regression model with genetic algorithm.

Science.gov (United States)

Zhang, Zhongheng; Trevino, Victor; Hoseini, Sayed Shahabuddin; Belciug, Smaranda; Boopathi, Arumugam Manivanna; Zhang, Ping; Gorunescu, Florin; Subha, Velappan; Dai, Songshi

2018-02-01

Variable or feature selection is one of the most important steps in model specification. Especially in medical decision-making, the direct use of a medical database without a previous analysis and preprocessing step is often counterproductive. Variable selection is the method of choosing the most relevant attributes from the database in order to build robust learning models and, thus, to improve the performance of the models used in the decision process. In biomedical research, the purpose of variable selection is to select clinically important and statistically significant variables, while excluding unrelated or noise variables. A variety of methods exist for variable selection, but none of them is without limitations. For example, the widely used stepwise approach adds the best variable in each cycle and generally produces an acceptable set of variables; nevertheless, it is limited by the fact that it is commonly trapped in local optima. The best-subset approach can systematically search the entire covariate pattern space, but the solution pool can be extremely large with tens to hundreds of variables, as is the case in today's clinical data. Genetic algorithms (GA) are heuristic optimization approaches and can be used for variable selection in multivariable regression models. This tutorial paper aims to provide a step-by-step approach to the use of GA in variable selection. The R code provided in the text can be extended and adapted to other data analysis needs.
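
The paper's tutorial code is in R; as a rough sketch of the same idea, the example below runs a tiny genetic algorithm over binary feature masks in Python. Everything here, from the synthetic data to the nearest-centroid fitness function and the GA settings, is an illustrative stand-in, not the paper's implementation:

```python
import random

random.seed(42)

# Synthetic data: 8 features, only features 0 and 1 carry class signal.
def make_data(n=60, p=8):
    X, y = [], []
    for i in range(n):
        label = i % 2
        row = [random.gauss(3.0 * label, 1.0) if j < 2 else random.gauss(0.0, 1.0)
               for j in range(p)]
        X.append(row)
        y.append(label)
    return X, y

def accuracy(X, y, mask):
    feats = [j for j, b in enumerate(mask) if b]
    if not feats:
        return 0.0
    # In-sample accuracy of a nearest-centroid classifier on the selected features.
    cents = {}
    for c in (0, 1):
        rows = [X[i] for i in range(len(y)) if y[i] == c]
        cents[c] = [sum(r[j] for r in rows) / len(rows) for j in feats]
    correct = 0
    for i, row in enumerate(X):
        d = {c: sum((row[feats[k]] - cents[c][k]) ** 2 for k in range(len(feats)))
             for c in (0, 1)}
        correct += (min(d, key=d.get) == y[i])
    return correct / len(y)

def ga_select(X, y, p=8, pop_size=20, gens=30, pmut=0.1):
    # Chromosome = bitmask over features; fitness = classifier accuracy.
    pop = [[random.randint(0, 1) for _ in range(p)] for _ in range(pop_size)]
    for _ in range(gens):
        scored = sorted(pop, key=lambda m: accuracy(X, y, m), reverse=True)
        pop = scored[:2]                                  # elitism
        while len(pop) < pop_size:
            a, b = random.sample(scored[:10], 2)          # truncation selection
            child = [a[j] if random.random() < 0.5 else b[j] for j in range(p)]
            child = [1 - g if random.random() < pmut else g for g in child]
            pop.append(child)
    return max(pop, key=lambda m: accuracy(X, y, m))

X, y = make_data()
best = ga_select(X, y)
print("selected features:", [j for j, b in enumerate(best) if b])
```

In practice the fitness function would be a cross-validated logistic-regression score rather than this toy classifier; the GA machinery is unchanged.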

12. An Improved Estimation Using Polya-Gamma Augmentation for Bayesian Structural Equation Models with Dichotomous Variables

Science.gov (United States)

Kim, Seohyun; Lu, Zhenqiu; Cohen, Allan S.

2018-01-01

Bayesian algorithms have been used successfully in the social and behavioral sciences to analyze dichotomous data particularly with complex structural equation models. In this study, we investigate the use of the Polya-Gamma data augmentation method with Gibbs sampling to improve estimation of structural equation models with dichotomous variables.…

13. Physical modelling of flow and dispersion over complex terrain

Science.gov (United States)

Cermak, J. E.

1984-09-01

Atmospheric motion and dispersion over topography characterized by irregular (or regular) hill-valley or mountain-valley distributions are strongly dependent upon three general sets of variables. These are variables that describe topographic geometry, synoptic-scale winds and surface-air temperature distributions. In addition, pollutant concentration distributions also depend upon location and physical characteristics of the pollutant source. Overall fluid-flow complexity and variability from site to site have stimulated the development and use of physical modelling for determination of flow and dispersion in many wind-engineering applications. Models with length scales as small as 1:12,000 have been placed in boundary-layer wind tunnels to study flows in which forced convection by synoptic winds is of primary significance. Flows driven primarily by forces arising from temperature differences (gravitational or free convection) have been investigated by small-scale physical models placed in an isolated space (gravitational convection chamber). Similarity criteria and facilities for both forced and gravitational-convection flow studies are discussed. Forced-convection modelling is illustrated by application to dispersion of air pollutants by unstable flow near a paper mill in the state of Maryland and by stable flow over Point Arguello, California. Gravitational-convection modelling is demonstrated by a study of drainage flow and pollutant transport from a proposed mining operation in the Rocky Mountains of Colorado. Other studies in which field data are available for comparison with model data are reviewed.

14. A Practical Philosophy of Complex Climate Modelling

Science.gov (United States)

Schmidt, Gavin A.; Sherwood, Steven

2014-01-01

We give an overview of the practice of developing and using complex climate models, as seen from experiences in a major climate modelling center and through participation in the Coupled Model Intercomparison Project (CMIP). We discuss the construction and calibration of models; their evaluation, especially through use of out-of-sample tests; and their exploitation in multi-model ensembles to identify biases and make predictions. We stress that adequacy or utility of climate models is best assessed via their skill against more naive predictions. The framework we use for making inferences about reality using simulations is naturally Bayesian (in an informal sense), and has many points of contact with more familiar examples of scientific epistemology. While the use of complex simulations in science is a development that changes much in how science is done in practice, we argue that the concepts being applied fit very much into traditional practices of the scientific method, albeit those more often associated with laboratory work.

15. Intrinsic Uncertainties in Modeling Complex Systems.

Energy Technology Data Exchange (ETDEWEB)

Cooper, Curtis S; Bramson, Aaron L.; Ames, Arlo L.

2014-09-01

Models are built to understand and predict the behaviors of both natural and artificial systems. Because it is always necessary to abstract away aspects of any non-trivial system being modeled, we know models can potentially leave out important, even critical elements. This reality of the modeling enterprise forces us to consider the prospective impacts of those effects completely left out of a model - either intentionally or unconsidered. Insensitivity to new structure is an indication of diminishing returns. In this work, we represent a hypothetical unknown effect on a validated model as a finite perturbation whose amplitude is constrained within a control region. We find robustly that without further constraints, no meaningful bounds can be placed on the amplitude of a perturbation outside of the control region. Thus, forecasting into unsampled regions is a very risky proposition. We also present inherent difficulties with proper time discretization of models and representing inherently discrete quantities. We point out potentially worrisome uncertainties, arising from mathematical formulation alone, which modelers can inadvertently introduce into models of complex systems. Acknowledgements: This work has been funded under early-career LDRD project #170979, entitled "Quantifying Confidence in Complex Systems Models Having Structural Uncertainties", which ran from 04/2013 to 09/2014. We wish to express our gratitude to the many researchers at Sandia who contributed ideas to this work, as well as feedback on the manuscript. In particular, we would like to mention George Barr, Alexander Outkin, Walt Beyeler, Eric Vugrin, and Laura Swiler for providing invaluable advice and guidance through the course of the project. We would also like to thank Steven Kleban, Amanda Gonzales, Trevor Manzanares, and Sarah Burwell for their assistance in managing project tasks and resources.

16. Fixed transaction costs and modelling limited dependent variables

NARCIS (Netherlands)

Hempenius, A.L.

1994-01-01

As an alternative to the Tobit model for vectors of limited dependent variables, I suggest a model that follows from explicitly including fixed costs, where appropriate, in the utility function of the decision-maker.

17. Different Epidemic Models on Complex Networks

International Nuclear Information System (INIS)

Zhang Haifeng; Small, Michael; Fu Xinchu

2009-01-01

Models for disease spreading are not limited to SIS or SIR. For instance, for the spreading of AIDS/HIV, susceptible individuals can be classified into different cases according to their immunity and, similarly, infected individuals can be sorted into different classes according to their infectivity. Moreover, some diseases may develop through several stages. Many authors have shown that the relations between individuals can be viewed as a complex network. So in this paper, in order to better explain the dynamical behavior of epidemics, we consider different epidemic models on complex networks and obtain the epidemic threshold for each case. Finally, we present numerical simulations for each case to verify our results.
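
The epidemic threshold referred to above can be illustrated with the standard degree-based mean-field result for SIS dynamics on uncorrelated networks, λ_c = ⟨k⟩/⟨k²⟩ (a textbook approximation, not necessarily the exact thresholds derived in this paper). The degree sequences below are made-up examples:

```python
# Degree-based mean-field estimate of the SIS epidemic threshold on an
# uncorrelated network: lambda_c = <k> / <k^2>.
def sis_threshold(degrees):
    n = len(degrees)
    k1 = sum(degrees) / n
    k2 = sum(d * d for d in degrees) / n
    return k1 / k2

homogeneous = [4] * 1000                 # every node has degree 4
heterogeneous = [2] * 990 + [200] * 10   # a few highly connected hubs

print(sis_threshold(homogeneous))        # 0.25
print(sis_threshold(heterogeneous))      # much smaller: hubs lower the threshold
```

The comparison shows why heterogeneous (e.g., scale-free) networks are so vulnerable: a handful of hubs drives ⟨k²⟩ up and the threshold toward zero.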

18. FRAM Modelling Complex Socio-technical Systems

CERN Document Server

Hollnagel, Erik

2012-01-01

There has not yet been a comprehensive method that goes behind 'human error' and beyond the failure concept, and various complicated accidents have accentuated the need for it. The Functional Resonance Analysis Method (FRAM) fulfils that need. This book presents a detailed and tested method that can be used to model how complex and dynamic socio-technical systems work, and understand both why things sometimes go wrong but also why they normally succeed.

19. Complex Constructivism: A Theoretical Model of Complexity and Cognition

Science.gov (United States)

Doolittle, Peter E.

2014-01-01

Education has long been driven by its metaphors for teaching and learning. These metaphors have influenced both educational research and educational practice. Complexity and constructivism are two theories that provide functional and robust metaphors. Complexity provides a metaphor for the structure of myriad phenomena, while constructivism…

20. Analytical Model for LLC Resonant Converter With Variable Duty-Cycle Control

DEFF Research Database (Denmark)

Shen, Yanfeng; Wang, Huai; Blaabjerg, Frede

2016-01-01

In LLC resonant converters, variable duty-cycle control is usually combined with variable frequency control to widen the gain range, improve the light-load efficiency, or suppress the inrush current during start-up. However, a proper analytical model for the variable duty-cycle controlled LLC converter is still not available due to the complexity of operation modes and the nonlinearity of steady-state equations. This paper develops an analytical model for the LLC converter with variable duty-cycle control. All possible operation modes and critical operation characteristics are identified and discussed. The proposed model enables a better understanding of the operation characteristics and fast parameter design of the LLC converter, which otherwise cannot be achieved by the existing simulation-based methods and numerical models. The results obtained from the proposed model...

1. Defining a Family of Cognitive Diagnosis Models Using Log-Linear Models with Latent Variables

Science.gov (United States)

Henson, Robert A.; Templin, Jonathan L.; Willse, John T.

2009-01-01

This paper uses log-linear models with latent variables (Hagenaars, in "Loglinear Models with Latent Variables," 1993) to define a family of cognitive diagnosis models. In doing so, the relationship between many common models is explicitly defined and discussed. In addition, because the log-linear model with latent variables is a general model for…

2. Adaptive Surface Modeling of Soil Properties in Complex Landforms

Directory of Open Access Journals (Sweden)

Wei Liu

2017-06-01

Spatial discontinuity often causes poor accuracy when a single model is used for the surface modeling of soil properties in complex geomorphic areas. Here we present a method for adaptive surface modeling with combined secondary variables to improve prediction accuracy during the interpolation of soil properties (ASM-SP). Using various secondary variables and multiple base interpolation models, ASM-SP was used to interpolate soil K+ in a typical complex geomorphic area (Qinghai Lake Basin, China). Five methods, including inverse distance weighting (IDW), ordinary kriging (OK), and OK combined with different secondary variables (OK-Landuse, OK-Geology, and OK-Soil), were used to validate the proposed method. The mean error (ME), mean absolute error (MAE), root mean square error (RMSE), mean relative error (MRE), and accuracy (AC) were used as evaluation indicators. Results showed that: (1) The OK interpolation result is spatially smooth with a weak bull's-eye effect, while the IDW result shows a relatively stronger bull's-eye effect; both have obvious deficiencies in depicting the spatial variability of soil K+. (2) The methods incorporating combinations of different secondary variables (ASM-SP, OK-Landuse, OK-Geology, and OK-Soil) were associated with lower estimation bias. Compared with IDW, OK, OK-Landuse, OK-Geology, and OK-Soil, the accuracy of ASM-SP increased by 13.63%, 10.85%, 9.98%, 8.32%, and 7.66%, respectively. Furthermore, ASM-SP was more stable, with lower MEs, MAEs, RMSEs, and MREs. (3) ASM-SP captures more detail at abrupt boundaries, rendering the result consistent with the true secondary variables. In conclusion, ASM-SP can not only account for the nonlinear relationship between secondary variables and soil properties, but can also adaptively combine the advantages of multiple models, which contributes to making the spatial interpolation of soil K+ more reasonable.
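
One of the baseline interpolators compared in the study, inverse distance weighting, estimates a point's value as the distance-weighted mean of the observed samples. A minimal sketch with made-up coordinates and values (not the study's data):

```python
# Inverse distance weighting (IDW): weight each sample by 1/d^power.
def idw(x, y, samples, power=2.0):
    num = den = 0.0
    for (sx, sy, sv) in samples:
        d2 = (x - sx) ** 2 + (y - sy) ** 2
        if d2 == 0:
            return sv                  # exact hit on a sample point
        w = d2 ** (-power / 2)
        num += w * sv
        den += w
    return num / den

obs = [(0.0, 0.0, 10.0), (1.0, 0.0, 20.0), (0.0, 1.0, 30.0)]
print(idw(0.5, 0.5, obs))   # equidistant from all three samples -> 20.0
```

The bull's-eye artifacts the abstract mentions come from exactly this weighting: isolated samples dominate their neighborhoods, producing concentric contours around data points.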

3. Complex networks under dynamic repair model

Science.gov (United States)

Chaoqi, Fu; Ying, Wang; Kun, Zhao; Yangjun, Gao

2018-01-01

Invulnerability is not the only factor of importance when considering complex networks' security. It is also critical to have an effective and reasonable repair strategy. Existing research on network repair is confined to the static model. The dynamic model makes better use of the redundant capacity of repaired nodes and repairs the damaged network more efficiently than the static model; however, the dynamic repair model is complex and polytropic. In this paper, we construct a dynamic repair model and systematically describe the energy-transfer relationships between nodes in the repair process of the failure network. Nodes are divided into three types, corresponding to three structures. We find that the strong coupling structure is responsible for secondary failure of the repaired nodes and propose an algorithm that can select the most suitable targets (nodes or links) to repair the failure network with minimal cost. Two types of repair strategies are identified, with different effects under the two energy-transfer rules. The research results enable a more flexible approach to network repair.

4. Hydrogeological controls of variable microbial water quality in a complex subtropical karst system in Northern Vietnam

Science.gov (United States)

Ender, Anna; Goeppert, Nadine; Goldscheider, Nico

2018-05-01

Karst aquifers are particularly vulnerable to bacterial contamination. Especially in developing countries, poor microbial water quality poses a threat to human health. In order to develop effective groundwater protection strategies, a profound understanding of the hydrogeological setting is crucial. The goal of this study was to elucidate the relationships between high spatio-temporal variability in microbial contamination and the hydrogeological conditions. Based on extensive field studies, including mapping, tracer tests and hydrochemical analyses, a conceptual hydrogeological model was developed for a remote and geologically complex karst area in Northern Vietnam called Dong Van. Four different physicochemical water types were identified; the most important ones correspond to the karstified Bac Son and the fractured Na Quan aquifer. Alongside comprehensive investigation of the local hydrogeology, water quality was evaluated by analysis for three types of fecal indicator bacteria (FIB): Escherichia coli, enterococci and thermotolerant coliforms. The major findings are: (1) Springs from the Bac Son formation displayed the highest microbial contamination, while (2) springs that are involved in a polje series with connections to sinking streams were distinctly more contaminated than springs with a catchment area characterized by a more diffuse infiltration. (3) FIB concentrations are dependent on the season, with higher values under wet season conditions. Furthermore, (4) the type of spring capture also affects the water quality. Nevertheless, all studied springs were faecally impacted, along with several shallow wells within the confined karst aquifer. Based on these findings, effective protection strategies can be developed to improve groundwater quality.

5. From complex to simple: interdisciplinary stochastic models

International Nuclear Information System (INIS)

Mazilu, D A; Zamora, G; Mazilu, I

2012-01-01

We present two simple, one-dimensional, stochastic models that lead to a qualitative understanding of very complex systems from biology, nanoscience and social sciences. The first model explains the complicated dynamics of microtubules, stochastic cellular highways. Using the theory of random walks in one dimension, we find analytical expressions for certain physical quantities, such as the time dependence of the length of the microtubules, and diffusion coefficients. The second one is a stochastic adsorption model with applications in surface deposition, epidemics and voter systems. We introduce the ‘empty interval method’ and show sample calculations for the time-dependent particle density. These models can serve as an introduction to the field of non-equilibrium statistical physics, and can also be used as a pedagogical tool to exemplify standard statistical physics concepts, such as random walks or the kinetic approach of the master equation. (paper)
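
The random-walk treatment described above rests on diffusive scaling: for a symmetric one-dimensional walk, the mean squared displacement grows linearly with time, ⟨x²(t)⟩ ≈ t. A minimal simulation of that baseline behavior (illustrative only, not the authors' microtubule or adsorption models):

```python
import random

random.seed(0)

def msd(steps, walkers=2000):
    # Mean squared displacement of symmetric 1-D random walkers after `steps` steps.
    total = 0
    for _ in range(walkers):
        x = 0
        for _ in range(steps):
            x += random.choice((-1, 1))
        total += x * x
    return total / walkers

for t in (10, 100, 400):
    print(t, msd(t))   # close to t, the signature of diffusion
```

Quantities such as the diffusion coefficient follow from the slope of this curve; the papers' analytical expressions are the closed-form counterparts of such simulations.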

6. Resolving structural variability in network models and the brain.

Directory of Open Access Journals (Sweden)

Florian Klimm

2014-03-01

Full Text Available Large-scale white matter pathways crisscrossing the cortex create a complex pattern of connectivity that underlies human cognitive function. Generative mechanisms for this architecture have been difficult to identify in part because little is known in general about mechanistic drivers of structured networks. Here we contrast network properties derived from diffusion spectrum imaging data of the human brain with 13 synthetic network models chosen to probe the roles of physical network embedding and temporal network growth. We characterize both the empirical and synthetic networks using familiar graph metrics, but presented here in a more complete statistical form, as scatter plots and distributions, to reveal the full range of variability of each measure across scales in the network. We focus specifically on the degree distribution, degree assortativity, hierarchy, topological Rentian scaling, and topological fractal scaling--in addition to several summary statistics, including the mean clustering coefficient, the shortest path-length, and the network diameter. The models are investigated in a progressive, branching sequence, aimed at capturing different elements thought to be important in the brain, and range from simple random and regular networks, to models that incorporate specific growth rules and constraints. We find that synthetic models that constrain the network nodes to be physically embedded in anatomical brain regions tend to produce distributions that are most similar to the corresponding measurements for the brain. We also find that network models hardcoded to display one network property (e.g., assortativity do not in general simultaneously display a second (e.g., hierarchy. This relative independence of network properties suggests that multiple neurobiological mechanisms might be at play in the development of human brain network architecture. Together, the network models that we develop and employ provide a potentially useful

7. Variable Selection for Regression Models of Percentile Flows

Science.gov (United States)

2017-12-01

Percentile flows describe the flow magnitude equaled or exceeded for a given percent of time, and are widely used in water resource management. However, these statistics are normally unavailable since most basins are ungauged. Percentile flows of ungauged basins are often predicted using regression models based on readily observable basin characteristics, such as mean elevation. The number of these independent variables is too large to evaluate all possible models. A subset of models is typically evaluated using automatic procedures, like stepwise regression. This ignores a large variety of methods from the field of feature (variable) selection and physical understanding of percentile flows. A study of 918 basins in the United States was conducted to compare an automatic regression procedure to the following variable selection methods: (1) principal component analysis, (2) correlation analysis, (3) random forests, (4) genetic programming, (5) Bayesian networks, and (6) physical understanding. The automatic regression procedure only performed better than principal component analysis. Poor performance of the regression procedure was due to a commonly used filter for multicollinearity, which rejected the strongest models because they had cross-correlated independent variables. Multicollinearity did not decrease model performance in validation because of a representative set of calibration basins. Variable selection methods based strictly on predictive power (numbers 2-5 from above) performed similarly, likely indicating a limit to the predictive power of the variables. Similar performance was also reached using variables selected based on physical understanding, a finding that substantiates recent calls to emphasize physical understanding in modeling for predictions in ungauged basins. The strongest variables highlighted the importance of geology and land cover, whereas widely used topographic variables were the weakest predictors. Variables suffered from a high

8. Variability aware compact model characterization for statistical circuit design optimization

Science.gov (United States)

Qiao, Ying; Qian, Kun; Spanos, Costas J.

2012-03-01

Variability modeling at the compact transistor model level can enable statistically optimized designs in view of limitations imposed by the fabrication technology. In this work we propose an efficient variability-aware compact model characterization methodology based on the linear propagation of variance. Hierarchical spatial variability patterns of selected compact model parameters are directly calculated from transistor array test structures. This methodology has been implemented and tested using transistor I-V measurements and the EKV-EPFL compact model. Calculation results compare well to full-wafer direct model parameter extractions. Further studies are done on the proper selection of both compact model parameters and electrical measurement metrics used in the method.
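
The linear propagation of variance the methodology relies on is the first-order approximation Var[y] ≈ Σᵢ (∂f/∂pᵢ)² Var[pᵢ] for independent parameters. A generic sketch with finite-difference sensitivities, applied to a toy square-law current equation rather than the EKV-EPFL model (all parameter values are hypothetical):

```python
# First-order (linear) propagation of variance for y = f(p1, ..., pn),
# assuming independent parameters: Var[y] ~ sum_i (df/dp_i)^2 * Var[p_i].
def propagate_variance(f, params, variances, h=1e-6):
    base = f(*params)
    var = 0.0
    for i, (p, v) in enumerate(zip(params, variances)):
        bumped = list(params)
        bumped[i] = p + h
        deriv = (f(*bumped) - base) / h   # finite-difference sensitivity
        var += deriv ** 2 * v
    return var

# Toy square-law MOSFET saturation current, a stand-in for a compact model.
def ids(k, vt, vgs=1.0):
    return k * (vgs - vt) ** 2

var = propagate_variance(ids, [2e-4, 0.4], [1e-10, 1e-4])
print(var)   # ~1.87e-11 for these toy numbers
```

The same loop, applied per compact-model parameter with variances measured from transistor arrays, is what makes the linear method cheap compared to full Monte Carlo re-simulation.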

9. Variable amplitude fatigue, modelling and testing

International Nuclear Information System (INIS)

Svensson, Thomas.

1993-01-01

Problems related to metal fatigue modelling and testing are treated in four different papers. In the first paper, different views of the subject are summarised in a literature survey. In the second paper, a new model for fatigue life is investigated; experimental results are established which are promising for further development of the model. In the third paper, a method is presented that generates a stochastic process suitable for fatigue testing, designed to resemble certain fatigue-related features of service-life processes. In the fourth paper, fatigue problems in transport vibrations are treated

10. Assessing the accuracy and stability of variable selection methods for random forest modeling in ecology.

Science.gov (United States)

Fox, Eric W; Hill, Ryan A; Leibowitz, Scott G; Olsen, Anthony R; Thornbrugh, Darren J; Weber, Marc H

2017-07-01

Random forest (RF) modeling has emerged as an important statistical learning method in ecology due to its exceptional predictive performance. However, for large and complex ecological data sets, there is limited guidance on variable selection methods for RF modeling. Typically, either a preselected set of predictor variables are used or stepwise procedures are employed which iteratively remove variables according to their importance measures. This paper investigates the application of variable selection methods to RF models for predicting probable biological stream condition. Our motivating data set consists of the good/poor condition of n = 1365 stream survey sites from the 2008/2009 National Rivers and Stream Assessment, and a large set (p = 212) of landscape features from the StreamCat data set as potential predictors. We compare two types of RF models: a full variable set model with all 212 predictors and a reduced variable set model selected using a backward elimination approach. We assess model accuracy using RF's internal out-of-bag estimate, and a cross-validation procedure with validation folds external to the variable selection process. We also assess the stability of the spatial predictions generated by the RF models to changes in the number of predictors and argue that model selection needs to consider both accuracy and stability. The results suggest that RF modeling is robust to the inclusion of many variables of moderate to low importance. We found no substantial improvement in cross-validated accuracy as a result of variable reduction. Moreover, the backward elimination procedure tended to select too few variables and exhibited numerous issues such as upwardly biased out-of-bag accuracy estimates and instabilities in the spatial predictions. We use simulations to further support and generalize results from the analysis of real data. A main purpose of this work is to elucidate issues of model selection bias and instability to ecologists interested in
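
The backward-elimination loop evaluated above can be written generically around any model-accuracy callback. The sketch below uses a synthetic scorer in place of a cross-validated random-forest score; the variable names and penalty values are purely illustrative:

```python
# Generic backward elimination: repeatedly drop the variable whose removal
# least degrades (or most improves) the accuracy callback; stop when any
# further removal hurts.
def backward_eliminate(all_vars, score, min_vars=1):
    current = list(all_vars)
    history = [(list(current), score(current))]
    while len(current) > min_vars:
        candidates = [(score([v for v in current if v != d]), d) for d in current]
        best_score, drop = max(candidates)
        if best_score < history[-1][1]:
            break                       # removal hurts: stop
        current.remove(drop)
        history.append((list(current), best_score))
    return current, history

# Toy scorer: variables 'a' and 'b' are informative, everything else is noise
# that slightly penalizes the fit (a stand-in for an out-of-bag RF estimate).
def toy_score(subset):
    gain = 0.4 * ('a' in subset) + 0.4 * ('b' in subset)
    noise_penalty = 0.01 * sum(1 for v in subset if v not in ('a', 'b'))
    return gain - noise_penalty

selected, hist = backward_eliminate(['a', 'b', 'c', 'd', 'e'], toy_score)
print(selected)   # ['a', 'b']
```

The paper's caution applies directly to this loop: scoring candidate subsets with the same data used to drive the elimination (as the toy does) biases the accuracy estimate upward, which is why validation folds external to the selection process are recommended.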

11. modelling relationship between rainfall variability and yields

African Journals Online (AJOL)

, S. and ... factors to rice yield. Adebayo and Adebayo (1997) developed double log multiple regression model to predict rice yield in Adamawa State, Nigeria. The general form of .... the second are the crop yield/values for millet and sorghum ...

12. A SIMULATION MODEL OF THE GAS COMPLEX

Directory of Open Access Journals (Sweden)

Sokolova G. E.

2016-06-01

Full Text Available The article considers the dynamics of gas production in Russia, the structure of sales in the different market segments, and the comparative dynamics of selling prices in these segments. It addresses the problem of planning a gas complex using a simulation model that makes it possible to estimate the efficiency of the project and determine the stability region of the obtained solutions. The presented model takes into account scheduled repayment of the loan, making it possible to determine, from the first year of simulation onward, whether the loan can be repaid. The model object is a group of gas fields, for which a minimum flow rate is determined above which the project is cost-effective. In determining the minimum flow rate, the discount rate is taken as the weighted average cost of debt and equity, including risk premiums. It also serves as the lower barrier for the internal rate of return, below which the project is rejected as ineffective. Analysis of the dynamics, together with expert evaluation methods, makes it possible to determine the intervals of variation of the simulated parameters, such as the price of gas and the time at which the gas complex reaches its projected capacity. Monte Carlo simulation of these parameters yields, for each random realization of the model, the optimal minimum well yield, and also makes it possible to determine the stability region of the solution.
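Once the discount rate is fixed, the minimum cost-effective flow rate described in this record has a simple closed form: the break-even yield is the production level whose discounted revenues just cover capital and operating costs. The sketch below, with entirely hypothetical numbers (capital cost, operating cost, price distribution, horizon; none are from the article), draws gas prices at random in the Monte Carlo spirit of the model and computes the break-even minimum yield for each realization.

```python
import random
import statistics

def annuity_factor(rate, years):
    """Present value of 1 unit received at the end of each year for `years` years."""
    return sum(1.0 / (1.0 + rate) ** t for t in range(1, years + 1))

def breakeven_yield(price, capex, opex, rate, years):
    """Minimum annual production q making NPV = 0:
    NPV = -capex + A * (q * price - opex)  =>  q = (capex / A + opex) / price."""
    a = annuity_factor(rate, years)
    return (capex / a + opex) / price

# Hypothetical project parameters (illustrative only).
CAPEX, OPEX, RATE, YEARS = 500.0, 20.0, 0.12, 15

random.seed(42)
# Price varies between realizations, e.g. uniformly on [80, 120] per unit.
yields = [breakeven_yield(random.uniform(80.0, 120.0), CAPEX, OPEX, RATE, YEARS)
          for _ in range(10_000)]
print(f"mean break-even yield: {statistics.mean(yields):.3f}")
print(f"95th percentile:       {sorted(yields)[int(0.95 * len(yields))]:.3f}")
```

The spread of break-even yields across realizations delineates a stability region in the sense of the article: fields whose flow rate exceeds even the high quantiles remain cost-effective under price uncertainty.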

13. Structured analysis and modeling of complex systems

Science.gov (United States)

Strome, David R.; Dalrymple, Mathieu A.

1992-01-01

The Aircrew Evaluation Sustained Operations Performance (AESOP) facility at Brooks AFB, Texas, combines the realism of an operational environment with the control of a research laboratory. In recent studies we collected extensive data from the Airborne Warning and Control Systems (AWACS) Weapons Directors subjected to high and low workload Defensive Counter Air Scenarios. A critical and complex task in this environment involves committing a friendly fighter against a hostile fighter. Structured Analysis and Design techniques and computer modeling systems were applied to this task as tools for analyzing subject performance and workload. This technology is being transferred to the Man-Systems Division of NASA Johnson Space Center for application to complex mission related tasks, such as manipulating the Shuttle grappler arm.

14. Glass Durability Modeling, Activated Complex Theory (ACT)

International Nuclear Information System (INIS)

CAROL, JANTZEN

2005-01-01

The most important requirement for high-level waste glass acceptance for disposal in a geological repository is the chemical durability, expressed as a glass dissolution rate. During the early stages of glass dissolution in near static conditions that represent a repository disposal environment, a gel layer resembling a membrane forms on the glass surface through which ions exchange between the glass and the leachant. The hydrated gel layer exhibits acid/base properties which are manifested as the pH dependence of the thickness and nature of the gel layer. The gel layer has been found to age into either clay mineral assemblages or zeolite mineral assemblages. The formation of one phase preferentially over the other has been experimentally related to changes in the pH of the leachant and related to the relative amounts of Al³⁺ and Fe³⁺ in a glass. The formation of clay mineral assemblages on the leached glass surface layers (lower pH and Fe³⁺ rich glasses) causes the dissolution rate to slow to a long-term steady state rate. The formation of zeolite mineral assemblages (higher pH and Al³⁺ rich glasses) on leached glass surface layers causes the dissolution rate to increase and return to the initial high forward rate. The return to the forward dissolution rate is undesirable for long-term performance of glass in a disposal environment. An investigation into the role of glass stoichiometry, in terms of the quasi-crystalline mineral species in a glass, has shown that the chemistry and structure in the parent glass appear to control the activated surface complexes that form in the leached layers, and these mineral complexes (some Fe³⁺ rich and some Al³⁺ rich) play a role in whether or not clays or zeolites are the dominant species formed on the leached glass surface. The chemistry and structure, in terms of Q distributions of the parent glass, are well represented by the atomic ratios of the glass forming components. Thus, glass dissolution modeling using simple

15. Complex variable boundary elements for fluid flow; Robni elementi kompleksne spremenljivke za pretok fluidov

Energy Technology Data Exchange (ETDEWEB)

Bizjak, D; Alujevic, A [Institut ' Jozef Stefan' , Ljubljana (Yugoslavia)

1988-07-01

The Complex Variable Boundary Element Method is a numerical method for solving two-dimensional problems of the Laplace or Poisson type. It is based on the theory of analytic functions. This paper summarizes the basic facts about the method and then applies it to stationary incompressible irrotational flow. Finally, a sample problem of flow through a channel with an abrupt area change is shown. (author)

16. Fractional derivatives of constant and variable orders applied to anomalous relaxation models in heat transfer problems

Directory of Open Access Journals (Sweden)

Yang Xiao-Jun

2017-01-01

Full Text Available In this paper, we address a class of fractional derivatives of constant and variable orders for the first time. Fractional-order relaxation equations of constant and variable orders, in the sense of the Caputo type, are modeled from a mathematical point of view. Comparative results for the anomalous relaxation obtained with the various fractional derivatives are also given. These derivatives are very efficient in describing the complex phenomena arising in heat transfer problems.
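For the constant-order case, the Caputo relaxation equation D^α u(t) = -λ u(t) with u(0) = 1 has the well-known closed-form solution u(t) = E_α(-λ t^α), where E_α is the one-parameter Mittag-Leffler function; for α = 1 this reduces to classical exponential decay, exp(-λt). The sketch below (a truncated-series evaluation, adequate only for moderate arguments; the parameter values are illustrative, not the paper's) shows the slower, heavy-tailed decay that smaller orders produce.

```python
import math

def mittag_leffler(alpha, z, n_terms=120):
    """One-parameter Mittag-Leffler function E_alpha(z) = sum z^k / Gamma(alpha*k + 1).
    Truncated series; fine for moderate |z|, not a production algorithm."""
    return sum(z ** k / math.gamma(alpha * k + 1) for k in range(n_terms))

def caputo_relaxation(alpha, lam, t):
    """Solution u(t) = E_alpha(-lam * t**alpha) of D^alpha u = -lam * u, u(0) = 1."""
    return mittag_leffler(alpha, -lam * t ** alpha)

# alpha = 1 recovers ordinary exponential relaxation.
print(caputo_relaxation(1.0, 1.0, 0.5), math.exp(-0.5))
# Smaller alpha: faster initial drop, but a much heavier tail (anomalous relaxation).
for t in (0.5, 1.0, 2.0):
    print(f"t={t}: alpha=1.0 -> {caputo_relaxation(1.0, 1.0, t):.4f}, "
          f"alpha=0.6 -> {caputo_relaxation(0.6, 1.0, t):.4f}")
```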

17. Chaos from simple models to complex systems

CERN Document Server

Cencini, Massimo; Vulpiani, Angelo

2010-01-01

Chaos: from simple models to complex systems aims to guide science and engineering students through chaos and nonlinear dynamics from classical examples to the most recent fields of research. The first part, intended for undergraduate and graduate students, is a gentle and self-contained introduction to the concepts and main tools for the characterization of deterministic chaotic systems, with emphasis on statistical approaches. The second part can be used as a reference by researchers as it focuses on more advanced topics, including the characterization of chaos with the tools of information theory.

18. A geometric model for magnetizable bodies with internal variables

Directory of Open Access Journals (Sweden)

Restuccia, L

2005-11-01

Full Text Available In a geometrical framework for thermo-elasticity of continua with internal variables we consider a model of magnetizable media previously discussed and investigated by Maugin. We assume as state variables the magnetization together with its space gradient, subjected to evolution equations depending on both internal and external magnetic fields. We calculate the entropy function and necessary conditions for its existence.

19. Variable-Structure Control of a Model Glider Airplane

Science.gov (United States)

Waszak, Martin R.; Anderson, Mark R.

2008-01-01

A variable-structure control system designed to enable a fuselage-heavy airplane to recover from spin has been demonstrated in a hand-launched, instrumented model glider airplane. Variable-structure control is a high-speed switching feedback control technique that has been developed for control of nonlinear dynamic systems.

20. Modelling carbon and nitrogen turnover in variably saturated soils

Science.gov (United States)

Batlle-Aguilar, J.; Brovelli, A.; Porporato, A.; Barry, D. A.

2009-04-01

Natural ecosystems provide services such as ameliorating the impacts of deleterious human activities on both surface and groundwater. For example, several studies have shown that a healthy riparian ecosystem can reduce the nutrient loading of agricultural wastewater, thus protecting the receiving surface water body. As a result, in order to develop better protection strategies and/or restore natural conditions, there is a growing interest in understanding ecosystem functioning, including feedbacks and nonlinearities. Biogeochemical transformations in soils are heavily influenced by microbial decomposition of soil organic matter. Carbon and nutrient cycles are in turn strongly sensitive to environmental conditions, and primarily to soil moisture and temperature. These two physical variables affect the reaction rates of almost all soil biogeochemical transformations, including microbial and fungal activity, nutrient uptake and release from plants, etc. Soil water saturation and temperature are not constant, but vary both in space and time, thus further complicating the picture. In order to interpret field experiments and elucidate the different mechanisms taking place, numerical tools are beneficial. In this work we developed a 3D numerical reactive-transport model as an aid in the investigation of the complex physical, chemical and biological interactions occurring in soils. The new code couples the USGS models (MODFLOW 2000-VSF, MT3DMS and PHREEQC) using an operator-splitting algorithm, and is a further development of the existing reactive/density-dependent flow model PHWAT. The model was tested using simplified test cases. Following verification, a process-based biogeochemical reaction network describing the turnover of carbon and nitrogen in soils was implemented. Using this tool, we investigated the coupled effect of moisture content and temperature fluctuations on nitrogen and organic matter cycling in the riparian zone, in order to help understand the relative

1. Thermodynamics of U(VI) complexation by succinate at variable temperatures

Energy Technology Data Exchange (ETDEWEB)

Rawat, Neetika [Radiochemistry Division, Bhabha Atomic Research Centre, Mumbai 400 085 (India); Tomar, B.S., E-mail: bstomar@barc.gov.in [Radiochemistry Division, Bhabha Atomic Research Centre, Mumbai 400 085 (India); Manchanda, V.K. [Radiochemistry Division, Bhabha Atomic Research Centre, Mumbai 400 085 (India)

2011-07-15

Research highlights: > lg β and ΔH_C for U(VI)-succinate determined at variable temperatures. > Increase in lg β with temperature well explained by the Born equation. > ΔS_C plays the dominant role in the variation of ΔG_C with temperature. > ΔH_C for U(VI)-succinate increases linearly with temperature. > ΔC_P of U(VI)-succinate is higher than that of the oxalate and malonate complexes. - Abstract: Complexation of U(VI) by succinate has been studied at temperatures in the range (298 to 338) K by potentiometry and isothermal titration calorimetry at constant ionic strength (1.0 M). The potentiometric titrations revealed the formation of a 1:1 uranyl succinate complex in the pH range 1.5 to 4.5. The stability constant of the uranyl succinate complex was found to increase with temperature. A similar trend was observed for the enthalpy of complex formation. However, the increase in entropy with temperature over-compensated the increase in enthalpy, thereby favouring the complexation reaction at higher temperatures. The linear increase of the enthalpy of complexation with temperature indicates constancy of the change in heat capacity during complexation. The temperature dependence of the stability constant data was well explained with the help of the Born equation for the electrostatic interaction between the metal ion and the ligand. The data have been compared with those for uranyl complexes with malonate and oxalate to study the effect of ligand size and hydrophobicity on the temperature dependence of the thermodynamic quantities.
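The quantities in this record are linked by two standard relations: ΔG_C = -RT ln(10) lg β and ΔG_C = ΔH_C - TΔS_C, so the entropy term can be recovered from a measured stability constant and a calorimetric enthalpy. The sketch below uses hypothetical values of lg β and ΔH_C (not the paper's data) to show how a positive, growing TΔS term can keep ΔG negative, and increasingly so, even when the complexation enthalpy is endothermic and rising, which is the over-compensation the abstract describes.

```python
import math

R = 8.314  # gas constant, J / (mol K)

def delta_g(lg_beta, T):
    """Gibbs energy of complexation from the stability constant: ΔG = -RT ln(10) lg β."""
    return -R * T * math.log(10) * lg_beta

def delta_s(lg_beta, dH, T):
    """Entropy of complexation from ΔG = ΔH - TΔS."""
    return (dH - delta_g(lg_beta, T)) / T

# Hypothetical data: lg β rising with T, endothermic ΔH rising linearly (constant ΔCp).
data = {298: (3.9, 6000.0), 318: (4.1, 9000.0), 338: (4.3, 12000.0)}  # T: (lg β, ΔH J/mol)
for T, (lgb, dH) in data.items():
    dG, dS = delta_g(lgb, T), delta_s(lgb, dH, T)
    print(f"T={T} K: ΔG={dG/1000:+.1f} kJ/mol, ΔH={dH/1000:+.1f} kJ/mol, "
          f"ΔS={dS:+.1f} J/(mol K), TΔS={T*dS/1000:+.1f} kJ/mol")
```

With these illustrative numbers, ΔG becomes more negative as T rises because TΔS grows faster than ΔH, mirroring the trend reported for the succinate complex.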

2. Synthesis and characterization of variable-architecture thermosensitive polymers for complexation with DNA.

Science.gov (United States)

Pennadam, Sivanand S; Ellis, James S; Lavigne, Matthieu D; Górecki, Dariusz C; Davies, Martyn C; Alexander, Cameron

2007-01-02

Copolymers of N-isopropylacrylamide with a fluorescent probe monomer were grafted to branched poly(ethyleneimine) to generate polycations that exhibited lower critical solution temperature (LCST) behavior. The structures of these polymers were confirmed by spectroscopy, and their phase transitions before and after complexation with DNA were followed using ultraviolet and fluorescence spectroscopy and light scattering. Interactions with DNA were investigated by ethidium bromide displacement assays, while temperature-induced changes in structure of both polymers and polymer-DNA complexes were evaluated by fluorescence spectroscopy, dynamic light scattering, laser Doppler anemometry, and atomic force microscopy (AFM) in water and buffer solutions. The results showed that changes in polymer architecture were mirrored by variations in the architectures of the complexes and that the overall effect of the temperature-mediated changes was dependent on the graft polymer architecture and content, as well as the solvent medium, concentrations, and stoichiometries of the complexes. Furthermore, AFM indicated subtle changes in polymer-DNA complexes at the microstructural level that could not be detected by light scattering techniques. Uniquely, variable-temperature aqueous-phase AFM was able to show that changes in the structures of these complexes were not uniform across a population of polymer-DNA condensates, with isolated complexes compacting above LCST even though the sample as a whole showed a tendency for aggregation of complexes above LCST over time. These results indicate that sample heterogeneities can be accentuated in responsive polymer-DNA complexes through LCST-mediated changes, a factor that is likely to be important in cellular uptake and nucleic acid transport.

3. Verification of models for ballistic movement time and endpoint variability.

Science.gov (United States)

Lin, Ray F; Drury, Colin G

2013-01-01

A hand control movement is composed of several ballistic movements. The time required to perform a ballistic movement and its endpoint variability are two important properties in developing movement models. The purpose of this study was to test potential models for predicting these two properties. Twelve participants conducted ballistic movements of specific amplitudes using a drawing tablet. The measured data on movement time and endpoint variability were then used to verify the models. The study was successful: Hoffmann and Gan's movement time model (Hoffmann, 1981; Gan and Hoffmann, 1988) predicted more than 90.7% of the data variance for 84 individual measurements. A new, theoretically developed ballistic movement variability model proved to be better than Howarth, Beggs, and Bowden's (1971) model, predicting on average 84.8% of stopping-variable error and 88.3% of aiming-variable error. These two validated models will help build solid theoretical movement models and evaluate input devices. This article provides better models for predicting end accuracy and movement time of ballistic movements, which are desirable in rapid aiming tasks such as keying in numbers on a smartphone. The models allow better design of aiming tasks, for example, button sizes on mobile phones for different user populations.
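Gan and Hoffmann's ballistic movement time model relates time to the square root of movement amplitude, MT = a + b√A. The sketch below fits that form by least squares; the amplitudes, coefficients, and noise level are invented for illustration and are not the study's data, and the fit quality reported is for this synthetic set only.

```python
import math
import random

def fit_sqrt_model(amplitudes, times):
    """Least-squares fit of MT = a + b*sqrt(A); returns (a, b, r_squared)."""
    xs = [math.sqrt(A) for A in amplitudes]
    n = len(xs)
    mx, my = sum(xs) / n, sum(times) / n
    b = sum((x - mx) * (t - my) for x, t in zip(xs, times)) / \
        sum((x - mx) ** 2 for x in xs)
    a = my - b * mx
    ss_res = sum((t - (a + b * x)) ** 2 for x, t in zip(xs, times))
    ss_tot = sum((t - my) ** 2 for t in times)
    return a, b, 1.0 - ss_res / ss_tot

# Synthetic ballistic movements: amplitudes in mm, times in ms with Gaussian noise.
random.seed(1)
amps = [10, 20, 40, 80, 160, 320] * 10
times = [50 + 12 * math.sqrt(A) + random.gauss(0, 8) for A in amps]
a, b, r2 = fit_sqrt_model(amps, times)
print(f"MT ~ {a:.1f} + {b:.2f}*sqrt(A), R^2 = {r2:.3f}")
```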

4. Interdecadal variability in a global coupled model

International Nuclear Information System (INIS)

Storch, J.S. von.

1994-01-01

Interdecadal variations are studied in a 325-year simulation performed by a coupled atmosphere-ocean general circulation model. The patterns obtained in this study may be considered characteristic patterns of interdecadal variations. 1. The atmosphere: Interdecadal variations have no preferred time scales, but reveal well-organized spatial structures. They appear as two modes, one related to variations of the tropical easterlies and the other to the Southern Hemisphere westerlies. Both have red spectra. The amplitude of the associated wind anomalies is largest in the upper troposphere. The associated temperature anomalies are in thermal-wind balance with the zonal winds and are out of phase between the troposphere and the lower stratosphere. 2. The Pacific Ocean: The dominant mode in the Pacific appears to be wind-driven in the midlatitudes and is related to air-sea interaction processes during one stage of the oscillation in the tropics. Anomalies of this mode propagate westward in the tropics and then northward (southwestward) in the North (South) Pacific on a time scale of about 10 to 20 years. (orig.)

5. Complex singlet extension of the standard model

International Nuclear Information System (INIS)

Barger, Vernon; McCaskey, Mathew; Langacker, Paul; Ramsey-Musolf, Michael; Shaughnessy, Gabe

2009-01-01

We analyze a simple extension of the standard model (SM) obtained by adding a complex singlet to the scalar sector (cxSM). We show that the cxSM can contain one or two viable cold dark matter candidates and analyze the conditions on the parameters of the scalar potential that yield the observed relic density. When the cxSM potential contains a global U(1) symmetry that is both softly and spontaneously broken, it contains both a viable dark matter candidate and the ingredients necessary for a strong first order electroweak phase transition as needed for electroweak baryogenesis. We also study the implications of the model for discovery of a Higgs boson at the Large Hadron Collider.

6. Extension of association models to complex chemicals

DEFF Research Database (Denmark)

Avlund, Ane Søgaard

Summary of “Extension of association models to complex chemicals”. Ph.D. thesis by Ane Søgaard Avlund The subject of this thesis is application of SAFT type equations of state (EoS). Accurate and predictive thermodynamic models are important in many industries including the petroleum industry......; CPA and sPC-SAFT. Phase equilibrium and monomer fraction calculations with sPC-SAFT for methanol are used in the thesis to illustrate the importance of parameter estimation when using SAFT. Different parameter sets give similar pure component vapor pressure and liquid density results, whereas very...... association is presented in the thesis, and compared to the corresponding lattice theory. The theory for intramolecular association is then applied in connection with sPC-SAFT for mixtures containing glycol ethers. Calculations with sPC-SAFT (without intramolecular association) are presented for comparison...

7. Using structural equation modeling to investigate relationships among ecological variables

Science.gov (United States)

Malaeb, Z.A.; Kevin, Summers J.; Pugesek, B.H.

2000-01-01

Structural equation modeling is an advanced multivariate statistical process with which a researcher can construct theoretical concepts, test their measurement reliability, hypothesize and test a theory about their relationships, take into account measurement errors, and consider both direct and indirect effects of variables on one another. Latent variables are theoretical concepts that unite phenomena under a single term, e.g., ecosystem health, environmental condition, and pollution (Bollen, 1989). Latent variables are not measured directly but can be expressed in terms of one or more directly measurable variables called indicators. For some researchers, defining, constructing, and examining the validity of latent variables may be an end in itself. For others, testing hypothesized relationships among latent variables may be of interest. We analyzed the correlation matrix of eleven environmental variables from the U.S. Environmental Protection Agency's (USEPA) Environmental Monitoring and Assessment Program for Estuaries (EMAP-E) using methods of structural equation modeling. We hypothesized and tested a conceptual model to characterize the interdependencies between four latent variables: sediment contamination, natural variability, biodiversity, and growth potential. In particular, we were interested in measuring the direct, indirect, and total effects of sediment contamination and natural variability on biodiversity and growth potential. The model fit the data well and accounted for 81% of the variability in biodiversity and 69% of the variability in growth potential. It revealed a positive total effect of natural variability on growth potential that otherwise would have been judged negative had we not considered indirect effects. That is, natural variability had a negative direct effect on growth potential of magnitude -0.3251 and a positive indirect effect mediated through biodiversity of magnitude 0.4509, yielding a net positive total effect of 0.1258.
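The decomposition used in this record, total effect = direct effect + indirect effects through mediators, is simple path arithmetic. The sketch below reproduces the reported numbers for natural variability → growth potential (direct -0.3251, indirect via biodiversity 0.4509); the helper functions are a generic illustration of path tracing, not the authors' software, and the indirect effect is entered as the single product the abstract reports rather than its component path coefficients.

```python
def path_product(path):
    """Product of coefficients along one indirect path."""
    prod = 1.0
    for coeff in path:
        prod *= coeff
    return prod

def total_effect(direct, indirect_paths):
    """Total effect = direct effect + sum over indirect paths of their products."""
    return direct + sum(path_product(p) for p in indirect_paths)

# Reported effects of natural variability on growth potential (EMAP-E analysis):
direct = -0.3251
indirect_paths = [(0.4509,)]  # effect mediated through biodiversity, given as a product
print(f"total effect = {total_effect(direct, indirect_paths):+.4f}")
```

The sign flip is the point: a negative direct path can be outweighed by a positive mediated path, so judging the relationship from the direct coefficient alone would give the wrong conclusion.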

8. Multiple Imputation of Predictor Variables Using Generalized Additive Models

NARCIS (Netherlands)

de Jong, Roel; van Buuren, Stef; Spiess, Martin

2016-01-01

The sensitivity of multiple imputation methods to deviations from their distributional assumptions is investigated using simulations, where the parameters of scientific interest are the coefficients of a linear regression model, and values in predictor variables are missing at random. The

9. Parametric Linear Hybrid Automata for Complex Environmental Systems Modeling

Directory of Open Access Journals (Sweden)

Samar Hayat Khan Tareen

2015-07-01

Full Text Available Environmental systems, whether they be weather patterns or predator-prey relationships, are dependent on a number of different variables, each directly or indirectly affecting the system at large. Since not all of these factors are known, these systems take on non-linear dynamics, making it difficult to accurately predict meaningful behavioral trends far into the future. However, such dynamics do not warrant complete ignorance of different efforts to understand and model close approximations of these systems. Towards this end, we have applied a logical modeling approach to model and analyze the behavioral trends and systematic trajectories that these systems exhibit without delving into their quantification. This approach, formalized by René Thomas for discrete logical modeling of Biological Regulatory Networks (BRNs) and further extended in our previous studies as parametric biological linear hybrid automata (Bio-LHA), has been previously employed for the analyses of different molecular regulatory interactions occurring across various cells and microbial species. As relationships between different interacting components of a system can be simplified as positive or negative influences, we can employ the Bio-LHA framework to represent different components of the environmental system as positive or negative feedbacks. In the present study, we highlight the benefits of hybrid (discrete/continuous) modeling, which leads to refinements among the forecasted behaviors in order to find out which ones are actually possible. We have taken two case studies, an interaction of three microbial species in a freshwater pond and a more complex atmospheric system, to show the applications of the Bio-LHA methodology for the timed hybrid modeling of environmental systems. Results show that the approach using the Bio-LHA is a viable method for behavioral modeling of complex environmental systems by finding timing constraints while keeping the complexity of the model

10. Higher-dimensional cosmological model with variable gravitational ...

We have studied five-dimensional homogeneous cosmological models with a variable gravitational constant and bulk viscosity in Lyra geometry. Exact solutions of the field equations have been obtained and the physical properties of the models are discussed. It has been observed that the results of the new models are well within the observational ...

11. Soil Temperature Variability in Complex Terrain measured using Fiber-Optic Distributed Temperature Sensing

Science.gov (United States)

Seyfried, M. S.; Link, T. E.

2013-12-01

Soil temperature (Ts) exerts critical environmental controls on hydrologic and biogeochemical processes. Rates of carbon cycling, mineral weathering, infiltration and snow melt are all influenced by Ts. Although broadly reflective of the climate, Ts is sensitive to local variations in cover (vegetative, litter, snow), topography (slope, aspect, position), and soil properties (texture, water content), resulting in a spatially and temporally complex distribution of Ts across the landscape. Understanding and quantifying the processes controlled by Ts requires an understanding of that distribution. Relatively few spatially distributed field Ts data exist, partly because traditional Ts data are point measurements. A relatively new technology, fiber-optic distributed temperature sensing (FO-DTS), has the potential to provide such data but has not been rigorously evaluated in the context of remote, long-term field research. We installed FO-DTS in a small experimental watershed within the Reynolds Creek Experimental Watershed (RCEW) in the Owyhee Mountains of SW Idaho. The watershed is characterized by complex terrain and a seasonal snow cover. Our objectives are to: (i) evaluate the applicability of fiber-optic DTS to remote field environments, and (ii) describe the spatial and temporal variability of soil temperature in complex terrain influenced by a variable snow cover. We installed fiber-optic cable at a depth of 10 cm in contrasting snow accumulation and topographic environments and monitored temperature along 750 m with DTS. We found that the DTS can provide accurate Ts data (±0.4°C) that resolves Ts changes of about 0.03°C at a spatial scale of 1 m, with occasional calibration, under conditions with an ambient temperature range of 50°C. We note that there are site-specific limitations related to cable installation and destruction of the cable by local fauna. The FO-DTS provides unique insight into the spatial and temporal variability of Ts in a landscape. We found strong seasonal

12. A variable-order fractal derivative model for anomalous diffusion

Directory of Open Access Journals (Sweden)

Liu Xiaoting

2017-01-01

Full Text Available This paper develops a variable-order fractal derivative model for anomalous diffusion. Previous investigations have indicated that the medium structure, fractal dimension or porosity may change with time or space during solute transport processes, resulting in time- or space-dependent anomalous diffusion phenomena. This study therefore introduces a variable-order fractal derivative diffusion model, in which the index of the fractal derivative depends on the temporal moment or spatial position, to characterize the above-mentioned anomalous diffusion (or transport) processes. Compared with other models, the main advantages of the new model in description and physical explanation are explored by numerical simulation. Further discussion of differences between the new model and the variable-order fractional derivative model, such as computational efficiency, diffusion behavior, and heavy-tail phenomena, is also offered.

13. A Non-Gaussian Spatial Generalized Linear Latent Variable Model

KAUST Repository

Irincheeva, Irina

2012-08-03

We consider a spatial generalized linear latent variable model with and without normality distributional assumption on the latent variables. When the latent variables are assumed to be multivariate normal, we apply a Laplace approximation. To relax the assumption of marginal normality in favor of a mixture of normals, we construct a multivariate density with Gaussian spatial dependence and given multivariate margins. We use the pairwise likelihood to estimate the corresponding spatial generalized linear latent variable model. The properties of the resulting estimators are explored by simulations. In the analysis of an air pollution data set the proposed methodology uncovers weather conditions to be a more important source of variability than air pollution in explaining all the causes of non-accidental mortality excluding accidents. © 2012 International Biometric Society.

14. A Non-Gaussian Spatial Generalized Linear Latent Variable Model

KAUST Repository

Irincheeva, Irina; Cantoni, Eva; Genton, Marc G.

2012-01-01

We consider a spatial generalized linear latent variable model with and without normality distributional assumption on the latent variables. When the latent variables are assumed to be multivariate normal, we apply a Laplace approximation. To relax the assumption of marginal normality in favor of a mixture of normals, we construct a multivariate density with Gaussian spatial dependence and given multivariate margins. We use the pairwise likelihood to estimate the corresponding spatial generalized linear latent variable model. The properties of the resulting estimators are explored by simulations. In the analysis of an air pollution data set the proposed methodology uncovers weather conditions to be a more important source of variability than air pollution in explaining all the causes of non-accidental mortality excluding accidents. © 2012 International Biometric Society.

15. The Model of Complex Structure of Quark

Science.gov (United States)

Liu, Rongwu

2017-09-01

In Quantum Chromodynamics, the quark is known as a kind of point-like fundamental particle which carries mass, charge, color, and flavor; strong interaction takes place between quarks by means of exchanging intermediate particles, gluons. An important consequence of this theory is that strong interaction is a short-range force, and it has the features of "asymptotic freedom" and "quark confinement". In order to reveal the nature of strong interaction, the "bag" model of vacuum and the "string" model of string theory were proposed in the context of quantum mechanics, but neither of them can provide a clear interaction mechanism. This article formulates a new mechanism by proposing a model of the complex structure of the quark, which can be outlined as follows: (1) The quark (as well as the electron, etc.) is a complex structure, composed of a fundamental particle (fundamental matter with mass and electricity) and a fundamental volume field (fundamental matter with flavor and color) which exists in the form of a limited volume; the fundamental particle lies in the center of the fundamental volume field and forms the "nucleus" of the quark. (2) Like the static electric force, the color field force between quarks has a classical form: it is proportional to the square of the color quantity carried by each color field, and inversely proportional to the area of the cross section of the overlapping color fields along the force direction; it has the properties of overlap, saturation, non-centrality, and constancy. (3) Any volume field undergoes deformation when interacting with another volume field; the deformation force follows Hooke's law. (4) The phenomena of "asymptotic freedom" and "quark confinement" are the result of the color field force and the deformation force.

16. VAM2D: Variably saturated analysis model in two dimensions

International Nuclear Information System (INIS)

Huyakorn, P.S.; Kool, J.B.; Wu, Y.S.

1991-10-01

This report documents a two-dimensional finite element model, VAM2D, developed to simulate water flow and solute transport in variably saturated porous media. Both flow and transport simulation can be handled concurrently or sequentially. The formulation of the governing equations and the numerical procedures used in the code are presented. The flow equation is approximated using the Galerkin finite element method. Nonlinear soil moisture characteristics and atmospheric boundary conditions (e.g., infiltration, evaporation and seepage face), are treated using Picard and Newton-Raphson iterations. Hysteresis effects and anisotropy in the unsaturated hydraulic conductivity can be taken into account if needed. The contaminant transport simulation can account for advection, hydrodynamic dispersion, linear equilibrium sorption, and first-order degradation. Transport of a single component or a multi-component decay chain can be handled. The transport equation is approximated using an upstream weighted residual method. Several test problems are presented to verify the code and demonstrate its utility. These problems range from simple one-dimensional to complex two-dimensional and axisymmetric problems. This document has been produced as a user's manual. It contains detailed information on the code structure along with instructions for input data preparation and sample input and printed output for selected test problems. Also included are instructions for job set up and restarting procedures. 44 refs., 54 figs., 24 tabs

17. Representing general theoretical concepts in structural equation models: The role of composite variables

Science.gov (United States)

Grace, J.B.; Bollen, K.A.

2008-01-01

Structural equation modeling (SEM) holds the promise of providing natural scientists the capacity to evaluate complex multivariate hypotheses about ecological systems. Building on its predecessors, path analysis and factor analysis, SEM allows for the incorporation of both observed and unobserved (latent) variables into theoretically-based probabilistic models. In this paper we discuss the interface between theory and data in SEM and the use of an additional variable type, the composite. In simple terms, composite variables specify the influences of collections of other variables and can be helpful in modeling heterogeneous concepts of the sort commonly of interest to ecologists. While long recognized as a potentially important element of SEM, composite variables have received very limited use, in part because of a lack of theoretical consideration, but also because of difficulties that arise in parameter estimation when using conventional solution procedures. In this paper we present a framework for discussing composites and demonstrate how the use of partially-reduced-form models can help to overcome some of the parameter estimation and evaluation problems associated with models containing composites. Diagnostic procedures for evaluating the most appropriate and effective use of composites are illustrated with an example from the ecological literature. It is argued that an ability to incorporate composite variables into structural equation models may be particularly valuable in the study of natural systems, where concepts are frequently multifaceted and the influence of suites of variables is often of interest. © Springer Science+Business Media, LLC 2007.

18. Preliminary Multi-Variable Cost Model for Space Telescopes

Science.gov (United States)

Stahl, H. Philip; Hendrichs, Todd

2010-01-01

Parametric cost models are routinely used to plan missions, compare concepts and justify technology investments. This paper reviews the methodology used to develop space telescope cost models; summarizes recently published single variable models; and presents preliminary results for two and three variable cost models. Some of the findings are that increasing mass reduces cost; it costs less per square meter of collecting aperture to build a large telescope than a small telescope; and technology development as a function of time reduces cost at the rate of 50% per 17 years.
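The scaling relations summarized in this abstract can be sketched as a toy multi-variable parametric cost model. The functional form and every coefficient below are hypothetical placeholders for illustration, not the paper's fitted parameters:

```python
import math

def telescope_cost(aperture_m, mass_kg, years_since_ref,
                   k=1.0, a=1.7, b=-0.2, halving_years=17.0):
    """Toy multi-variable cost model (all coefficients hypothetical).

    Cost grows sub-quadratically with aperture (a < 2, so cost per square
    meter of collecting area falls for larger telescopes), decreases with
    added mass (b < 0), and halves every `halving_years` of technology
    advance, mimicking the qualitative findings in the abstract."""
    return (k * aperture_m ** a * mass_kg ** b
            * 0.5 ** (years_since_ref / halving_years))
```

With these illustrative exponents, doubling the aperture raises total cost but lowers cost per unit collecting area, and waiting 17 years halves the cost of an otherwise identical design.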

19. Bayesian approach to errors-in-variables in regression models

Science.gov (United States)

2017-05-01

In many applications and experiments, data sets are contaminated with error or mismeasured covariates. When at least one of the covariates in a model is measured with error, an Errors-in-Variables (EIV) model can be used. Measurement error, when not corrected, causes misleading statistical inference and analysis. Our goal is therefore to examine the relationship between the outcome variable and the unobserved exposure variable, given the observed mismeasured surrogate, by applying a Bayesian formulation to the EIV model. We extend the flexible parametric method proposed by Hossain and Gustafson (2009) to another nonlinear regression model, the Poisson regression model. We then illustrate the application of this approach via a simulation study using Markov chain Monte Carlo sampling methods.
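A minimal simulation of the data-generating side of such a Poisson EIV setup can make the structure concrete. The coefficient values, measurement-error scale, and sampling scheme below are illustrative assumptions, not taken from the study:

```python
import math
import random

def simulate_eiv_poisson(n=1000, beta0=0.2, beta1=0.5, sigma_u=0.8, seed=1):
    """Generate (surrogate, count) pairs under a Poisson errors-in-variables
    model: the true covariate x is latent, and only the mismeasured
    surrogate w = x + u is observed, while the outcome depends on x."""
    rng = random.Random(seed)
    data = []
    for _ in range(n):
        x = rng.gauss(0.0, 1.0)                # latent true exposure
        w = x + rng.gauss(0.0, sigma_u)        # observed mismeasured surrogate
        lam = math.exp(beta0 + beta1 * x)      # Poisson mean uses the TRUE x
        # Knuth's algorithm for Poisson sampling (adequate for small lam)
        limit, k, p = math.exp(-lam), 0, 1.0
        while True:
            p *= rng.random()
            if p <= limit:
                break
            k += 1
        data.append((w, k))
    return data
```

Regressing the counts naively on `w` would attenuate the slope toward zero, which is the bias a Bayesian EIV formulation corrects by modeling the latent exposure.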

20. A model for AGN variability on multiple time-scales

Science.gov (United States)

Sartori, Lia F.; Schawinski, Kevin; Trakhtenbrot, Benny; Caplar, Neven; Treister, Ezequiel; Koss, Michael J.; Urry, C. Megan; Zhang, C. E.

2018-05-01

We present a framework to link and describe active galactic nuclei (AGN) variability on a wide range of time-scales, from days to billions of years. In particular, we concentrate on the AGN variability features related to changes in black hole fuelling and accretion rate. In our framework, the variability features observed in different AGN at different time-scales may be explained as realisations of the same underlying statistical properties. In this context, we propose a model to simulate the evolution of AGN light curves with time based on the probability density function (PDF) and power spectral density (PSD) of the Eddington ratio (L/LEdd) distribution. Motivated by general galaxy population properties, we propose that the PDF may be inspired by the L/LEdd distribution function (ERDF), and that a single (or limited number of) ERDF+PSD set may explain all observed variability features. After outlining the framework and the model, we compile a set of variability measurements in terms of structure function (SF) and magnitude difference. We then combine the variability measurements on a SF plot ranging from days to Gyr. The proposed framework enables constraints on the underlying PSD and the ability to link AGN variability on different time-scales, therefore providing new insights into AGN variability and black hole growth phenomena.
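One widely used stochastic process for AGN optical variability on intermediate time-scales is the damped random walk; the sketch below simulates such a light curve with illustrative parameter values. This is a generic example of PSD-driven variability modeling, not the paper's ERDF+PSD machinery:

```python
import math
import random

def drw_light_curve(n_steps=2000, tau=100.0, sf_inf=0.3,
                    mean_mag=19.0, dt=1.0, seed=7):
    """Damped random walk (discrete Ornstein-Uhlenbeck process):
    mean-reverting noise with relaxation time tau (days) whose structure
    function saturates at sf_inf; parameter values are illustrative."""
    rng = random.Random(seed)
    a = math.exp(-dt / tau)              # per-step mean-reversion factor
    sd = sf_inf / math.sqrt(2.0)         # stationary standard deviation
    mag, curve = mean_mag, []
    for _ in range(n_steps):
        # innovation scale chosen so the process stays stationary with sd
        mag = (mean_mag + a * (mag - mean_mag)
               + rng.gauss(0.0, sd * math.sqrt(1.0 - a * a)))
        curve.append(mag)
    return curve
```

On lags much shorter than `tau` the simulated structure function rises roughly as a random walk; on longer lags it flattens, the kind of multi-time-scale behavior the framework above ties to a single underlying PSD.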

1. Does model performance improve with complexity? A case study with three hydrological models

Science.gov (United States)

Orth, Rene; Staudinger, Maria; Seneviratne, Sonia I.; Seibert, Jan; Zappa, Massimiliano

2015-04-01

In recent decades considerable progress has been made in climate model development. Following the massive increase in computational power, models have become more sophisticated. At the same time, simple conceptual models have also advanced. In this study we validate and compare three hydrological models of different complexity to investigate whether their performance varies accordingly. For this purpose we use runoff and also soil moisture measurements, which allow a truly independent validation, from several sites across Switzerland. The models are calibrated in similar ways with the same runoff data. Our results show that the more complex models HBV and PREVAH outperform the simple water balance model (SWBM) for runoff but not for soil moisture. Furthermore, the most sophisticated model, PREVAH, shows an added value compared to HBV only for soil moisture. Focusing on extreme events, we find generally improved performance of the SWBM during drought conditions and degraded agreement with observations during wet extremes. For the more complex models we find the opposite behavior, probably because they were primarily developed for prediction of runoff extremes. As expected given their complexity, HBV and PREVAH have more problems with over-fitting. All models show a tendency towards better performance at lower altitudes as opposed to (pre-)alpine sites. The results vary considerably across the investigated sites. In contrast, the different metrics we consider to estimate the agreement between models and observations lead to similar conclusions, indicating that the performance of the considered models is similar at different time scales as well as for anomalies and long-term means. We conclude that added complexity does not necessarily lead to improved performance of hydrological models, and that performance can vary greatly depending on the considered hydrological variable (e.g. runoff vs. soil moisture) or hydrological conditions (floods vs. droughts).

2. Clinical Complexity in Medicine: A Measurement Model of Task and Patient Complexity.

Science.gov (United States)

Islam, R; Weir, C; Del Fiol, G

2016-01-01

Complexity in medicine needs to be reduced to simple components in a way that is comprehensible to researchers and clinicians. Few studies in the current literature propose a measurement model that addresses both task and patient complexity in medicine. The objective of this paper is to develop an integrated approach to understand and measure clinical complexity by incorporating both task and patient complexity components focusing on the infectious disease domain. The measurement model was adapted and modified for the healthcare domain. Three clinical infectious disease teams were observed, audio-recorded and transcribed. Each team included an infectious diseases expert, one infectious diseases fellow, one physician assistant and one pharmacy resident fellow. The transcripts were parsed and the authors independently coded complexity attributes. This baseline measurement model of clinical complexity was modified in an initial set of coding processes and further validated in a consensus-based iterative process that included several meetings and email discussions by three clinical experts from diverse backgrounds from the Department of Biomedical Informatics at the University of Utah. Inter-rater reliability was calculated using Cohen's kappa. The proposed clinical complexity model consists of two separate components. The first is a clinical task complexity model with 13 clinical complexity-contributing factors and 7 dimensions. The second is the patient complexity model with 11 complexity-contributing factors and 5 dimensions. The measurement model for complexity encompassing both task and patient complexity will be a valuable resource for future researchers and industry to measure and understand complexity in healthcare.
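Cohen's kappa, used above for inter-rater reliability, can be computed directly from the two raters' code assignments. A minimal, self-contained sketch:

```python
from collections import Counter

def cohens_kappa(rater_a, rater_b):
    """Cohen's kappa: chance-corrected agreement between two raters
    who each assigned one categorical code per item."""
    assert len(rater_a) == len(rater_b) and rater_a
    n = len(rater_a)
    # observed proportion of items on which the raters agree
    observed = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    # agreement expected by chance from each rater's marginal code frequencies
    ca, cb = Counter(rater_a), Counter(rater_b)
    expected = sum(ca[c] * cb.get(c, 0) for c in ca) / (n * n)
    return (observed - expected) / (1.0 - expected)
```

Perfect agreement yields kappa = 1, agreement no better than chance yields 0, and values in between are conventionally read as slight to almost-perfect agreement bands.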

3. Reducing Spatial Data Complexity for Classification Models

International Nuclear Information System (INIS)

Ruta, Dymitr; Gabrys, Bogdan

2007-01-01

Intelligent data analytics is gradually becoming a day-to-day reality of today's businesses. However, despite rapidly increasing storage and computational power, current state-of-the-art predictive models still cannot handle massive and noisy corporate data warehouses. What is more, adaptive and real-time operational environments require multiple models to be frequently retrained, which further hinders their use. Various data reduction techniques, ranging from data sampling up to density retention models, attempt to address this challenge by capturing a summarised data structure, yet they either do not account for labelled data or degrade the classification performance of the model trained on the condensed dataset. Our response is to propose a new general framework for reducing the complexity of labelled data by means of controlled spatial redistribution of class densities in the input space. On the example of the Parzen Labelled Data Compressor (PLDC) we demonstrate a simulatory data condensation process directly inspired by electrostatic field interaction, where the data are moved and merged following the attracting and repelling interactions with the other labelled data. The process is controlled by the class density function built on the original data, which acts as a class-sensitive potential field ensuring preservation of the original class density distributions, yet allowing data to rearrange and merge, joining together their soft class partitions. As a result we achieved a model that reduces labelled datasets much further than competitive approaches, yet with maximum retention of the original class densities and hence the classification performance. PLDC leaves the reduced dataset with soft accumulative class weights allowing for efficient online updates and, as shown in a series of experiments, if coupled with the Parzen Density Classifier (PDC) it significantly outperforms competitive data condensation methods in terms of classification performance at the

5. Uncertainty and validation. Effect of model complexity on uncertainty estimates

International Nuclear Information System (INIS)

Elert, M.

1996-09-01

In the Model Complexity subgroup of BIOMOVS II, models of varying complexity have been applied to the problem of downward transport of radionuclides in soils. A scenario describing a case of surface contamination of a pasture soil was defined. Three different radionuclides with different environmental behavior and radioactive half-lives were considered: Cs-137, Sr-90 and I-129. The intention was to give a detailed specification of the parameters required by different kinds of model, together with reasonable values for the parameter uncertainty. A total of seven modelling teams participated in the study using 13 different models. Four of the modelling groups performed uncertainty calculations using nine different modelling approaches. The models used range in complexity from analytical solutions of a 2-box model using annual average data to numerical models coupling hydrology and transport using data varying on a daily basis. The complex models needed to consider all aspects of radionuclide transport in a soil with a variable hydrology are often impractical to use in safety assessments. Instead simpler models, often box models, are preferred. The comparison of predictions made with the complex models and the simple models for this scenario shows that the predictions in many cases are very similar, e.g. in the predictions of the evolution of the root zone concentration. However, in other cases differences of many orders of magnitude can appear. One example is the prediction of the flux to the groundwater of radionuclides being transported through the soil column. Some issues that have come into focus in this study: There are large differences in the predicted soil hydrology and, as a consequence, also in the radionuclide transport, which suggests that there are large uncertainties in the calculation of effective precipitation and evapotranspiration. The approach used for modelling the water transport in the root zone has an impact on the predictions of the decline in root
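The simplest end of the complexity range discussed here, a 2-box model with radioactive decay, has a closed-form solution. In the sketch below, the transfer rate and half-life are illustrative values (the half-life is roughly that of Cs-137), not the BIOMOVS II parameterization:

```python
import math

def two_box_soil(t_years, k12=0.05, half_life=30.0, a0=1.0):
    """Minimal 2-box soil model: box 1 (root zone) transfers activity to
    box 2 (deeper soil) at rate k12 (1/yr); both boxes decay radioactively.
    Rates are illustrative. Returns (root_zone, deep_soil) activity at t."""
    lam = math.log(2.0) / half_life
    # closed-form solution of the coupled linear ODEs
    a1 = a0 * math.exp(-(k12 + lam) * t_years)
    a2 = a0 * math.exp(-lam * t_years) * (1.0 - math.exp(-k12 * t_years))
    return a1, a2
```

A useful sanity check on any such box model is mass balance: the total activity `a1 + a2` must equal the initial inventory decayed by `exp(-lam * t)`, since transfer between boxes conserves activity.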

6. Loss given default models incorporating macroeconomic variables for credit cards

OpenAIRE

Crook, J.; Bellotti, T.

2012-01-01

Based on UK data for major retail credit cards, we build several models of Loss Given Default based on account-level data, including Tobit, a decision tree model, and Beta and fractional logit transformations. We find that Ordinary Least Squares models with macroeconomic variables perform best for forecasting Loss Given Default at the account and portfolio levels on independent hold-out data sets. The inclusion of macroeconomic conditions in the model is important, since it provides a means to m...

7. Interacting ghost dark energy models with variable G and Λ

Science.gov (United States)

Sadeghi, J.; Khurshudyan, M.; Movsisyan, A.; Farahani, H.

2013-12-01

In this paper we consider several phenomenological models of variable Λ. A model of a flat Universe with variable Λ and G is adopted. It is well known that varying G and Λ gives rise to modified field equations and modified conservation laws, which has led to many different manipulations and assumptions in the literature. We consider a two-component fluid whose parameters enter Λ. The interaction between fluids with energy densities ρ1 and ρ2 is assumed to be Q = 3Hb(ρ1+ρ2). We numerically analyze important cosmological parameters such as the EoS parameter of the composed fluid and the deceleration parameter q of the model.
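Dividing the conservation equations by H turns an interaction of the form Q = 3Hb(ρ1+ρ2) into a simple source term when integrating in e-folds N = ln a. The rough numerical sketch below is generic; the initial densities, equation-of-state values, and coupling strength b are illustrative assumptions, not the paper's choices:

```python
def evolve_interacting_fluids(n_steps=1000, dN=0.01, w1=-0.9, w2=0.0,
                              b=0.02, rho1=0.7, rho2=0.3):
    """Forward-Euler evolution of two interacting fluids in e-folds N = ln a.
    Dividing drho/dt + 3H(1+w)rho = -/+ Q by H removes H entirely, since
    Q/H = 3b(rho1 + rho2). All parameter values are illustrative."""
    for _ in range(n_steps):
        q = 3.0 * b * (rho1 + rho2)                    # Q / H
        rho1 += (-3.0 * (1.0 + w1) * rho1 - q) * dN    # donor fluid
        rho2 += (-3.0 * (1.0 + w2) * rho2 + q) * dN    # recipient fluid
    return rho1, rho2
```

With the donor fluid losing energy both to expansion and to the coupling, its density decreases monotonically, while the pressureless recipient is sustained above zero by the source term.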

8. Variable selection for mixture and promotion time cure rate models.

Science.gov (United States)

Masud, Abdullah; Tu, Wanzhu; Yu, Zhangsheng

2016-11-16

Failure-time data with cured patients are common in clinical studies. Data from these studies are typically analyzed with cure rate models. Variable selection methods have not been well developed for cure rate models. In this research, we propose two least absolute shrinkage and selection operator (LASSO)-based methods for variable selection in mixture and promotion time cure models with parametric or nonparametric baseline hazards. We conduct an extensive simulation study to assess the operating characteristics of the proposed methods. We illustrate the use of the methods using data from a study of childhood wheezing. © The Author(s) 2016.
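At the core of LASSO-type selection is the soft-thresholding operator, which is what sets small coefficients exactly to zero and thereby deselects variables. A minimal generic sketch (not the authors' cure-model algorithm):

```python
def soft_threshold(z, t):
    """Soft-thresholding operator, the proximal map of the L1 penalty:
    shrinks z toward zero by t and returns exactly 0 when |z| <= t,
    which is how LASSO-type methods drop a covariate from the model."""
    if z > t:
        return z - t
    if z < -t:
        return z + t
    return 0.0
```

In coordinate-descent implementations of the LASSO, each coefficient update is a soft-threshold of its unpenalized estimate, so covariates whose signal falls below the penalty level are eliminated rather than merely shrunk.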

9. A novel methodology improves reservoir characterization models using geologic fuzzy variables

Energy Technology Data Exchange (ETDEWEB)

Soto B, Rodolfo [DIGITOIL, Maracaibo (Venezuela); Soto O, David A. [Texas A and M University, College Station, TX (United States)

2004-07-01

One of the research projects carried out in the Cusiana field to explain its rapid decline during recent years aimed to obtain better permeability models. The reservoir of this field has a complex layered system that is not easy to model using conventional methods. The new technique included the development of porosity and permeability maps from cored wells following the trend of the sand depositions for each facies or layer, according to the sedimentary facies and depositional system models. Then, fuzzy logic was used to reproduce those maps in three dimensions as geologic fuzzy variables. After multivariate statistical and factor analyses, we found independence and a good correlation coefficient between the geologic fuzzy variables and core permeability and porosity. This means the geologic fuzzy variable can explain the fabric, grain size, and pore geometry of the reservoir rock throughout the field. Finally, we developed a neural network permeability model using porosity, gamma ray, and the geologic fuzzy variable as input variables. This model has a cross-correlation coefficient of 0.873 and an average absolute error of 33%, compared with the previous model's correlation coefficient of 0.511 and absolute error greater than 250%. We tested different methodologies, and this new one proved to be a promising way to obtain better permeability models. The models have had a high impact in explaining well performance and workovers, and in reservoir simulation models. (author)

10. Complexity analyses show two distinct types of nonlinear dynamics in short heart period variability recordings

Science.gov (United States)

Porta, Alberto; Bari, Vlasta; Marchi, Andrea; De Maria, Beatrice; Cysarz, Dirk; Van Leeuwen, Peter; Takahashi, Anielle C. M.; Catai, Aparecida M.; Gnecchi-Ruscone, Tomaso

2015-01-01

Two diverse complexity metrics quantifying time irreversibility and local prediction, in connection with a surrogate data approach, were utilized to detect nonlinear dynamics in short heart period (HP) variability series recorded in fetuses, as a function of the gestational period, and in healthy humans, as a function of the magnitude of the orthostatic challenge. The metrics indicated the presence of two distinct types of nonlinear HP dynamics characterized by diverse ranges of time scales. These findings stress the need to render more specific the analysis of nonlinear components of HP dynamics by accounting for different temporal scales. PMID:25806002
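One simple time-irreversibility statistic, given here as a generic example rather than the paper's specific metric, is the fraction of negative successive differences in the heart-period series; a time-reversible series gives a value near 0.5:

```python
def irreversibility_fraction(series):
    """Fraction of nonzero successive differences that are negative.
    A time-reversible series yields ~0.5; asymmetric accelerations and
    decelerations (a nonlinear signature) shift the value away from 0.5."""
    diffs = [b - a for a, b in zip(series, series[1:])]
    nonzero = [d for d in diffs if d != 0]
    return sum(d < 0 for d in nonzero) / len(nonzero)
```

In a surrogate-data test like the one described above, the statistic is compared against its distribution over phase-randomized surrogates, which destroy nonlinear structure while preserving the linear correlations.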

11. The complex variable boundary element method: Applications in determining approximative boundaries

Science.gov (United States)

1984-01-01

The complex variable boundary element method (CVBEM) is used to determine approximation functions for boundary value problems of the Laplace equation such as occurs in potential theory. By determining an approximative boundary upon which the CVBEM approximator matches the desired constant (level curves) boundary conditions, the CVBEM is found to provide the exact solution throughout the interior of the transformed problem domain. Thus, the acceptability of the CVBEM approximation is determined by the closeness-of-fit of the approximative boundary to the study problem boundary. © 1984.
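The CVBEM rests on the fact that the real part of any analytic function satisfies the Laplace equation, so approximators built from analytic functions are exactly harmonic in the interior. The sketch below verifies that property numerically with a finite-difference Laplacian; it is a didactic check, not the CVBEM algorithm itself:

```python
import cmath

def fd_laplacian(u, x, y, h=1e-3):
    """Five-point finite-difference Laplacian of u(x, y), used here only
    to check harmonicity numerically."""
    return (u(x + h, y) + u(x - h, y) + u(x, y + h) + u(x, y - h)
            - 4.0 * u(x, y)) / (h * h)

def real_part_of_analytic(f):
    """Wrap an analytic complex function f(z) as the harmonic real
    field u(x, y) = Re f(x + iy)."""
    return lambda x, y: f(complex(x, y)).real
```

For example, `real_part_of_analytic(cmath.exp)` gives u(x, y) = e^x cos y, whose Laplacian vanishes identically; the finite-difference value differs from zero only by discretization error.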

12. a modified intervention model for gross domestic product variable

African Journals Online (AJOL)

observations on a variable that have been measured at ... assumption that successive values in the data file ... these interventions, one may try to evaluate the effect of ... generalized series by comparing the distinct periods. A ... the process of checking for adequacy of the model based .... As a result, the model's forecast will.

13. Simple model for crop photosynthesis in terms of weather variables ...

African Journals Online (AJOL)

A theoretical mathematical model for describing crop photosynthetic rate in terms of the weather variables and crop characteristics is proposed. The model utilizes a series of efficiency parameters, each of which reflect the fraction of possible photosynthetic rate permitted by the different weather elements or crop architecture.

14. Model for expressing leaf photosynthesis in terms of weather variables

African Journals Online (AJOL)

A theoretical mathematical model for describing photosynthesis in individual leaves in terms of weather variables is proposed. The model utilizes a series of efficiency parameters, each of which reflect the fraction of potential photosynthetic rate permitted by the different environmental elements. These parameters are useful ...

15. Transfer of skill engendered by complex task training under conditions of variable priority.

Science.gov (United States)

Boot, Walter R; Basak, Chandramallika; Erickson, Kirk I; Neider, Mark; Simons, Daniel J; Fabiani, Monica; Gratton, Gabriele; Voss, Michelle W; Prakash, Ruchika; Lee, HyunKyu; Low, Kathy A; Kramer, Arthur F

2010-11-01

We explored the theoretical underpinnings of a commonly used training strategy by examining issues of training and transfer of skill in the context of a complex video game (Space Fortress, Donchin, 1989). Participants trained using one of two training regimens: Full Emphasis Training (FET) or Variable Priority Training (VPT). Transfer of training was assessed with a large battery of cognitive and psychomotor tasks ranging from basic laboratory paradigms measuring reasoning, memory, and attention to complex real-world simulations. Consistent with previous studies, VPT accelerated learning and maximized task mastery. However, the hypothesis that VPT would result in broader transfer of training received limited support. Rather, transfer was most evident in tasks that were most similar to the Space Fortress game itself. Results are discussed in terms of potential limitations of the VPT approach. Copyright © 2010 Elsevier B.V. All rights reserved.

16. Spatiotemporal Variability of Turbulence Kinetic Energy Budgets in the Convective Boundary Layer over Both Simple and Complex Terrain

Energy Technology Data Exchange (ETDEWEB)

Rai, Raj K. [Pacific Northwest National Laboratory, Richland, Washington; Berg, Larry K. [Pacific Northwest National Laboratory, Richland, Washington; Pekour, Mikhail [Pacific Northwest National Laboratory, Richland, Washington; Shaw, William J. [Pacific Northwest National Laboratory, Richland, Washington; Kosovic, Branko [National Center for Atmospheric Research, Boulder, Colorado; Mirocha, Jeffrey D. [Lawrence Livermore National Laboratory, Livermore, California; Ennis, Brandon L. [Sandia National Laboratories, Albuquerque, New Mexico

2017-12-01

The assumption of sub-grid scale (SGS) horizontal homogeneity within a model grid cell, which forms the basis of SGS turbulence closures used by mesoscale models, becomes increasingly tenuous as grid spacing is reduced to a few kilometers or less, such as in many emerging high-resolution applications. Herein, we use the turbulence kinetic energy (TKE) budget equation to study the spatio-temporal variability in two types of terrain: complex (Columbia Basin Wind Energy Study [CBWES] site, north-eastern Oregon) and flat (Scaled Wind Farm Technologies [SWiFT] site, west Texas), using the Weather Research and Forecasting (WRF) model. In each case six nested domains (three each for mesoscale and large-eddy simulation [LES]) are used to downscale the horizontal grid spacing from 10 km to 10 m within the WRF model framework. The model output was used to calculate the TKE budget terms in vertical and horizontal planes, as well as averages over the grid cells contained in each of the four quadrants (a quarter area) of the LES domain. The budget terms calculated along the planes and the mean profiles of the budget terms show larger spatial variability at the CBWES site than at the SWiFT site. The contribution of the horizontal derivative of the shear-production term to the total shear production was found to be 45% and 15% at the CBWES and SWiFT sites, respectively, indicating that the horizontal derivatives in the budget equation should not be ignored in mesoscale model parameterizations, especially for cases with complex terrain at scales below 10 km.

17. Modeling the Structure and Complexity of Engineering Routine Design Problems

NARCIS (Netherlands)

Jauregui Becker, Juan Manuel; Wits, Wessel Willems; van Houten, Frederikus J.A.M.

2011-01-01

This paper proposes a model to structure routine design problems as well as a model of their design complexity. The idea is that having a proper model of the structure of such problems enables understanding their complexity and, likewise, a proper understanding of their complexity enables the development

18. Efficient Business Service Consumption by Customization with Variability Modelling

Directory of Open Access Journals (Sweden)

Michael Stollberg

2010-07-01

Full Text Available The establishment of service orientation in industry determines the need for efficient engineering technologies that properly support the whole life cycle of service provision and consumption. A central challenge is adequate support for the efficient employment of complex services in their individual application contexts. This becomes particularly important for large-scale enterprise technologies where generic services are designed for reuse in several business scenarios. In this article we complement our work on Service Variability Modelling presented in a previous publication, where we described an approach for customizing services for individual application contexts by creating simplified variants based on model-driven variability management. The present article presents our revised service variability metamodel, new features of the variability tools, and an applicability study, which reveals that substantial improvements in the efficiency of standard business service consumption can be achieved under both usability and economic aspects.

20. Modelling the effects of spatial variability on radionuclide migration

International Nuclear Information System (INIS)

1998-01-01

The NEA workshop reflects the present status of national waste management programmes, specifically regarding spatial variability in the performance assessment of geologic disposal sites for deep repository systems. The four sessions were: Spatial Variability: Its Definition and Significance to Performance Assessment and Site Characterisation; Experience with the Modelling of Radionuclide Migration in the Presence of Spatial Variability in Various Geological Environments; New Areas for Investigation: Two Personal Views; and What is Wanted and What is Feasible: Views and Future Plans in Selected Waste Management Organisations. The 26 papers presented in the four oral sessions and the poster session have been abstracted and indexed individually for the INIS database. (R.P.)

1. From Transition Systems to Variability Models and from Lifted Model Checking Back to UPPAAL

DEFF Research Database (Denmark)

Dimovski, Aleksandar; Wasowski, Andrzej

2017-01-01

efficient lifted (family-based) model checking for real-time variability models. This reduces the cost of maintaining specialized family-based real-time model checkers. Real-time variability models can be model checked using the standard UPPAAL. We have implemented abstractions as syntactic source...

2. Internal variability of a 3-D ocean model

Directory of Open Access Journals (Sweden)

Bjarne Büchmann

2016-11-01

Full Text Available The Defence Centre for Operational Oceanography runs operational forecasts for the Danish waters. The core setup is a 60-layer baroclinic circulation model based on the General Estuarine Transport Model code. At intervals, the model setup is tuned to improve ‘model skill' and overall performance. It has been an area of concern that the uncertainty inherent to the stochastic/chaotic nature of the model is unknown. Thus, it is difficult to state with certainty that a particular setup is improved, even if the computed model skill increases. This issue also extends to cases where the model is tuned during an iterative process, in which model results are fed back to improve model parameters, such as bathymetry. An ensemble of identical model setups with slightly perturbed initial conditions is examined. It is found that the initial perturbation causes the models to deviate from each other exponentially fast, causing differences of several PSUs and several kelvin within a few days of simulation. The ensemble is run for a full year, and the long-term variability of salinity and temperature is found for different regions within the modelled area. Further, the developing time scale is estimated for each region, and great regional differences are found in both variability and time scale. It is observed that periods with very high ensemble variability are typically short-term and spatially limited events. A particular event is examined in detail to shed light on how the ensemble ‘behaves' in periods with large internal model variability. It is found that the ensemble does not seem to follow any particular stochastic distribution: both the ensemble variability (standard deviation or range) and the ensemble distribution within that range seem to vary with time and place. Further, it is observed that a large spatial variability due to mesoscale features does not necessarily correlate to large ensemble variability. These findings bear

3. Understanding and forecasting polar stratospheric variability with statistical models

Directory of Open Access Journals (Sweden)

C. Blume

2012-07-01

Full Text Available The variability of the north-polar stratospheric vortex is a prominent aspect of the middle atmosphere. This work investigates a wide class of statistical models with respect to their ability to model geopotential and temperature anomalies, representing variability in the polar stratosphere. Four partly nonstationary, nonlinear models are assessed: linear discriminant analysis (LDA); a cluster method based on finite elements (FEM-VARX); a neural network, namely the multi-layer perceptron (MLP); and support vector regression (SVR). These methods model time series by incorporating all significant external factors simultaneously, including ENSO, the QBO, the solar cycle, and volcanoes, and then quantify their statistical importance. We show that variability in reanalysis data from 1980 to 2005 is successfully modeled. The period from 2005 to 2011 can be hindcast to a certain extent, with MLP performing significantly better than the remaining models. However, some variability remains that cannot be statistically hindcast within the current framework, such as the unexpected major warming in January 2009. Finally, the statistical model with the best generalization performance is used to predict warm and weak vortex conditions for the winter of 2011/12. A vortex breakdown is predicted for late January or early February 2012.

4. A coupled mass transfer and surface complexation model for uranium (VI) removal from wastewaters

International Nuclear Information System (INIS)

Lenhart, J.; Figueroa, L.A.; Honeyman, B.D.

1994-01-01

A remediation technique has been developed for removing uranium (VI) from complex contaminated groundwater using flake chitin as a biosorbent in batch and continuous flow configurations. With this system, U(VI) removal efficiency can be predicted using a model that integrates surface complexation models, mass transport limitations and sorption kinetics. This integration allows the reactor model to predict removal efficiencies for complex groundwaters with variable U(VI) concentrations and other constituents. The system has been validated using laboratory-derived kinetic data in batch and CSTR systems to verify the model predictions of U(VI) uptake from simulated contaminated groundwater

5. High-resolution spatial databases of monthly climate variables (1961-2010) over a complex terrain region in southwestern China

Science.gov (United States)

Wu, Wei; Xu, An-Ding; Liu, Hong-Bin

2015-01-01

Climate data in gridded format are critical for understanding climate change and its impact on the eco-environment. The aim of the current study is to develop spatial databases for three climate variables (maximum temperature, minimum temperature, and relative humidity) over a large region with complex topography in southwestern China. Five widely used approaches, including inverse distance weighting, ordinary kriging, universal kriging, co-kriging, and thin-plate smoothing spline, were tested. Root mean square error (RMSE), mean absolute error (MAE), and mean absolute percentage error (MAPE) showed that the thin-plate smoothing spline with latitude, longitude, and elevation outperformed the other models. Average RMSE, MAE, and MAPE of the best models were 1.16 °C, 0.74 °C, and 7.38 % for maximum temperature; 0.826 °C, 0.58 °C, and 6.41 % for minimum temperature; and 3.44 %, 2.28 %, and 3.21 % for relative humidity, respectively. Spatial datasets of annual and monthly climate variables with 1-km resolution covering the period 1961-2010 were then obtained using the best-performing methods. A comparative study showed that the current outcomes were in good agreement with public datasets. Based on the gridded datasets, changes in the temperature variables were investigated across the study area. Future work might be needed to capture the uncertainty induced by environmental conditions through remote sensing and knowledge-based methods.
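
Two ingredients of the comparison above, inverse-distance-weighting (IDW) interpolation and the RMSE/MAE/MAPE skill scores, can be sketched compactly. The station data below are made up, and the winning thin-plate spline method needs a numerical library and is not shown.

```python
# Sketch of IDW interpolation and the three skill scores used in the
# study; station coordinates and values are illustrative only.

def idw(x, y, stations, power=2):
    """Interpolate a value at (x, y) from (sx, sy, value) station tuples."""
    num = den = 0.0
    for sx, sy, v in stations:
        d2 = (x - sx) ** 2 + (y - sy) ** 2
        if d2 == 0.0:
            return v                      # query point sits on a station
        w = 1.0 / d2 ** (power / 2)       # weight decays with distance
        num += w * v
        den += w
    return num / den

def skill(obs, pred):
    """Return (RMSE, MAE, MAPE-in-percent) for paired observations/predictions."""
    n = len(obs)
    rmse = (sum((o - p) ** 2 for o, p in zip(obs, pred)) / n) ** 0.5
    mae = sum(abs(o - p) for o, p in zip(obs, pred)) / n
    mape = 100.0 * sum(abs((o - p) / o) for o, p in zip(obs, pred)) / n
    return rmse, mae, mape

stations = [(0.0, 0.0, 10.0), (1.0, 0.0, 12.0), (0.0, 1.0, 14.0)]
print(idw(0.5, 0.5, stations))
```

Cross-validation in such studies typically holds out each station in turn, interpolates from the rest, and feeds the pairs into `skill`.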

6. Mediterranean climate modelling: variability and climate change scenarios

International Nuclear Information System (INIS)

Somot, S.

2005-12-01

Air-sea fluxes, open-sea deep convection and cyclogenesis are studied in the Mediterranean with the development of a regional coupled model (AORCM). It accurately simulates these processes, and their climate variability is quantified and studied. The regional coupling shows a significant impact on the number of intense winter cyclogenesis events as well as on the associated air-sea fluxes and precipitation. A lower inter-annual variability than in non-coupled models is simulated for fluxes and deep convection. The feedbacks driving this variability are elucidated. The climate change response is then analysed for the 21st century with the non-coupled models: cyclogenesis decreases, and the associated precipitation increases in spring and autumn and decreases in summer. Moreover, a warming and salting of the Mediterranean as well as a strong weakening of its thermohaline circulation occur. This study also concludes on the necessity of using AORCMs to assess climate change impacts on the Mediterranean. (author)

7. Plasticity models of material variability based on uncertainty quantification techniques

Energy Technology Data Exchange (ETDEWEB)

Jones, Reese E. [Sandia National Lab. (SNL-CA), Livermore, CA (United States); Rizzi, Francesco [Sandia National Lab. (SNL-CA), Livermore, CA (United States); Boyce, Brad [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Templeton, Jeremy Alan [Sandia National Lab. (SNL-CA), Livermore, CA (United States); Ostien, Jakob [Sandia National Lab. (SNL-CA), Livermore, CA (United States)

2017-11-01

The advent of fabrication techniques like additive manufacturing has focused attention on the considerable variability of material response due to defects and other micro-structural aspects. This variability motivates the development of an enhanced design methodology that incorporates inherent material variability to provide robust predictions of performance. In this work, we develop plasticity models capable of representing the distribution of mechanical responses observed in experiments using traditional plasticity models of the mean response and recently developed uncertainty quantification (UQ) techniques. Lastly, we demonstrate that the new method provides predictive realizations that are superior to more traditional ones, and how these UQ techniques can be used in model selection and assessing the quality of calibrated physical parameters.

8. Stratified flows with variable density: mathematical modelling and numerical challenges.

Science.gov (United States)

2017-04-01

Stratified flows appear in a wide variety of fundamental problems in the hydrological and geophysical sciences. They range from hyperconcentrated floods carrying sediment, causing collapse, landslides and debris flows, to suspended material in turbidity currents where turbulence is a key process. Stratified flows also exhibit variable horizontal density. Depending on the case, density varies according to the volumetric concentration of the different components or species that represent transported or suspended materials or soluble substances. Multilayer approaches based on the shallow water equations provide suitable models but are not free from difficulties when moving to the numerical resolution of the governing equations. Considering the variety of temporal and spatial scales, transfer of mass and energy among layers may strongly differ from one case to another. As a consequence, in order to provide accurate solutions, very high order methods of proven quality are demanded. Under these complex scenarios it is necessary to verify that the numerical solution provides the expected order of accuracy but also converges to the physically based solution, which is not an easy task. To this purpose, this work focuses on the use of energy-balanced augmented solvers, in particular the Augmented Roe Flux ADER scheme. References: J. Murillo, P. García-Navarro, Wave Riemann description of friction terms in unsteady shallow flows: Application to water and mud/debris floods. J. Comput. Phys. 231 (2012) 1963-2001. J. Murillo, B. Latorre, P. García-Navarro. A Riemann solver for unsteady computation of 2D shallow flows with variable density. J. Comput. Phys. 231 (2012) 4775-4807. A. Navas-Montilla, J. Murillo, Energy balanced numerical schemes with very high order. The Augmented Roe Flux ADER scheme. Application to the shallow water equations, J. Comput. Phys. 290 (2015) 188-218. A. Navas-Montilla, J. Murillo, Asymptotically and exactly energy balanced augmented flux

9. Modeling Turbulent Combustion for Variable Prandtl and Schmidt Number

Science.gov (United States)

Hassan, H. A.

2004-01-01

This report consists of two abstracts submitted for possible presentation at the AIAA Aerospace Sciences Meeting to be held in January 2005. Since the submittal of these abstracts we have continued refining the model coefficients derived for the case of a variable turbulent Prandtl number. The test cases being investigated are a Mach 9.2 flow over a degree ramp and a Mach 8.2 3-D calculation of crossing shocks. We have developed a code for treating axisymmetric flows. In addition, the variable Schmidt number formulation was incorporated into the code, and we are in the process of determining the model constants.

10. The Properties of Model Selection when Retaining Theory Variables

DEFF Research Database (Denmark)

Hendry, David F.; Johansen, Søren

Economic theories are often fitted directly to data to avoid possible model selection biases. We show that when a theory model specifying the correct set of m relevant exogenous variables, x_t, is embedded within the larger set of m+k candidate variables, (x_t, w_t), selection over the second set by statistical significance can be undertaken without affecting the estimator distribution of the theory parameters. This strategy returns the theory-parameter estimates when the theory is correct, yet protects against the theory being under-specified because some w_t are relevant.

11. Modeling competitive substitution in a polyelectrolyte complex

International Nuclear Information System (INIS)

Peng, B.; Muthukumar, M.

2015-01-01

We have simulated the invasion of a polyelectrolyte complex, made of a polycation chain and a polyanion chain, by another, longer polyanion chain, using the coarse-grained united atom model for the chains and the Langevin dynamics methodology. Our simulations reveal many intricate details of the substitution reaction in terms of conformational changes of the chains and competition between the invading chain and the chain being displaced for the common complementary chain. We show that the invading chain must be sufficiently longer than the chain being displaced to effect the substitution. Yet, making the invading chain longer than a certain threshold value does not reduce the substitution time much further. While most of the simulations were carried out in salt-free conditions, we show that the presence of salt facilitates the substitution reaction and reduces the substitution time. Analysis of our data shows that the dominant driving force for the substitution process involving polyelectrolytes lies in the release of counterions during the substitution.

12. Modelling of information processes management of educational complex

Directory of Open Access Journals (Sweden)

Оксана Николаевна Ромашкова

2014-12-01

Full Text Available This work concerns an information model of an educational complex comprising several schools. A classification of the educational complexes formed in Moscow is given. The existing organizational structure of the educational complex is examined, and a matrix management structure is proposed. The basic management information processes of the educational complex are conceptualized.

13. Sandpile model for relaxation in complex systems

International Nuclear Information System (INIS)

Vazquez, A.; Sotolongo-Costa, O.; Brouers, F.

1997-10-01

The relaxation in complex systems is, in general, nonexponential. After an initial rapid decay the system relaxes slowly, following a long time tail. In the present paper a sandpile model of the relaxation in complex systems is analysed. Complexity is introduced by a process of avalanches in the Bethe lattice and a feedback mechanism which leads to slower decay with increasing time. In this way, some features of relaxation in complex systems, namely long-time-tail relaxation, aging, and a fractal distribution of characteristic times, are obtained by simple computer simulations. (author)
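
The avalanche mechanism at the core of such sandpile simulations can be illustrated with a minimal one-dimensional toppling rule. This is a generic Abelian-sandpile sketch under assumed parameters, not the Bethe-lattice model with feedback analysed in the paper.

```python
# Minimal 1-D sandpile sketch: a site topples when it reaches the
# threshold, sending one grain to each neighbour (grains at the ends
# fall off). The total number of topplings is the avalanche size.

def topple(heights, threshold=2):
    """Relax the pile in place until stable; return the number of topplings."""
    topplings = 0
    unstable = True
    while unstable:
        unstable = False
        for i, h in enumerate(heights):
            if h >= threshold:
                heights[i] -= 2
                if i > 0:
                    heights[i - 1] += 1
                if i < len(heights) - 1:
                    heights[i + 1] += 1
                topplings += 1
                unstable = True
    return topplings

pile = [0, 2, 0]
print(topple(pile), pile)
```

In self-organized-criticality studies, grains are added one at a time and the distribution of avalanche sizes returned by such a relaxation step is what exhibits power-law (fractal) statistics.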

14. Complexity and time asymmetry of heart rate variability are altered in acute mental stress.

Science.gov (United States)

Visnovcova, Z; Mestanik, M; Javorka, M; Mokra, D; Gala, M; Jurko, A; Calkovska, A; Tonhajzerova, I

2014-07-01

We aimed to study the complexity and time asymmetry of short-term heart rate variability (HRV) as an index of complex neurocardiac control in response to stress, using symbolic dynamics and time irreversibility methods. ECG was recorded at rest, during, and after two stressors (Stroop and arithmetic tests) in 70 healthy students. Symbolic dynamics parameters (NUPI, NCI, 0V%, 1V%, 2LV%, 2UV%) and time irreversibility indices (P%, G%, E) were evaluated. Additionally, HRV magnitude was quantified by linear parameters: spectral powers in the low-frequency (LF) and high-frequency (HF) bands. Our results showed a reduction of HRV complexity in stress (lower NUPI with both stressors, lower NCI with Stroop). Pattern classification analysis revealed significantly higher 0V% and lower 2LV% with both stressors, indicating a shift in sympathovagal balance, and significantly higher 1V% and lower 2UV% with Stroop. An unexpected result was found in time irreversibility: significantly lower G% and E with both stressors, while the P% index declined significantly only with the arithmetic test. Linear HRV analysis confirmed vagal withdrawal (lower HF) with both stressors; LF significantly increased with Stroop and decreased with the arithmetic test. Correlation analysis revealed no significant associations between symbolic dynamics and time irreversibility. In conclusion, symbolic dynamics and time irreversibility could provide independent information related to alterations of neurocardiac control integrity in stress-related disease.
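
The 0V/1V/2LV/2UV pattern families used above come from symbolic dynamics: RR intervals are quantized into a small alphabet and each 3-symbol word is classified by how many times the symbol changes. The sketch below assumes the quantization step has already been done; function names and the example sequence are illustrative.

```python
# Sketch of symbolic-dynamics pattern classification for HRV analysis.
# A 3-symbol word is labelled by its number of "variations" (symbol
# changes): 0V = none, 1V = one, 2LV/2UV = two (like/unlike direction).

def classify_word(a, b, c):
    changes = (a != b) + (b != c)
    if changes == 0:
        return "0V"                 # e.g. (2, 2, 2)
    if changes == 1:
        return "1V"                 # e.g. (2, 2, 4)
    # two variations: "like" if both steps go the same way, else "unlike"
    if (b > a and c > b) or (b < a and c < b):
        return "2LV"                # e.g. (1, 2, 3)
    return "2UV"                    # e.g. (1, 3, 2)

def pattern_rates(symbols):
    """Percentage of each pattern family over all overlapping 3-symbol words."""
    words = [classify_word(*symbols[i:i + 3]) for i in range(len(symbols) - 2)]
    n = len(words)
    return {k: 100.0 * words.count(k) / n for k in ("0V", "1V", "2LV", "2UV")}

print(pattern_rates([1, 1, 1, 2, 4, 3]))
```

A stress-induced shift toward higher 0V% and lower 2LV%, as reported above, would show up directly in these rates.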

15. Complexity and time asymmetry of heart rate variability are altered in acute mental stress

International Nuclear Information System (INIS)

Visnovcova, Z; Mestanik, M; Javorka, M; Mokra, D; Calkovska, A; Tonhajzerova, I; Gala, M; Jurko, A

2014-01-01

We aimed to study the complexity and time asymmetry of short-term heart rate variability (HRV) as an index of complex neurocardiac control in response to stress, using symbolic dynamics and time irreversibility methods. ECG was recorded at rest, during, and after two stressors (Stroop and arithmetic tests) in 70 healthy students. Symbolic dynamics parameters (NUPI, NCI, 0V%, 1V%, 2LV%, 2UV%) and time irreversibility indices (P%, G%, E) were evaluated. Additionally, HRV magnitude was quantified by linear parameters: spectral powers in the low-frequency (LF) and high-frequency (HF) bands. Our results showed a reduction of HRV complexity in stress (lower NUPI with both stressors, lower NCI with Stroop). Pattern classification analysis revealed significantly higher 0V% and lower 2LV% with both stressors, indicating a shift in sympathovagal balance, and significantly higher 1V% and lower 2UV% with Stroop. An unexpected result was found in time irreversibility: significantly lower G% and E with both stressors, while the P% index declined significantly only with the arithmetic test. Linear HRV analysis confirmed vagal withdrawal (lower HF) with both stressors; LF significantly increased with Stroop and decreased with the arithmetic test. Correlation analysis revealed no significant associations between symbolic dynamics and time irreversibility. In conclusion, symbolic dynamics and time irreversibility could provide independent information related to alterations of neurocardiac control integrity in stress-related disease. (paper)

16. Ocean carbon and heat variability in an Earth System Model

Science.gov (United States)

Thomas, J. L.; Waugh, D.; Gnanadesikan, A.

2016-12-01

Ocean carbon and heat content are very important for regulating global climate. Furthermore, due to a lack of observations and dependence on parameterizations, there has been little consensus in the modeling community on the magnitude of realistic ocean carbon and heat content variability, particularly in the Southern Ocean. We assess the differences between global oceanic heat and carbon content variability in GFDL ESM2Mc using a 500-year, pre-industrial control simulation. The global carbon and heat content are directly out of phase with each other; however, in the Southern Ocean the heat and carbon content are in phase. The multi-decadal variability in global heat content is primarily explained by variability in the tropics and mid-latitudes, while the variability in global carbon content is primarily explained by Southern Ocean variability. In order to test the robustness of this relationship, we use three additional pre-industrial control simulations with different mesoscale mixing parameterizations. Three pre-industrial control simulations are conducted with the along-isopycnal diffusion coefficient (Aredi) set to constant values of 400, 800 (control) and 2400 m2 s-1. These values for Aredi are within the range of parameter settings commonly used by modeling groups. Finally, one pre-industrial control simulation is conducted where the minimum in the Gent-McWilliams parameterization closure scheme (AGM) is increased to 600 m2 s-1. We find that the different simulations have very different multi-decadal variability, especially in the Weddell Sea, where the characteristics of deep convection are drastically changed. While the temporal frequency and amplitude of the global heat and carbon content changes vary significantly, the overall spatial pattern of variability remains unchanged between the simulations.

17. Incorporating Latent Variables into Discrete Choice Models - A Simultaneous Estimation Approach Using SEM Software

Directory of Open Access Journals (Sweden)

Dirk Temme

2008-12-01

Full Text Available Integrated choice and latent variable (ICLV) models represent a promising new class of models which merge classic choice models with the structural equation approach (SEM) for latent variables. Despite their conceptual appeal, applications of ICLV models in marketing remain rare. We extend previous ICLV applications by first estimating a multinomial choice model and, second, by estimating hierarchical relations between latent variables. An empirical study on travel mode choice clearly demonstrates the value of ICLV models in enhancing the understanding of choice processes. In addition to the usually studied directly observable variables such as travel time, we show how abstract motivations such as power and hedonism, as well as attitudes such as a desire for flexibility, influence travel mode choice. Furthermore, we show that it is possible to estimate such a complex ICLV model with the widely available structural equation modeling package Mplus. This finding is likely to encourage more widespread application of this appealing model class in the marketing field.

18. Modelling Inter-relationships among water, governance, human development variables in developing countries with Bayesian networks.

Science.gov (United States)

Dondeynaz, C.; Lopez-Puga, J.; Carmona-Moreno, C.

2012-04-01

Improving Water and Sanitation Services (WSS), being a complex and interdisciplinary issue, requires collaboration and coordination across different sectors (environment, health, economic activities, governance, and international cooperation). This inter-dependency has been recognised with the adoption of the "Integrated Water Resources Management" principles, which push for the integration of the various dimensions involved in WSS delivery to ensure efficient and sustainable management. Understanding these interrelations is crucial for decision makers in the water sector, in particular in developing countries where WSS still represent an important leverage for livelihood improvement. In this framework, the Joint Research Centre of the European Commission has developed a coherent database (WatSan4Dev database) containing 29 indicators from environmental, socio-economic, governance and financial aid flows data focusing on developing countries (Celine et al, 2011, under publication). The aim of this work is to model the WatSan4Dev dataset using probabilistic models to identify the key variables influencing, or being influenced by, water supply and sanitation access levels. Bayesian network models are suitable for mapping the conditional dependencies between variables and also allow ordering variables by their level of influence on the dependent variable. Separate models have been built for water supply and for sanitation because of their different behaviour. The models are validated if they comply with statistical criteria but also with scientific knowledge and the literature. A two-step approach has been adopted to build the structure of the model; a Bayesian network is first built for each thematic cluster of variables (e.g. governance, agricultural pressure, or human development), keeping a detailed level for later interpretation. A global model is then built based on the significant indicators of each cluster previously modelled. The structure of the

19. Time series analysis of embodied interaction: Movement variability and complexity matching as dyadic properties

Directory of Open Access Journals (Sweden)

Leonardo Zapata-Fonseca

2016-12-01

Full Text Available There is a growing consensus that a fuller understanding of social cognition depends on more systematic studies of real-time social interaction. Such studies require methods that can deal with the complex dynamics taking place at multiple interdependent temporal and spatial scales, spanning sub-personal, personal, and dyadic levels of analysis. We demonstrate the value of adopting an extended multi-scale approach by re-analyzing movement time series generated in a study of embodied dyadic interaction in a minimal virtual reality environment (a perceptual crossing experiment, PCE). Reduced movement variability revealed an interdependence between social awareness and social coordination that cannot be accounted for by either subjective or objective factors alone: it picks out interactions in which subjective and objective conditions are convergent (i.e. elevated coordination is perceived as clearly social, and impaired coordination is perceived as socially ambiguous). This finding is consistent with the claim that interpersonal interaction can be partially constitutive of direct social perception. Clustering statistics (Allan Factor) of salient events revealed fractal scaling. Complexity matching, defined as the similarity between these scaling laws, was significantly more pronounced in pairs of participants as compared to surrogate dyads. This further highlights the multi-scale and distributed character of social interaction and extends previous complexity-matching results from dyadic conversation to nonverbal social interaction dynamics. Trials with successful joint interaction were also associated with an increase in local coordination. Consequently, a local coordination pattern emerges against the background of complex dyadic interactions in the PCE task and makes successful joint performance possible.
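
The Allan Factor statistic used above to detect event clustering admits a compact sketch. The event times, window size, and function name here are illustrative assumptions, not taken from the study; fractal scaling would appear as power-law growth of AF(T) over a range of window sizes T.

```python
# Sketch of the Allan Factor: for counts N_i of events in consecutive
# windows of size T, AF(T) = E[(N_{i+1} - N_i)^2] / (2 E[N_i]).
# AF ~ 1 for a Poisson process; clustered (fractal) event trains give
# AF(T) growing with T.

def allan_factor(event_times, window):
    """Allan Factor of an event train for one counting-window size."""
    t_max = max(event_times)
    n_windows = int(t_max // window) + 1
    counts = [0] * n_windows
    for t in event_times:
        counts[int(t // window)] += 1
    diffs = [(counts[i + 1] - counts[i]) ** 2 for i in range(n_windows - 1)]
    mean_count = sum(counts) / n_windows
    return sum(diffs) / len(diffs) / (2.0 * mean_count)

# Perfectly regular events give AF = 0; clustered events give AF > 1.
print(allan_factor([0.5, 1.5, 2.5, 3.5], 1.0))
print(allan_factor([0.1, 0.2, 0.3, 3.1], 1.0))
```

Complexity matching would then compare the slope of log AF(T) versus log T between the two members of a dyad.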

20. Classification criteria of syndromes by latent variable models

DEFF Research Database (Denmark)

Petersen, Janne

2010-01-01

The thesis has two parts; one clinical part: studying the dimensions of human immunodeficiency virus associated lipodystrophy syndrome (HALS) by latent class models, and a more statistical part: investigating how to predict scores of latent variables so these can be used in subsequent regression... patient's characteristics. These methods may erroneously reduce multiplicity either by combining markers of different phenotypes or by mixing HALS with other processes such as aging. Latent class models identify homogeneous groups of patients based on sets of variables, for example symptoms. As no gold standard exists for diagnosing HALS, the normally applied diagnostic models cannot be used. Latent class models, which have never before been used to diagnose HALS, make it possible, under certain assumptions, to: statistically evaluate the number of phenotypes, test for mixing of HALS with other processes...

1. Internal variability in a regional climate model over West Africa

Energy Technology Data Exchange (ETDEWEB)

Vanvyve, Emilie; Ypersele, Jean-Pascal van [Universite catholique de Louvain, Institut d'astronomie et de geophysique Georges Lemaitre, Louvain-la-Neuve (Belgium); Hall, Nicholas [Laboratoire d'Etudes en Geophysique et Oceanographie Spatiales/Centre National d'Etudes Spatiales, Toulouse Cedex 9 (France); Messager, Christophe [University of Leeds, Institute for Atmospheric Science, School of Earth and Environment, Leeds (United Kingdom); Leroux, Stephanie [Universite Joseph Fourier, Laboratoire d'etude des Transferts en Hydrologie et Environnement, BP53, Grenoble Cedex 9 (France)

2008-02-15

Sensitivity studies with regional climate models are often performed on the basis of a few simulations for which the difference is analysed and the statistical significance is often taken for granted. In this study we present some simple measures of the confidence limits for these types of experiments by analysing the internal variability of a regional climate model run over West Africa. Two 1-year long simulations, differing only in their initial conditions, are compared. The difference between the two runs gives a measure of the internal variability of the model and an indication of which timescales are reliable for analysis. The results are analysed for a range of timescales and spatial scales, and quantitative measures of the confidence limits for regional model simulations are diagnosed for a selection of study areas for rainfall, low-level temperature and wind. As the averaging period or spatial scale is increased, the signal due to internal variability gets smaller and confidence in the simulations increases. This occurs more rapidly for variations in precipitation, which appear essentially random, than for dynamical variables, which show some organisation on larger scales. (orig.)
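
The averaging argument above, that the internal-variability signal shrinks as the averaging period grows, can be illustrated with synthetic series. The two "runs" below are made-up daily values standing in for two simulations that differ only in their initial conditions; nothing here is actual RCM output.

```python
# Sketch: the difference between two runs of the same model (internal
# variability) shrinks as the averaging window grows, so longer averages
# give more confidence that a real signal exceeds the model noise.

import random

def block_means(series, window):
    """Non-overlapping block averages of length `window`."""
    return [sum(series[i:i + window]) / window
            for i in range(0, len(series) - window + 1, window)]

random.seed(1)
run_a = [random.gauss(25.0, 2.0) for _ in range(360)]   # e.g. daily temperature
run_b = [random.gauss(25.0, 2.0) for _ in range(360)]   # same climate, other init

for window in (1, 30, 90):                              # daily / monthly / seasonal
    diffs = [abs(a - b) for a, b in
             zip(block_means(run_a, window), block_means(run_b, window))]
    print(window, round(max(diffs), 2))
```

The maximum run-to-run difference at a given averaging scale is a simple stand-in for the confidence limit diagnosed in the study.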

2. Automatic Welding Control Using a State Variable Model.

Science.gov (United States)

1979-06-01

Naval Postgraduate School, Monterey, CA. Automatic Welding Control Using a State Variable Model. Jun 79. Unclassified. Traverse drive unit / joint path / fixed track (servomotor positioning). Additional controls of heave (vertical), roll (angular rotation about the

3. Viscous cosmological models with a variable cosmological term ...

African Journals Online (AJOL)

Einstein's field equations for a Friedmann-Lemaître-Robertson-Walker universe filled with a dissipative fluid with a variable cosmological term Λ, described by the full Israel-Stewart theory, are considered. General solutions to the field equations for the flat case have been obtained. The solution corresponds to the dust-free model ...

4. Appraisal and Reliability of Variable Engagement Model Prediction ...

African Journals Online (AJOL)

The variable engagement model, based on the stress - crack opening displacement relationship, which describes the behaviour of randomly oriented steel fibre composites subjected to uniaxial tension, has been evaluated in order to determine the safety indices associated with fibre pullout and with ...

5. Higher-dimensional cosmological model with variable gravitational ...

variable G and bulk viscosity in Lyra geometry. Exact solutions for ... a comparative study of Robertson–Walker models with a constant deceleration ... where H is defined as H = (Ȧ/A) + (1/3)(Ḃ/B), and β0, H0 represent the present values of β ...

6. Modeling Complex Chemical Systems: Problems and Solutions

Science.gov (United States)

van Dijk, Jan

2016-09-01

Non-equilibrium plasmas in complex gas mixtures are at the heart of numerous contemporary technologies. They typically contain dozens to hundreds of species, involved in hundreds to thousands of reactions. Chemists and physicists have always been interested in what are now called chemical reduction techniques (CRTs). The idea of such CRTs is that they reduce the number of species that need to be considered explicitly without compromising the validity of the model. This is usually achieved on the basis of an analysis of the reaction time scales of the system under study, which identifies species that are in partial equilibrium after a given time span. The first such CRT that was widely used in plasma physics was developed in the 1960s and resulted in the concept of effective ionization and recombination rates. It was later generalized to systems in which multiple levels are affected by transport. In recent years there has been a renewed interest in tools for chemical reduction and reaction pathway analysis. An example of the latter is the PumpKin tool. Another trend is that techniques previously developed in other fields of science are being adapted to handle the plasma state of matter. Examples are the Intrinsic Low-Dimensional Manifold (ILDM) method and its derivatives, which originate from combustion engineering, and the general-purpose Principal Component Analysis (PCA) technique. In this contribution we provide an overview of the most common reduction techniques, then critically assess the pros and cons of the methods that have gained the most popularity in recent years. Examples are provided for plasmas in argon and carbon dioxide.

7. Modeling the Chemical Complexity in Titan's Atmosphere

Science.gov (United States)

Vuitton, Veronique; Yelle, Roger; Klippenstein, Stephen J.; Horst, Sarah; Lavvas, Panayotis

2018-06-01

Titan's atmospheric chemistry is extremely complicated because of the multiplicity of chemical as well as physical processes involved. Chemical processes begin with the dissociation and ionization of the most abundant species, N2 and CH4, by a variety of energy sources, i.e. solar UV and X-ray photons and suprathermal electrons, followed by reactions involving radicals as well as positive and negative ions, all possibly in some excited electronic and vibrational state. Heterogeneous chemistry at the surface of the aerosols could also play a significant role. The efficiency and outcome of these reactions depend strongly on the physical characteristics of the atmosphere, namely pressure and temperature, ranging from 1.5×10³ to 10⁻¹⁰ mbar and from 70 to 200 K, respectively. Moreover, the distribution of the species is affected by molecular diffusion and winds as well as escape from the top of the atmosphere and condensation in the lower stratosphere. Photochemical and microphysical models are the keystones of our understanding of Titan's atmospheric chemistry. Their main objective is to compute the distribution and nature of minor chemical species (typically containing up to 6 carbon atoms) and haze particles, respectively. Density profiles are compared to the available observations, making it possible to identify important processes and to highlight those that remain to be constrained in the laboratory, experimentally and/or theoretically. We argue that positive ion chemistry is at the origin of complex organic molecules, such as benzene, ammonia and hydrogen isocyanide, while neutral-neutral radiative association reactions are a significant source of alkanes. We find that negatively charged macromolecules (m/z ~100) attract the abundant positive ions, which ultimately leads to the formation of the aerosols. We also discuss the possibility that an incoming flux of oxygen from Enceladus, another satellite of Saturn, is responsible for the presence of oxygen-bearing species in Titan's reductive

8. Modelling the complex dynamics of vegetation, livestock and rainfall ...

African Journals Online (AJOL)

Open Access DOWNLOAD FULL TEXT ... In this paper, we present mathematical models that incorporate ideas from complex systems theory to integrate several strands of rangeland theory in a hierarchical framework. ... Keywords: catastrophe theory; complexity theory; disequilibrium; hysteresis; moving attractors

9. Oscillating shells: A model for a variable cosmic object

OpenAIRE

Nunez, Dario

1997-01-01

A model for a possible variable cosmic object is presented. The model consists of a massive shell surrounding a compact object. The gravitational and self-gravitational forces tend to collapse the shell, but the internal tangential stresses oppose the collapse. The combined action of the two types of forces is studied and several cases are presented. In particular, we investigate the spherically symmetric case in which the shell oscillates radially around a central compact object.
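
The radial dynamics described can be sketched with a toy Newtonian model (not the paper's treatment): gravity pulls the shell inward while a tangential-stress term, modelled here as K/R³, resists collapse. Units with G = 1 and all parameter values are invented for illustration.

```python
# Toy shell model: R'' = -GM/R^2 + K/R^3 (gravity vs. tangential stress).
# Leapfrog integration; illustrative parameters only.

def simulate_shell(R0=1.2, V0=0.0, GM=1.0, K=1.0, dt=1e-3, steps=20000):
    """Integrate the radial motion of the shell and return the radius history."""
    def accel(R):
        return -GM / R**2 + K / R**3

    R, V = R0, V0
    a = accel(R)
    radii = []
    for _ in range(steps):
        V += 0.5 * dt * a          # half-kick
        R += dt * V                # drift
        a = accel(R)
        V += 0.5 * dt * a          # half-kick
        radii.append(R)
    return radii

radii = simulate_shell()
print(min(radii), max(radii))
```

With these units the shell oscillates radially about the equilibrium radius R_eq = K/GM = 1, between turning points fixed by the conserved energy of the initial state.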

10. Sparse modeling of spatial environmental variables associated with asthma.

Science.gov (United States)

Chang, Timothy S; Gangnon, Ronald E; David Page, C; Buckingham, William R; Tandias, Aman; Cowan, Kelly J; Tomasallo, Carrie D; Arndt, Brian G; Hanrahan, Lawrence P; Guilbert, Theresa W

2015-02-01

Geographically distributed environmental factors influence the burden of diseases such as asthma. Our objective was to identify sparse environmental variables associated with asthma diagnosis gathered from a large electronic health record (EHR) dataset while controlling for spatial variation. An EHR dataset from the University of Wisconsin's Family Medicine, Internal Medicine and Pediatrics Departments was obtained for 199,220 patients aged 5-50 years over a three-year period. Each patient's home address was geocoded to one of 3456 geographic census block groups. Over one thousand block group variables were obtained from a commercial database. We developed a Sparse Spatial Environmental Analysis (SASEA). Using this method, the environmental variables were first dimensionally reduced with sparse principal component analysis. Logistic thin plate regression spline modeling was then used to identify block group variables associated with asthma from the sparse principal components. The addresses of patients from the EHR dataset were distributed throughout the majority of Wisconsin's geography. Logistic thin plate regression spline modeling captured spatial variation of asthma. Four sparse principal components identified via model selection consisted of food at home, dog ownership, household size, and disposable income variables. In rural areas, dog ownership and renter-occupied housing units from significant sparse principal components were associated with asthma. Our main contribution is the incorporation of sparsity in spatial modeling. SASEA sequentially added sparse principal components to logistic thin plate regression spline modeling. This method allowed association of geographically distributed environmental factors with asthma using EHR and environmental datasets. SASEA can be applied to other diseases with environmental risk factors. Copyright © 2014 Elsevier Inc. All rights reserved.
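
The pipeline's core idea (dimension reduction by sparse PCA, then regression of disease status on the components) can be sketched on synthetic data. This is a minimal stand-in: a thresholded power iteration produces one sparse component, and plain logistic regression replaces the paper's spatial thin plate regression splines. All data and tuning values are invented.

```python
# Sketch: one sparse principal component (soft-thresholded power iteration,
# a common sparse-PCA heuristic) feeding a logistic regression.
import numpy as np

rng = np.random.default_rng(0)
n, p = 300, 10
X = rng.normal(size=(n, p))
X[:, 1] = X[:, 0] + 0.1 * rng.normal(size=n)        # two correlated "environmental" variables
y = (X[:, 0] + X[:, 1] + 0.5 * rng.normal(size=n) > 0).astype(float)

def sparse_pc(X, lam=0.5, iters=200):
    """Leading sparse loading vector via thresholded power iteration."""
    C = X.T @ X / len(X)
    v = np.ones(X.shape[1]) / np.sqrt(X.shape[1])
    for _ in range(iters):
        u = C @ v
        u = np.sign(u) * np.maximum(np.abs(u) - lam, 0.0)   # soft-threshold loadings
        v = u / np.linalg.norm(u)
    return v

v = sparse_pc(X)
score = X @ v                                        # sparse component score per patient

# Logistic regression of disease status on the component score (gradient ascent).
w, b = 0.0, 0.0
for _ in range(2000):
    prob = 1.0 / (1.0 + np.exp(-(w * score + b)))
    w += 0.1 * np.mean((y - prob) * score)
    b += 0.1 * np.mean(y - prob)

accuracy = np.mean((prob > 0.5) == (y > 0.5))
print(v, accuracy)
```

On this synthetic example the sparse loading concentrates on the two correlated variables and zeroes the rest, which is the behaviour the dimension-reduction step relies on.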

11. Resolving the Complex Genetic Basis of Phenotypic Variation and Variability of Cellular Growth.

Science.gov (United States)

Ziv, Naomi; Shuster, Bentley M; Siegal, Mark L; Gresham, David

2017-07-01

In all organisms, the majority of traits vary continuously between individuals. Explaining the genetic basis of quantitative trait variation requires comprehensively accounting for genetic and nongenetic factors as well as their interactions. The growth of microbial cells can be characterized by a lag duration, an exponential growth phase, and a stationary phase. Parameters that characterize these growth phases can vary among genotypes (phenotypic variation), environmental conditions (phenotypic plasticity), and among isogenic cells in a given environment (phenotypic variability). We used a high-throughput microscopy assay to map genetic loci determining variation in lag duration and exponential growth rate in growth rate-limiting and nonlimiting glucose concentrations, using segregants from a cross of two natural isolates of the budding yeast, Saccharomyces cerevisiae. We find that some quantitative trait loci (QTL) are common between traits and environments whereas some are unique, exhibiting gene-by-environment interactions. Furthermore, whereas variation in the central tendency of growth rate or lag duration is explained by many additive loci, differences in phenotypic variability are primarily the result of genetic interactions. We used bulk segregant mapping to increase QTL resolution by performing whole-genome sequencing of complex mixtures of an advanced intercross mapping population grown in selective conditions using glucose-limited chemostats. We find that sequence variation in the high-affinity glucose transporter HXT7 contributes to variation in growth rate and lag duration. Allele replacements of the entire locus, as well as of a single polymorphic amino acid, reveal that the effect of variation in HXT7 depends on genetic, and allelic, background. Amplifications of HXT7 are frequently selected in experimental evolution in glucose-limited environments, but we find that HXT7 amplifications result in antagonistic pleiotropy that is absent in naturally

12. Analysis models for variables associated with breastfeeding duration

Directory of Open Access Journals (Sweden)

Edson Theodoro dos S. Neto

2013-09-01

Full Text Available OBJECTIVE To analyze the factors associated with breastfeeding duration by two statistical models. METHODS A population-based cohort study was conducted with 86 mothers and newborns from two areas primarily covered by the National Health System, with high rates of infant mortality, in Vitória, Espírito Santo, Brazil. During 30 months, 67 (78%) children and mothers were visited seven times at home by trained interviewers, who filled out survey forms. Data on food and sucking habits and on socioeconomic and maternal characteristics were collected. Variables were analyzed by Cox regression models, with duration of breastfeeding as the dependent variable, and by logistic regression models, in which the dependent variable was the presence of a breastfeeding child at different post-natal ages. RESULTS In the logistic regression model, pacifier sucking (adjusted Odds Ratio: 3.4; 95%CI 1.2-9.55) and bottle feeding (adjusted Odds Ratio: 4.4; 95%CI 1.6-12.1) increased the chance of weaning a child before one year of age. Variables associated with breastfeeding duration in the Cox regression model were pacifier sucking (adjusted Hazard Ratio 2.0; 95%CI 1.2-3.3) and bottle feeding (adjusted Hazard Ratio 2.0; 95%CI 1.2-3.5). However, protective factors (maternal age and family income) differed between the two models. CONCLUSIONS Risk and protective factors associated with cessation of breastfeeding may be analyzed by different statistical regression models. Cox regression models are adequate for analyzing such factors in longitudinal studies.
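
The contrast between the two models can be illustrated with toy counts (invented, not the study's data): logistic regression summarizes exposure as an odds ratio of weaning before a fixed age, while a Cox-type analysis compares event rates over follow-up time, sketched here as a crude, unadjusted person-time rate ratio.

```python
# Toy comparison of the two effect measures; all counts are invented.

# 2x2 table at 12 months: weaned / still breastfeeding, by pacifier use
weaned_exposed, not_weaned_exposed = 30, 10
weaned_unexposed, not_weaned_unexposed = 12, 15

# Logistic-style summary: cross-product odds ratio of weaning by 12 months
odds_ratio = (weaned_exposed * not_weaned_unexposed) / (not_weaned_exposed * weaned_unexposed)

# Cox-style summary (crude): weaning events per breastfeeding-month in each group
months_exposed, months_unexposed = 180.0, 300.0
rate_ratio = (weaned_exposed / months_exposed) / (weaned_unexposed / months_unexposed)

print(odds_ratio, rate_ratio)   # → 3.75 and ~4.17
```

The two numbers answer different questions (odds of the event by a fixed age versus relative event rate over time), which is why the study's protective factors can differ between models.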

13. Two-step variable selection in quantile regression models

Directory of Open Access Journals (Sweden)

FAN Yali

2015-06-01

Full Text Available We propose a two-step variable selection procedure for high-dimensional quantile regression, in which the dimension of the covariates, pn, is much larger than the sample size n. In the first step, we apply an ℓ1 penalty, and we demonstrate that the first-step penalized estimator with the LASSO penalty can reduce the model from ultra-high dimensional to a model whose size has the same order as that of the true model, and that the selected model can cover the true model. The second step excludes the remaining irrelevant covariates by applying the adaptive LASSO penalty to the reduced model obtained from the first step. Under some regularity conditions, we show that our procedure enjoys model selection consistency. We conduct a simulation study and a real data analysis to evaluate the finite sample performance of the proposed approach.
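
The screen-then-adapt structure can be sketched on synthetic data, with the caveat that squared-error loss stands in for the quantile check loss: step one is a plain LASSO (ISTA) screen, step two an adaptive LASSO on the retained covariates. All data and penalty values are invented.

```python
# Two-step selection sketch: LASSO screen, then adaptive LASSO on the survivors.
import numpy as np

def lasso(X, y, lam, weights=None, iters=2000):
    """ISTA (proximal gradient) for 0.5/n * ||y - Xb||^2 + lam * sum(w_j * |b_j|)."""
    n, p = X.shape
    w = np.ones(p) if weights is None else weights
    step = 1.0 / (np.linalg.norm(X, 2) ** 2 / n)     # 1 / Lipschitz constant
    b = np.zeros(p)
    for _ in range(iters):
        b = b - step * (X.T @ (X @ b - y) / n)       # gradient step
        b = np.sign(b) * np.maximum(np.abs(b) - step * lam * w, 0.0)  # prox
    return b

rng = np.random.default_rng(1)
n, p = 100, 50                                       # p comparable to n
X = rng.normal(size=(n, p))
beta_true = np.zeros(p); beta_true[:3] = [3.0, -2.0, 1.5]
y = X @ beta_true + 0.1 * rng.normal(size=n)

b1 = lasso(X, y, lam=0.3)                            # step 1: screen
keep = np.flatnonzero(np.abs(b1) > 1e-6)

b2 = lasso(X[:, keep], y, lam=0.3,                   # step 2: adaptive LASSO on the
           weights=1.0 / (np.abs(b1[keep]) + 1e-8))  # reduced model
selected = keep[np.abs(b2) > 1e-6]
print(sorted(int(j) for j in selected))
```

The adaptive weights penalize weakly-screened covariates heavily, so the second step prunes the survivors back toward the true support.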

14. Generative complexity of Gray-Scott model

Science.gov (United States)

2018-03-01

In the Gray-Scott reaction-diffusion system, one reactant is constantly fed into the system, while another is reproduced by consuming the supplied reactant and is also converted to an inert product. The rate of feeding one reactant into the system and the rate of removing the other from it determine the configurations of the concentration profiles: stripes, spots, waves. We calculate the generative complexity (the morphological complexity of concentration profiles grown from a point-wise perturbation of the medium) of the Gray-Scott system for a range of feeding and removal rates. The morphological complexity is evaluated using Shannon entropy, Simpson diversity, an approximation of Lempel-Ziv complexity, and expressivity (Shannon entropy divided by space-filling). We analyse the behaviour of the systems with the highest values of generative morphological complexity and show that the Gray-Scott systems expressing the highest levels of complexity are composed of wave-fragments (similar to wave-fragments in sub-excitable media) and travelling localisations (similar to quasi-dissipative solitons and gliders in Conway's Game of Life).
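
A minimal version of the experiment: simulate the Gray-Scott system from a point-wise perturbation and score the grown pattern with one of the listed measures (Shannon entropy of the concentration histogram). Grid size, step count, and the spot-regime feed/removal rates are illustrative choices, not the paper's parameter sweep.

```python
# Gray-Scott on a periodic grid, explicit Euler, plus a Shannon-entropy score.
import numpy as np

N, steps = 64, 5000
Du, Dv, F, k = 0.16, 0.08, 0.035, 0.065      # a spot-forming regime
u = np.ones((N, N)); v = np.zeros((N, N))
u[28:36, 28:36] = 0.50                        # point-wise perturbation of the medium
v[28:36, 28:36] = 0.25

def lap(a):                                   # periodic five-point Laplacian
    return (np.roll(a, 1, 0) + np.roll(a, -1, 0) +
            np.roll(a, 1, 1) + np.roll(a, -1, 1) - 4 * a)

for _ in range(steps):                        # dt = 1, dx = 1
    uvv = u * v * v
    u += Du * lap(u) - uvv + F * (1 - u)      # feed u, consume it in the reaction
    v += Dv * lap(v) + uvv - (F + k) * v      # produce v, remove it at rate F + k

hist, _ = np.histogram(v, bins=16)            # histogram of v over its own range
p = hist / hist.sum()
entropy = -np.sum(p[p > 0] * np.log2(p[p > 0]))   # Shannon entropy, bits
print(entropy)
```

Sweeping F and k and recording such scores per run is, in miniature, the structure of the generative-complexity map the paper computes.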

15. A subsurface model of the beaver meadow complex

Science.gov (United States)

Nash, C.; Grant, G.; Flinchum, B. A.; Lancaster, J.; Holbrook, W. S.; Davis, L. G.; Lewis, S.

2015-12-01

Wet meadows are a vital component of arid and semi-arid environments. These valley-spanning, seasonally inundated wetlands provide critical habitat and refugia for wildlife, and may potentially mediate catchment-scale hydrology in otherwise "water challenged" landscapes. In the last 150 years, these meadows have begun incising rapidly, causing the wetlands to drain and much of the ecological benefit to be lost. The mechanisms driving this incision are poorly understood, with proposed causes ranging from cattle grazing to climate change to the removal of beaver. There is considerable interest in identifying cost-effective strategies to restore the hydrologic and ecological conditions of these meadows at a meaningful scale, but effective process-based restoration first requires a thorough understanding of the constructional history of these ubiquitous features. There is emerging evidence to suggest that the North American beaver may have had a considerable role in shaping this landscape through the building of dams. This "beaver meadow complex hypothesis" posits that as beaver dams filled with fine-grained sediments, they became large wet meadows on which new dams, and new complexes, were formed, thereby aggrading valley bottoms. A pioneering study done in Yellowstone indicated that 32-50% of the alluvial sediment was deposited in ponded environments. The observed aggradation rates were highly heterogeneous, suggesting spatial variability in the depositional process, all consistent with the beaver meadow complex hypothesis (Polvi and Wohl, 2012). To expand on this initial work, we have probed deeper into these meadow complexes using a combination of geophysical techniques, coring methods and numerical modeling to create a 3-dimensional representation of the subsurface environments. This imaging has given us a unique view into the patterns and processes responsible for the landforms, and may shed further light on the role of beaver in shaping these landscapes.

16. Ensembling Variable Selectors by Stability Selection for the Cox Model

Directory of Open Access Journals (Sweden)

Qing-Yan Yin

2017-01-01

Full Text Available As a pivotal tool to build interpretive models, variable selection plays an increasingly important role in high-dimensional data analysis. In recent years, variable selection ensembles (VSEs) have gained much interest due to their many advantages. Stability selection (Meinshausen and Bühlmann, 2010), a VSE technique based on subsampling in combination with a base algorithm like lasso, is an effective method to control the false discovery rate (FDR) and to improve selection accuracy in linear regression models. By adopting lasso as a base learner, we attempt to extend stability selection to handle variable selection problems in a Cox model. According to our experience, it is crucial to set the regularization region Λ in lasso and the parameter λmin properly so that stability selection can work well. To the best of our knowledge, however, there is no literature addressing this problem in an explicit way. Therefore, we first provide a detailed procedure to specify Λ and λmin. Then, some simulated and real-world data with various censoring rates are used to examine how well stability selection performs. It is also compared with several other variable selection approaches. Experimental results demonstrate that it achieves better or competitive performance in comparison with several other popular techniques.
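
The subsample-and-count logic of stability selection can be sketched as follows, with a squared-error LASSO standing in for the Cox partial-likelihood lasso of the paper: variables are kept only if their selection frequency across subsamples reaches a threshold. Data, penalty, and threshold values are invented.

```python
# Stability selection sketch: repeated half-subsampling + LASSO, then a
# selection-frequency cutoff. Least-squares LASSO stands in for the Cox version.
import numpy as np

def lasso(X, y, lam, iters=1000):
    """Plain ISTA LASSO used as the base learner."""
    n = len(y)
    step = 1.0 / (np.linalg.norm(X, 2) ** 2 / n)
    b = np.zeros(X.shape[1])
    for _ in range(iters):
        b -= step * (X.T @ (X @ b - y) / n)
        b = np.sign(b) * np.maximum(np.abs(b) - step * lam, 0.0)
    return b

rng = np.random.default_rng(2)
n, p = 100, 30
X = rng.normal(size=(n, p))
beta = np.zeros(p); beta[:3] = 2.0
y = X @ beta + rng.normal(size=n)

B, freq = 50, np.zeros(p)
for _ in range(B):                             # subsample half the data B times
    idx = rng.choice(n, n // 2, replace=False)
    freq += np.abs(lasso(X[idx], y[idx], lam=0.4)) > 1e-6

freq /= B
stable = np.flatnonzero(freq >= 0.6)           # keep frequently selected variables
print(stable, freq[:5])
```

The frequency cutoff is what gives the method its FDR control: spurious variables are selected only on occasional subsamples and fall below the threshold.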

17. Complex cooperative breeders: Using infant care costs to explain variability in callitrichine social and reproductive behavior.

Science.gov (United States)

Díaz-Muñoz, Samuel L

2016-03-01

The influence of ecology on social behavior and mating strategies is one of the central questions in behavioral ecology and primatology. Callitrichines are New World primates that exhibit high behavioral variability, which is widely acknowledged, but not always systematically researched. Here, I examine the hypothesis that differences in the cost of infant care among genera help explain variation in reproductive traits. I present an integrative approach to generate and evaluate predictions from this hypothesis. I first identify callitrichine traits that vary minimally and traits that are more flexible (e.g., have greater variance or norm of reaction), including the number of males that mate with a breeding female, mechanisms of male reproductive competition, number of natal young retained, and the extent of female reproductive suppression. I outline how these more labile traits should vary along a continuum of infant care costs according to individual reproductive strategies. At one end of the spectrum, I predict that groups with higher infant care costs will show multiple adult males mating and providing infant care, high subordinate female reproductive suppression, few natal individuals delaying dispersal, and increased reproductive output by the dominant female, with opposite predictions under low infant care costs. I derive an estimate of the differences in ecological and physiological infant care costs that suggests an order of ascending costs in the wild: Cebuella, Callithrix, Mico, Callimico, Saguinus, and Leontopithecus. I examine the literature on each genus for the most variable traits and evaluate a) where they fall along the continuum of infant care costs according to their reproductive strategies, and b) whether these costs correspond to the ecophysiological estimates of infant care costs. I conclude that infant care costs can provide a unifying explanation for the most variable reproductive traits among callitrichine genera. The approach presented can be

18. Evaluating two model reduction approaches for large scale hedonic models sensitive to omitted variables and multicollinearity

DEFF Research Database (Denmark)

Panduro, Toke Emil; Thorsen, Bo Jellesmark

2014-01-01

Hedonic models in environmental valuation studies have grown in terms of number of transactions and number of explanatory variables. We focus on the practical challenge of model reduction, when aiming for reliable parsimonious models, sensitive to omitted variable bias and multicollinearity. We...

19. Hidden Markov latent variable models with multivariate longitudinal data.

Science.gov (United States)

Song, Xinyuan; Xia, Yemao; Zhu, Hongtu

2017-03-01

Cocaine addiction is chronic and persistent, and has become a major social and health problem in many countries. Existing studies have shown that cocaine addicts often undergo episodic periods of addiction to, moderate dependence on, or swearing off cocaine. Given its reversible feature, cocaine use can be formulated as a stochastic process that transits from one state to another, while the impacts of various factors, such as treatment received and individuals' psychological problems, on cocaine use may vary across states. This article develops a hidden Markov latent variable model to study multivariate longitudinal data concerning cocaine use from a California Civil Addict Program. The proposed model generalizes conventional latent variable models to allow bidirectional transitions between cocaine-addiction states, and conventional hidden Markov models to incorporate latent variables and their dynamic interrelationships. We develop a maximum-likelihood approach, along with a Monte Carlo expectation conditional maximization (MCECM) algorithm, to conduct parameter estimation. The asymptotic properties of the parameter estimates and statistics for testing the heterogeneity of model parameters are investigated. The finite sample performance of the proposed methodology is demonstrated by simulation studies. The application to the cocaine use study provides insights into the prevention of cocaine use. © 2016, The International Biometric Society.
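
The hidden-Markov backbone of such a model can be sketched with a two-state chain and Gaussian emissions (states, parameters, and data are invented here, and the paper's within-state latent variables are omitted). The scaled forward recursion below yields the log-likelihood and filtered state probabilities, the quantities an EM-type fit is built on.

```python
# Two-state HMM with Gaussian emissions: scaled forward algorithm.
import math, random

A = [[0.9, 0.1],                      # transition matrix between the two states
     [0.2, 0.8]]
pi = [0.5, 0.5]                       # initial state distribution
mu, sigma = [0.0, 3.0], [1.0, 1.0]    # Gaussian emission parameters per state

def gauss(x, m, s):
    return math.exp(-0.5 * ((x - m) / s) ** 2) / (s * math.sqrt(2 * math.pi))

def forward(obs):
    """Scaled forward recursion: log-likelihood and filtered state probabilities."""
    loglik, alphas = 0.0, []
    alpha = [pi[j] * gauss(obs[0], mu[j], sigma[j]) for j in range(2)]
    for t in range(1, len(obs) + 1):
        c = sum(alpha)                # predictive density of the current observation
        alpha = [a / c for a in alpha]
        loglik += math.log(c)
        alphas.append(alpha)
        if t < len(obs):              # propagate and absorb the next observation
            alpha = [sum(alpha[i] * A[i][j] for i in range(2)) *
                     gauss(obs[t], mu[j], sigma[j]) for j in range(2)]
    return loglik, alphas

random.seed(0)
state, obs = 0, []                    # simulate a sequence from the model itself
for _ in range(100):
    obs.append(random.gauss(mu[state], sigma[state]))
    state = 0 if random.random() < A[state][0] else 1

loglik, alphas = forward(obs)
print(loglik)
```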

20. A broadband variable-temperature test system for complex permittivity measurements of solid and powder materials

Science.gov (United States)

Zhang, Yunpeng; Li, En; Zhang, Jing; Yu, Chengyong; Zheng, Hu; Guo, Gaofeng

2018-02-01

A microwave test system to measure the complex permittivity of solid and powder materials as a function of temperature has been developed. The system is based on a TM0n0 multi-mode cylindrical cavity with a slotting structure, which provides purer test modes compared to a traditional cavity. To ensure safety, effectiveness, and longevity, heating and testing are carried out separately, and the sample can move between the two functional areas through an Alundum tube. Induction heating and a pneumatic platform are employed to, respectively, shorten the heating and cooling time of the sample. The single-trigger function of the vector network analyzer is added to the test software to suppress the drift of the resonance peak during testing. Complex permittivity is calculated from the rigorous field-theoretical solution, taking multilayer media loading into account. The variation of the cavity equivalent radius caused by the sample insertion holes is discussed in detail, and its influence on the test result is analyzed. The calibration method for the complex permittivity of the Alundum tube and quartz vial (for loading powder samples), which vary with temperature, is given. The feasibility of the system has been verified by measuring different samples over a wide range of relative permittivity and loss tangent, and variable-temperature test results of fused quartz and SiO2 powder up to 1500 °C are compared with published data. The results indicate that the presented system is reliable and accurate. The stability of the system is verified by repeated and long-term tests, and an error analysis is presented to estimate the error incurred due to the uncertainties in different error sources.
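
For orientation, the textbook cavity-perturbation approximation (a small on-axis sample in a TM010-type mode, with the mode-dependent form factor of order unity omitted) relates permittivity to the resonance shift and quality-factor change; the paper's rigorous multilayer field solution refines exactly this kind of estimate. All numbers below are invented.

```python
# Cavity-perturbation estimate of complex permittivity; illustrative values only.

f_empty, f_sample = 2.450e9, 2.410e9     # resonant frequency (Hz), empty / loaded
Q_empty, Q_sample = 8000.0, 5000.0       # unloaded quality factors
V_cavity, V_sample = 1.0e-4, 5.0e-8      # cavity and sample volumes (m^3)

ratio = V_cavity / V_sample
# Real part from the fractional frequency shift
eps_real = 1.0 + (f_empty - f_sample) / f_sample * ratio / 2.0
# Imaginary part from the change in 1/Q
eps_imag = (1.0 / Q_sample - 1.0 / Q_empty) * ratio / 4.0
loss_tangent = eps_imag / eps_real
print(eps_real, loss_tangent)
```

Because the sample here occupies a tiny fraction of the cavity volume, the large volume ratio amplifies the small frequency shift into a sizeable permittivity, which is why perturbation methods suit low-loss, small samples.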

1. Structural variability in uranyl-lanthanide hetero-metallic complexes with DOTA and oxalato ligands

International Nuclear Information System (INIS)

Thuery, P.

2009-01-01

Four novel 4f-5f hetero-metallic complexes could be obtained from the reaction of uranyl and lanthanide nitrates with DOTA (H4L) under hydrothermal conditions. In all cases, as in the previous examples reported, additional oxalato ligands are formed in situ. Variations in the stoichiometry of the final products and the presence of hydroxo ions in some cases appear to result in a large structural variability. In the two isomorphous complexes [(UO2)2Ln2(L)2(C2O4)] with Ln = Sm (1) or Eu (2), the lanthanide ion is located in the N4O4 site and is also bound to a carboxylate oxygen atom from a neighbouring unit, to give zigzag chains which are further linked to one another by [(UO2)2(C2O4)]2+ di-cations, resulting in the formation of a 3D framework. In [(UO2)4Gd2(L)2(C2O4)3(H2O)6]·2H2O (3), 2D bilayer subunits of the 'double floor' type with uranyl oxalate pillars are assembled into a 3D framework by other, disordered uranyl ions. [(UO2)2Gd(L)(C2O4)(OH)]·H2O (4) is a 2D assembly in which cationic {[(UO2)2(C2O4)(OH)]+}n chains are linked to one another by the [Gd(L)]- groups. The most notable feature of this compound is the environment of the 4f ion, which is eight-coordinate and twisted square anti-prismatic (TSA'), instead of nine-coordinate mono-capped square anti-prismatic (SA), as generally observed in DOTA complexes of gadolinium(III) and similarly-sized ions. (author)

2. Leveraging Understanding of Flow of Variable Complex Fluid to Design Better Absorbent Hygiene Products

Science.gov (United States)

Krautkramer, C.; Rend, R. R.

2014-12-01

Menstrual flow, which is a result of the shedding of the uterus endometrium, occurs periodically in sync with a woman's hormonal cycle. Management of this flow while allowing women to pursue their normal daily lives is the purpose of many commercial products. Some of these products, e.g. feminine hygiene pads and tampons, utilize porous materials in achieving their goal. In this paper we will demonstrate different phenomena that have been observed in the flow of menstrual fluid through these porous materials, share some of the advances made in the experimental and analytical study of these phenomena, and also present some of the unsolved challenges and difficulties encountered while studying this kind of flow. Menstrual fluid is generally composed of four main components: blood plasma, blood cells, cervical mucus, and tissue debris. This non-homogeneous, multiphase fluid displays very complex rheological behavior, e.g. yield stress, thixotropy, and visco-elasticity, that varies throughout and between menstrual cycles and among women due to various factors. Flow rates are also highly variable during menstruation and across the population, and the rheological properties of the fluid change during the flow into and through the product. In addition to these phenomena, changes to the structure of the porous medium within the product can also be seen due to fouling and/or swelling of the material. This paper will also share how the fluid components impact the flow, and the consequences for computer simulation, for the creation of a simulant fluid and testing methods, and for designing products that best meet consumer needs. We hope to bring to light the challenges of managing this complex flow to meet a basic need of women all over the world. An opportunity exists to apply learnings from research in other disciplines to improve the scientific knowledge related to the flow of this complex fluid through the porous medium that is a sanitary product.
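
The yield-stress, shear-thinning behaviour described is often summarized by a Herschel-Bulkley law, tau = tau_y + K * gamma_dot**n. The sketch below uses invented parameters for illustration, not measured menstrual-fluid values.

```python
# Herschel-Bulkley model: yield stress + power-law shear thinning (n < 1).
# Parameter values are invented for illustration.

def herschel_bulkley_stress(gamma_dot, tau_y=5.0, K=2.0, n=0.6):
    """Shear stress (Pa) at shear rate gamma_dot (1/s)."""
    return tau_y + K * gamma_dot ** n

def apparent_viscosity(gamma_dot, **kw):
    """Stress divided by shear rate: the viscosity a flow 'sees' at that rate."""
    return herschel_bulkley_stress(gamma_dot, **kw) / gamma_dot

rates = [0.1, 1.0, 10.0, 100.0]
viscosities = [apparent_viscosity(g) for g in rates]
print(viscosities)
```

The apparent viscosity falls steeply with shear rate, which is why such a fluid can resist motion at low stress (yield) yet flow readily once driven, one of the behaviours complicating flow through a porous product.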

3. Model atmospheres with periodic shocks. [pulsations and mass loss in variable stars

Science.gov (United States)

Bowen, G. H.

1989-01-01

The pulsation of a long-period variable star generates shock waves which dramatically affect the structure of the star's atmosphere and produce conditions that lead to rapid mass loss. Numerical modeling of atmospheres with periodic shocks is being pursued to study the processes involved and the evolutionary consequences for the stars. It is characteristic of these complex dynamical systems that most effects result from the interaction of various time-dependent processes.

4. Using Models to Inform Policy: Insights from Modeling the Complexities of Global Polio Eradication

Science.gov (United States)

Thompson, Kimberly M.

Drawing on over 20 years of experience modeling risks in complex systems, this talk will challenge SBP participants to develop models that provide timely and useful answers to critical policy questions when decision makers need them. The talk will include reflections on the opportunities and challenges associated with developing integrated models for complex problems and communicating their results effectively. Dr. Thompson will focus the talk largely on collaborative modeling related to global polio eradication and the application of system dynamics tools. After successful global eradication of wild polioviruses, live polioviruses will still present risks that could potentially lead to paralytic polio cases. This talk will present the insights of efforts to use integrated dynamic, probabilistic risk, decision, and economic models to address critical policy questions related to managing global polio risks. Using a dynamic disease transmission model combined with probabilistic model inputs that characterize uncertainty for a stratified world to account for variability, we find that global health leaders will face some difficult choices, but that they can take actions that will manage the risks effectively. The talk will emphasize the need for true collaboration between modelers and subject matter experts, and the importance of working with decision makers as partners to ensure the development of useful models that actually get used.
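
The dynamic transmission component of such models can be illustrated with the simplest compartmental (SIR) sketch. Parameters here are generic illustrative values, not calibrated polio numbers, and real poliovirus models add vaccination, waning immunity, and population stratification.

```python
# Minimal SIR transmission sketch, explicit Euler; illustrative parameters only.

beta, gamma = 0.5, 0.2          # transmission and recovery rates (per day), R0 = 2.5
S, I, R = 0.999, 0.001, 0.0     # population fractions
dt, peak = 0.1, I
for _ in range(3000):           # 300 days
    new_inf = beta * S * I * dt
    rec = gamma * I * dt
    S, I, R = S - new_inf, I + new_inf - rec, R + rec
    peak = max(peak, I)

print(peak, R)                  # epidemic peaks, then burns out with S > 0 remaining
```

Even this toy version shows the qualitative behaviour policy models must capture: an epidemic peak, a final attack rate below 100%, and sensitivity of both to the transmission parameters.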

5. Study on dynamic behavior of fusion reactor materials and their response to variable and complex irradiation environment

International Nuclear Information System (INIS)

Abe, K.; Kohyama, A.; Namba, C.; Wiffen, F.W.; Jones, R.H.

2001-01-01

A Japan-USA program of irradiation experiments for fusion research, 'JUPITER', was established as a six-year program from 1995 to 2000. The goal is to study the dynamic behavior of fusion reactor materials and their response to variable and complex irradiation environments using fission reactors. The irradiation experiments in this program include low-activation structural materials, functional ceramics and other innovative materials. The experimental data are analyzed by theoretical modeling and computer simulation to integrate the above effects. Irradiation capsules for in-situ measurement and varying temperature were developed successfully. It was found that the insulating ceramics remained functional up to 3 dpa. The property changes and related issues in low-activation structural materials are summarized. (author)

6. Pre-quantum mechanics. Introduction to models with hidden variables

International Nuclear Information System (INIS)

Grea, J.

1976-01-01

Within the context of formalisms of the hidden-variable type, the author considers the models used to describe mechanical systems before the introduction of the quantum model. An account is given of the characteristics of the theoretical models and their relationships with experimental methodology. The models of analytical, pre-ergodic, stochastic and thermodynamic mechanics are studied in succession. At each stage the physical hypothesis is enunciated by a postulate corresponding to the type of description of reality in the model. Starting from this postulate, the physical propositions which are meaningful for the model under consideration are defined and their logical structure is indicated. It is then found that, on passing from one level of description to another, one obtains successively Boolean lattices embedded in lattices of continuous geometric type, which are themselves embedded in Boolean lattices. It is therefore possible to envisage a more detailed description than that given by the quantum lattice and to construct it by analogy. (Auth.)

7. An Atmospheric Variability Model for Venus Aerobraking Missions

Science.gov (United States)

Tolson, Robert T.; Prince, Jill L. H.; Konopliv, Alexander A.

2013-01-01

Aerobraking has proven to be an enabling technology for planetary missions to Mars and has been proposed to enable low-cost missions to Venus. Aerobraking saves a significant amount of propulsion fuel mass by exploiting atmospheric drag to reduce the eccentricity of the initial orbit. The solar arrays have been used as the primary drag surface, and only minor modifications have been made in the vehicle design to accommodate the relatively modest aerothermal loads. However, if atmospheric density is highly variable from orbit to orbit, the mission must either accept higher aerothermal risk, a slower pace for aerobraking, or a tighter corridor, likely with increased propulsive cost. Hence, knowledge of atmospheric variability is of great interest for the design of aerobraking missions. The first planetary aerobraking was at Venus during the Magellan mission. After the primary Magellan science mission was completed, aerobraking was used to provide a more circular orbit to enhance gravity field recovery. Magellan aerobraking took place between local solar times of 1100 and 1800 hrs, and it was found that the Venusian atmospheric density during the aerobraking phase had less than 10% (1 sigma) orbit-to-orbit variability. On the other hand, at some latitudes and seasons, Martian variability can be as high as 40% (1 sigma). From both the MGN and PVO missions it was known that the atmosphere, above aerobraking altitudes, showed greater variability at night, but this variability was never quantified in a systematic manner. This paper proposes a model for atmospheric variability that can be used for aerobraking mission design until more complete data sets become available.
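
A variability model of this kind can be as simple as a lognormal orbit-to-orbit perturbation whose log-standard deviation matches the observed ~10% (1 sigma) spread; the sketch below is a generic illustration, not the paper's calibrated Venus model, and the mean density value is invented.

```python
# Toy orbit-to-orbit density variability: lognormal perturbation about the mean.
import math, random, statistics

random.seed(42)
rho_mean = 1.0e-10                 # mean periapsis density (kg/m^3, illustrative)
sigma = 0.10                       # 10% one-sigma orbit-to-orbit variability

densities = [rho_mean * math.exp(random.gauss(0.0, sigma)) for _ in range(20000)]
rel_spread = statistics.stdev(math.log(d / rho_mean) for d in densities)
print(rel_spread)                  # sample log-spread recovers ~0.10
```

Sampling per-pass densities this way lets a mission designer propagate the assumed variability into corridor-control and aerothermal-risk estimates.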

8. A new approach for modelling variability in residential construction projects

Directory of Open Access Journals (Sweden)

2013-06-01

Full Text Available The construction industry is plagued by long cycle times caused by variability in the supply chain. Variations or undesirable situations are the result of factors such as non-standard practices, work site accidents, inclement weather conditions and faults in design. This paper uses a new approach for modelling variability in construction by linking relative variability indicators to processes. Mass homebuilding sector was chosen as the scope of the analysis because data is readily available. Numerous simulation experiments were designed by varying size of capacity buffers in front of trade contractors, availability of trade contractors, and level of variability in homebuilding processes. The measurements were shown to lead to an accurate determination of relationships between these factors and production parameters. The variability indicator was found to dramatically affect the tangible performance measures such as home completion rates. This study provides for future analysis of the production homebuilding sector, which may lead to improvements in performance and a faster product delivery to homebuyers.
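
The effect of duration variability on production can be illustrated with the classic flow-shop completion-time recursion C[i][j] = max(C[i-1][j], C[i][j-1]) + t[i][j]: with the same mean trade durations, variable (here exponential, a high-variability stand-in) durations lengthen the schedule relative to deterministic ones. All task-duration numbers are invented, and the model ignores the finite buffers and contractor availability studied in the paper.

```python
# Flow-shop sketch: homes pass through trades in sequence; compare makespans
# for deterministic vs. highly variable durations with the same mean.
import random

def makespan(durations):
    """durations[i][j] = time home i spends with trade j (unlimited buffers)."""
    n, m = len(durations), len(durations[0])
    C = [[0.0] * m for _ in range(n)]
    for i in range(n):
        for j in range(m):
            prev_home = C[i - 1][j] if i > 0 else 0.0    # trade j frees up
            prev_trade = C[i][j - 1] if j > 0 else 0.0   # home i arrives
            C[i][j] = max(prev_home, prev_trade) + durations[i][j]
    return C[-1][-1]

random.seed(7)
n_homes, n_trades = 200, 3
det = [[1.0] * n_trades for _ in range(n_homes)]              # no variability
var = [[random.expovariate(1.0) for _ in range(n_trades)]     # same mean of 1,
       for _ in range(n_homes)]                               # high variability

makespan_det, makespan_var = makespan(det), makespan(var)
print(makespan_det, makespan_var)
```

The deterministic line finishes in exactly n + m - 1 time units, while variability forces trades to wait on each other, stretching the schedule, which is the completion-rate effect the paper measures.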

10. Multiscale thermohydrologic model: addressing variability and uncertainty at Yucca Mountain

International Nuclear Information System (INIS)

Buscheck, T; Rosenberg, N D; Gansemer, J D; Sun, Y

2000-01-01

Performance assessment and design evaluation require a modeling tool that simultaneously accounts for processes occurring at a scale of a few tens of centimeters around individual waste packages and emplacement drifts, and for behavior at the scale of the mountain. Many processes and features must be considered, including non-isothermal multiphase flow in rock of variable saturation and thermal radiation in open cavities. Also, given the nature of the fractured rock at Yucca Mountain, a dual-permeability approach is needed to represent flow in both the fracture and matrix continua. A monolithic numerical model with all these features carries too large a computational cost to be an effective simulation tool, one used to examine sensitivity to key model assumptions and parameters. We have developed a multi-scale modeling approach that effectively simulates 3D discrete-heat-source, mountain-scale thermohydrologic behavior at Yucca Mountain, capturing both the natural variability of the site consistent with what we know from site characterization and the waste-package-to-waste-package variability in heat output. We describe this approach and present results examining the role of infiltration flux, the most important natural-system parameter with respect to how thermohydrologic behavior influences the performance of the repository.

11. Modeling Complex Workflow in Molecular Diagnostics

Science.gov (United States)

Gomah, Mohamed E.; Turley, James P.; Lu, Huimin; Jones, Dan

2010-01-01

One of the hurdles to achieving personalized medicine has been implementing the laboratory processes for performing and reporting complex molecular tests. The rapidly changing test rosters and complex analysis platforms in molecular diagnostics have meant that many clinical laboratories still use labor-intensive manual processing and testing without the level of automation seen in high-volume chemistry and hematology testing. We provide here a discussion of design requirements and the results of implementation of a suite of lab management tools that incorporate the many elements required for use of molecular diagnostics in personalized medicine, particularly in cancer. These applications provide the functionality required for sample accessioning and tracking, material generation, and testing that are particular to the evolving needs of individualized molecular diagnostics. On implementation, the applications described here resulted in improvements in the turn-around time for reporting of more complex molecular test sets, and significant changes in the workflow. Therefore, careful mapping of workflow can permit design of software applications that simplify even the complex demands of specialized molecular testing. By incorporating design features for order review, software tools can permit a more personalized approach to sample handling and test selection without compromising efficiency. PMID:20007844

12. Complex systems modeling by cellular automata

NARCIS (Netherlands)

Kroc, J.; Sloot, P.M.A.; Rabuñal Dopico, J.R.; Dorado de la Calle, J.; Pazos Sierra, A.

2009-01-01

In recent years, the notion of complex systems proved to be a very useful concept to define, describe, and study various natural phenomena observed in a vast number of scientific disciplines. Examples of scientific disciplines that highly benefit from this concept range from physics, mathematics,

13. Modeling pitch perception of complex tones

NARCIS (Netherlands)

Houtsma, A.J.M.

1986-01-01

When one listens to a series of harmonic complex tones that have no acoustic energy at their fundamental frequencies, one usually still hears a melody that corresponds to those missing fundamentals. Since it has become evident some two decades ago that neither Helmholtz's difference tone theory nor

14. Multi-level emulation of complex climate model responses to boundary forcing data

Science.gov (United States)

Tran, Giang T.; Oliver, Kevin I. C.; Holden, Philip B.; Edwards, Neil R.; Sóbester, András; Challenor, Peter

2018-04-01

Climate model components involve both high-dimensional input and output fields. It is desirable to generate spatio-temporal outputs of these models efficiently, for applications in integrated assessment modelling or to assess the statistical relationship between such sets of inputs and outputs, for example in uncertainty analysis. However, the need for efficiency often compromises the fidelity of output through the use of low-complexity models. Here, we develop a technique that combines statistical emulation with a dimensionality reduction technique to emulate a wide range of outputs from an atmospheric general circulation model, PLASIM, as functions of the boundary forcing prescribed by the ocean component of a lower-complexity climate model, GENIE-1. Although accurate and detailed spatial information on atmospheric variables such as precipitation and wind speed is well beyond the capability of GENIE-1's energy-moisture balance model of the atmosphere, this study demonstrates that the output of this model is useful in predicting PLASIM's spatio-temporal fields through multi-level emulation. Meaningful information from the fast model, GENIE-1, was extracted by exploiting the correlation between variables of the same type in the two models and between variables of different types in PLASIM. We present the construction and validation of several PLASIM variable emulators and discuss their potential use in developing a hybrid model with statistical components.

15. Input variable selection for data-driven models of Coriolis flowmeters for two-phase flow measurement

International Nuclear Information System (INIS)

Wang, Lijuan; Yan, Yong; Wang, Xue; Wang, Tao

2017-01-01

Input variable selection is an essential step in the development of data-driven models for environmental, biological and industrial applications. It eliminates irrelevant or redundant variables to identify a suitable subset of variables as the model input, simplifying the model structure and improving computational efficiency. This paper describes the input variable selection procedures for data-driven models that measure liquid mass flowrate and gas volume fraction under two-phase flow conditions using Coriolis flowmeters. Three advanced input variable selection methods are applied: partial mutual information (PMI), genetic algorithm-artificial neural network (GA-ANN) and tree-based iterative input selection (IIS). Typical data-driven models incorporating support vector machines (SVM) are established individually based on the input candidates resulting from each selection method. The validity of the selection outcomes is assessed through an output performance comparison of the SVM-based data-driven models and sensitivity analysis. The validation and analysis results suggest that the variables selected by the PMI algorithm provide more effective information for the models to measure liquid mass flowrate, while the IIS algorithm provides fewer but more effective variables for the models to predict gas volume fraction. (paper)
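The mutual-information idea behind the PMI method can be illustrated with a simplified sketch: plain histogram-based marginal mutual information with greedy selection, rather than the paper's full partial-mutual-information algorithm. All data and parameter values below are synthetic.

```python
import numpy as np

def mutual_info(x, y, bins=16):
    """Histogram estimate of mutual information I(X;Y) in nats."""
    pxy, _, _ = np.histogram2d(x, y, bins=bins)
    pxy = pxy / pxy.sum()
    px = pxy.sum(axis=1, keepdims=True)   # marginal of X, shape (bins, 1)
    py = pxy.sum(axis=0, keepdims=True)   # marginal of Y, shape (1, bins)
    nz = pxy > 0
    return float((pxy[nz] * np.log(pxy[nz] / (px @ py)[nz])).sum())

def greedy_mi_selection(X, y, k=3, bins=16):
    """Greedily pick the k columns of X with the largest marginal MI
    against the target y (a simplified stand-in for partial MI)."""
    remaining = list(range(X.shape[1]))
    selected = []
    for _ in range(k):
        best = max(remaining, key=lambda j: mutual_info(X[:, j], y, bins))
        selected.append(best)
        remaining.remove(best)
    return selected

rng = np.random.default_rng(0)
n = 2000
x0 = rng.normal(size=n)          # informative input
x1 = rng.normal(size=n)          # weakly informative input
x2 = rng.normal(size=n)          # irrelevant input
y = x0 + 0.5 * x1 + 0.1 * rng.normal(size=n)
X = np.column_stack([x0, x1, x2])
print(greedy_mi_selection(X, y, k=2))  # informative columns rank first
```

A proper PMI implementation would condition each candidate's MI on the already-selected inputs; this sketch only ranks marginal dependence.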

16. Explicit estimating equations for semiparametric generalized linear latent variable models

KAUST Repository

Ma, Yanyuan

2010-07-05

We study generalized linear latent variable models without requiring a distributional assumption of the latent variables. Using a geometric approach, we derive consistent semiparametric estimators. We demonstrate that these models have a property which is similar to that of a sufficient complete statistic, which enables us to simplify the estimating procedure and explicitly to formulate the semiparametric estimating equations. We further show that the explicit estimators have the usual root n consistency and asymptotic normality. We explain the computational implementation of our method and illustrate the numerical performance of the estimators in finite sample situations via extensive simulation studies. The advantage of our estimators over the existing likelihood approach is also shown via numerical comparison. We employ the method to analyse a real data example from economics. © 2010 Royal Statistical Society.

17. Speech-discrimination scores modeled as a binomial variable.

Science.gov (United States)

Thornton, A R; Raffin, M J

1978-09-01

Many studies have reported variability data for tests of speech discrimination, and the disparate results of these studies have not been given a simple explanation. Arguments over the relative merits of 25- vs 50-word tests have ignored the basic mathematical properties inherent in the use of percentage scores. The present study models performance on clinical tests of speech discrimination as a binomial variable. A binomial model was developed, and some of its characteristics were tested against data from 4120 scores obtained on the CID Auditory Test W-22. A table for determining significant deviations between scores was generated and compared to observed differences in half-list scores for the W-22 tests. Good agreement was found between predicted and observed values. Implications of the binomial characteristics of speech-discrimination scores are discussed.
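The binomial reasoning in this record is easy to reproduce: a percent-correct score on an n-word list is a scaled binomial variable, so its standard deviation shrinks with list length, and a difference between two scores can be screened with a two-proportion z-test. The numeric values below are illustrative, not from the W-22 data.

```python
import math

def score_sd(p, n_words):
    """Standard deviation of a proportion-correct score when each of
    n_words items is an independent Bernoulli(p) trial."""
    return math.sqrt(p * (1.0 - p) / n_words)

def significant_difference(p1, p2, n_words, z=1.96):
    """Crude two-proportion z-test: could two scores from equal-length
    word lists plausibly come from the same true ability?"""
    pbar = 0.5 * (p1 + p2)
    se = math.sqrt(2.0 * pbar * (1.0 - pbar) / n_words)
    return abs(p1 - p2) > z * se

# A 25-word list has twice the score variance of a 50-word list:
print(score_sd(0.8, 25))   # 0.08
print(score_sd(0.8, 50))   # ~0.057
# A 16-point gap is NOT significant on 25-word lists...
print(significant_difference(0.80, 0.64, 25))   # False
# ...but would be on 200-word lists:
print(significant_difference(0.80, 0.64, 200))  # True
```

This is the core of the paper's point: disparities between 25- and 50-word test results follow directly from binomial variance, not from test quality.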

18. Efficient family-based model checking via variability abstractions

DEFF Research Database (Denmark)

Dimovski, Aleksandar; Al-Sibahi, Ahmad Salim; Brabrand, Claus

2016-01-01

Many software systems are variational: they can be configured to meet diverse sets of requirements. They can produce a (potentially huge) number of related systems, known as products or variants, by systematically reusing common parts. For variational models (variational systems or families of related systems), specialized family-based model checking algorithms allow efficient verification of multiple variants, simultaneously, in a single run. These algorithms, implemented in the tool Snip, scale much better than the "brute force" approach, where all individual systems are verified one by one. Combining variability abstractions with the abstract model checking of the concrete high-level variational model allows the use of Spin, with all its accumulated optimizations, for efficient verification of variational models without any knowledge about variability. We have implemented the transformations in a prototype tool and illustrate their use.

19. Predictive modeling and reducing cyclic variability in autoignition engines

Science.gov (United States)

Hellstrom, Erik; Stefanopoulou, Anna; Jiang, Li; Larimore, Jacob

2016-08-30

Methods and systems are provided for controlling a vehicle engine to reduce cycle-to-cycle combustion variation. A predictive model is applied to predict cycle-to-cycle combustion behavior of an engine based on observed engine performance variables. Conditions are identified, based on the predicted cycle-to-cycle combustion behavior, that indicate high cycle-to-cycle combustion variation. Corrective measures are then applied to prevent the predicted high cycle-to-cycle combustion variation.

20. Are revised models better models? A skill score assessment of regional interannual variability

Science.gov (United States)

Sperber, Kenneth R.; Participating AMIP Modelling Groups

1999-05-01

Various skill scores are used to assess the performance of revised models relative to their original configurations. The interannual variability of all-India, Sahel and Nordeste rainfall and summer monsoon windshear is examined in integrations performed under the experimental design of the Atmospheric Model Intercomparison Project. For the indices considered, the revised models exhibit greater fidelity at simulating the observed interannual variability. Interannual variability of all-India rainfall is better simulated by models that have a more realistic rainfall climatology in the vicinity of India, indicating the beneficial effect of reducing systematic model error.
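One common skill-score form consistent with this kind of assessment is the mean-squared-error skill score relative to a climatology baseline; the observation and forecast values below are invented for illustration, not from the AMIP integrations.

```python
import numpy as np

def skill_score(model, obs, reference):
    """MSE skill score: 1 is a perfect forecast, 0 matches the
    reference (e.g. climatology), negative is worse than the reference."""
    mse_model = np.mean((model - obs) ** 2)
    mse_ref = np.mean((reference - obs) ** 2)
    return 1.0 - mse_model / mse_ref

obs = np.array([1.0, 3.0, 2.0, 4.0, 2.5])            # "observed" index
clim = np.full_like(obs, obs.mean())                  # climatology baseline
original = obs + np.array([1.0, -1.2, 0.8, -0.9, 1.1])  # original model errors
revised = obs + np.array([0.3, -0.4, 0.2, -0.3, 0.4])   # revised model errors

print(skill_score(original, obs, clim))   # near zero: no skill over climatology
print(skill_score(revised, obs, clim))    # clearly positive: revision helped
```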

1. Variable sound speed in interacting dark energy models

Science.gov (United States)

Linton, Mark S.; Pourtsidou, Alkistis; Crittenden, Robert; Maartens, Roy

2018-04-01

We consider a self-consistent and physical approach to interacting dark energy models described by a Lagrangian, and identify a new class of models with variable dark energy sound speed. We show that if the interaction between dark energy in the form of quintessence and cold dark matter is purely momentum exchange this generally leads to a dark energy sound speed that deviates from unity. Choosing a specific sub-case, we study its phenomenology by investigating the effects of the interaction on the cosmic microwave background and linear matter power spectrum. We also perform a global fitting of cosmological parameters using CMB data, and compare our findings to ΛCDM.

2. Computational Fluid Dynamics Modeling of a Supersonic Nozzle and Integration into a Variable Cycle Engine Model

Science.gov (United States)

Connolly, Joseph W.; Friedlander, David; Kopasakis, George

2015-01-01

This paper covers the development of an integrated nonlinear dynamic simulation for a variable cycle turbofan engine and nozzle that can be integrated with an overall vehicle Aero-Propulso-Servo-Elastic (APSE) model. A previously developed variable cycle turbofan engine model is used for this study and is enhanced here to include variable guide vanes allowing for operation across the supersonic flight regime. The primary focus of this study is to improve the fidelity of the model's thrust response by replacing the simple choked flow equation convergent-divergent nozzle model with a MacCormack method based quasi-1D model. The dynamic response of the nozzle model using the MacCormack method is verified by comparing it against a model of the nozzle using the conservation element/solution element method. A methodology is also presented for the integration of the MacCormack nozzle model with the variable cycle engine.
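The MacCormack method named above is a two-stage predictor-corrector finite-difference scheme. The study's quasi-1D nozzle model is far more involved, but the scheme's structure can be sketched on the simpler linear advection equation u_t + a u_x = 0 with periodic boundaries; all grid and pulse parameters below are illustrative.

```python
import numpy as np

def maccormack_step(u, a, dt, dx):
    """One MacCormack step for u_t + a u_x = 0 (periodic boundaries).
    For this linear flux the scheme reduces to Lax-Wendroff, but the
    predictor-corrector structure is the same one used for quasi-1D flow."""
    # Predictor: forward difference
    up = u - a * dt / dx * (np.roll(u, -1) - u)
    # Corrector: backward difference on the predicted field, then average
    return 0.5 * (u + up - a * dt / dx * (up - np.roll(up, 1)))

nx, a = 200, 1.0
x = np.linspace(0.0, 1.0, nx, endpoint=False)
dx = x[1] - x[0]
dt = 0.4 * dx / a                      # CFL number 0.4 < 1 for stability
u = np.exp(-200.0 * (x - 0.3) ** 2)    # Gaussian pulse centered at x = 0.3

for _ in range(100):                   # advect for t = 100*dt = 0.2
    u = maccormack_step(u, a, dt, dx)

print(x[np.argmax(u)])                 # pulse peak has moved to x ≈ 0.5
```

A real nozzle model replaces the scalar flux a·u with the quasi-1D Euler fluxes and adds area-variation source terms, but the predictor/corrector differencing pattern is unchanged.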

3. The utility of Earth system Models of Intermediate Complexity

NARCIS (Netherlands)

Weber, S.L.

2010-01-01

Intermediate-complexity models are models which describe the dynamics of the atmosphere and/or ocean in less detail than conventional General Circulation Models (GCMs). At the same time, they go beyond the approach taken by atmospheric Energy Balance Models (EBMs) or ocean box models by

4. Advances in dynamic network modeling in complex transportation systems

CERN Document Server

Ukkusuri, Satish V

2013-01-01

This book focuses on the latest in dynamic network modeling, including route guidance and traffic control in transportation systems and other complex infrastructure networks. Covers dynamic traffic assignment, flow modeling, mobile sensor deployment and more.

5. Short-term to seasonal variability in factors driving primary productivity in a shallow estuary: Implications for modeling production

Science.gov (United States)

Canion, Andy; MacIntyre, Hugh L.; Phipps, Scott

2013-10-01

The inputs of primary productivity models may be highly variable on short timescales (hourly to daily) in turbid estuaries, but modeling of productivity in these environments is often implemented with data collected over longer timescales. Daily, seasonal, and spatial variability in the primary productivity model parameters chlorophyll a concentration (Chla), the downwelling light attenuation coefficient (kd), and the photosynthesis-irradiance response parameters (PmChl, αChl) was characterized in Weeks Bay, a nitrogen-impacted shallow estuary in the northern Gulf of Mexico. Variability in primary productivity model parameters in response to environmental forcing, nutrients, and microalgal taxonomic marker pigments was analysed in monthly and short-term datasets. Microalgal biomass (as Chla) was strongly related to total phosphorus concentration on seasonal scales. Hourly data support wind-driven resuspension as a major source of short-term variability in Chla and light attenuation (kd). The empirical relationship between areal primary productivity and a combined variable of biomass and light attenuation showed that variability in the photosynthesis-irradiance response contributed little to the overall variability in primary productivity, and Chla alone could account for 53-86% of the variability in primary productivity. Efforts to model productivity in similar shallow systems with highly variable microalgal biomass may benefit most from investing resources in improving the spatial and temporal resolution of chlorophyll a measurements before increasing the complexity of the models used in productivity modeling.
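The parameters named in this record map onto a standard depth-integrated productivity calculation: a photosynthesis-irradiance curve (the exponential Webb-type form is assumed here, using the saturated rate Pm and initial slope α) driven by light that decays with depth as E(z) = E0·exp(-kd·z) and weighted by Chla. All parameter values below are invented for illustration.

```python
import math

def pi_curve(irradiance, pm, alpha):
    """Photosynthesis-irradiance response (exponential, Webb-type form):
    production saturates at pm, with initial slope alpha."""
    return pm * (1.0 - math.exp(-alpha * irradiance / pm))

def depth_integrated_production(chla, kd, e0, pm, alpha, z_max=5.0, dz=0.05):
    """Simple layer sum: light decays as E(z) = E0*exp(-kd*z), and each
    layer contributes Chla-weighted P(E) over its thickness dz."""
    total, z = 0.0, 0.0
    while z < z_max:
        ez = e0 * math.exp(-kd * z)
        total += chla * pi_curve(ez, pm, alpha) * dz
        z += dz
    return total

# Doubling Chla doubles areal production in this formulation (it enters
# linearly), while doubling kd shrinks the productive layer:
base = depth_integrated_production(chla=10.0, kd=1.5, e0=1500.0, pm=5.0, alpha=0.05)
print(base)
print(depth_integrated_production(chla=20.0, kd=1.5, e0=1500.0, pm=5.0, alpha=0.05))
print(depth_integrated_production(chla=10.0, kd=3.0, e0=1500.0, pm=5.0, alpha=0.05))
```

The linearity in Chla is consistent with the abstract's finding that Chla alone explains most of the productivity variance, while the P-I parameters contribute comparatively little.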

6. Quantifying intrinsic and extrinsic variability in stochastic gene expression models.

Science.gov (United States)

2013-01-01

Genetically identical cell populations exhibit considerable intercellular variation in the level of a given protein or mRNA. Both intrinsic and extrinsic sources of noise drive this variability in gene expression. More specifically, extrinsic noise is the expression variability that arises from cell-to-cell differences in cell-specific factors such as enzyme levels, cell size and cell cycle stage. In contrast, intrinsic noise is the expression variability that is not accounted for by extrinsic noise, and typically arises from the inherent stochastic nature of biochemical processes. Two-color reporter experiments are employed to decompose expression variability into its intrinsic and extrinsic noise components. Analytical formulas for intrinsic and extrinsic noise are derived for a class of stochastic gene expression models, where variations in cell-specific factors cause fluctuations in model parameters, in particular, transcription and/or translation rate fluctuations. Assuming mRNA production occurs in random bursts, transcription rate is represented by either the burst frequency (how often the bursts occur) or the burst size (number of mRNAs produced in each burst). Our analysis shows that fluctuations in the transcription burst frequency enhance extrinsic noise but do not affect the intrinsic noise. On the contrary, fluctuations in the transcription burst size or mRNA translation rate dramatically increase both intrinsic and extrinsic noise components. Interestingly, simultaneous fluctuations in transcription and translation rates arising from randomness in ATP abundance can decrease intrinsic noise measured in a two-color reporter assay. Finally, we discuss how these formulas can be combined with single-cell gene expression data from two-color reporter experiments for estimating model parameters.
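The two-color decomposition referred to here is conventionally computed with Elowitz-style estimators: intrinsic noise from the mismatch between the two reporters in the same cell, extrinsic noise from their covariance across cells. The sketch below applies those estimators to simulated reporter data; the lognormal distributions and their parameters are assumptions for illustration only.

```python
import numpy as np

def noise_decomposition(c, y):
    """Two-color reporter decomposition (Elowitz-style estimators).
    c, y: paired reporter levels measured in the same cells."""
    mc, my = c.mean(), y.mean()
    eta_int2 = np.mean((c - y) ** 2) / (2.0 * mc * my)   # intrinsic noise^2
    eta_ext2 = (np.mean(c * y) - mc * my) / (mc * my)    # extrinsic noise^2
    return eta_int2, eta_ext2

rng = np.random.default_rng(1)
n = 50_000
extrinsic = rng.lognormal(mean=0.0, sigma=0.3, size=n)   # shared cell state
c = extrinsic * rng.lognormal(0.0, 0.2, size=n)          # reporter 1
y = extrinsic * rng.lognormal(0.0, 0.2, size=n)          # reporter 2

eta_int2, eta_ext2 = noise_decomposition(c, y)
print(eta_int2, eta_ext2)  # both components clearly non-zero here
```

In this simulation the shared lognormal factor plays the role of a cell-specific extrinsic fluctuation, while the independent per-reporter factors play the role of intrinsic (biochemical) noise.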

7. Narrowing the gap between network models and real complex systems

OpenAIRE

Viamontes Esquivel, Alcides

2014-01-01

Simple network models that focus only on graph topology or, at best, basic interactions are often insufficient to capture all the aspects of a dynamic complex system. In this thesis, I explore those limitations, and some concrete methods of resolving them. I argue that, in order to succeed at interpreting and influencing complex systems, we need to take into account  slightly more complex parts, interactions and information flows in our models.This thesis supports that affirmation with five a...

8. Identifying Variability in Mental Models Within and Between Disciplines Caring for the Cardiac Surgical Patient.

Science.gov (United States)

Brown, Evans K H; Harder, Kathleen A; Apostolidou, Ioanna; Wahr, Joyce A; Shook, Douglas C; Farivar, R Saeid; Perry, Tjorvi E; Konia, Mojca R

2017-07-01

The cardiac operating room is a complex environment requiring efficient and effective communication between multiple disciplines. The objectives of this study were to identify and rank critical time points during the perioperative care of cardiac surgical patients, and to assess variability in responses, as a correlate of a shared mental model, regarding the importance of these time points between and within disciplines. Using Delphi technique methodology, panelists from 3 institutions were tasked with developing a list of critical time points, which were subsequently assigned to pause point (PP) categories. Panelists then rated these PPs on a 100-point visual analog scale. Descriptive statistics were expressed as percentages, medians, and interquartile ranges (IQRs). We defined low response variability between panelists as an IQR ≤ 20, moderate response variability as an IQR > 20 and ≤ 40, and high response variability as an IQR > 40. Panelists identified a total of 12 PPs. The PPs identified by the highest number of panelists were (1) before surgical incision, (2) before aortic cannulation, (3) before cardiopulmonary bypass (CPB) initiation, (4) before CPB separation, and (5) at time of transfer of care from operating room (OR) to intensive care unit (ICU) staff. There was low variability among panelists' ratings of the PP "before surgical incision," moderate response variability for the PPs "before separation from CPB," "before transfer from OR table to bed," and "at time of transfer of care from OR to ICU staff," and high response variability for the remaining 8 PPs. In addition, the perceived importance of each of these PPs varies between disciplines and between institutions. Cardiac surgical providers recognize distinct critical time points during cardiac surgery. However, there is a high degree of variability within and between disciplines as to the importance of these times, suggesting an absence of a shared mental model among disciplines caring for
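The IQR cutoffs the study defines on the 100-point visual analog scale are straightforward to operationalize; the rating lists below are invented examples, not panel data from the study.

```python
import numpy as np

def rating_variability(ratings):
    """Classify panelist agreement on a 100-point scale by IQR, using
    the study's cutoffs: IQR <= 20 low, 20 < IQR <= 40 moderate, else high."""
    q1, q3 = np.percentile(ratings, [25, 75])
    iqr = q3 - q1
    if iqr <= 20:
        return "low"
    if iqr <= 40:
        return "moderate"
    return "high"

print(rating_variability([88, 90, 92, 95, 97]))  # tight agreement -> low
print(rating_variability([20, 45, 60, 75, 95]))  # spread -> moderate
print(rating_variability([5, 30, 60, 80, 98]))   # wide spread -> high
```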

9. Modeling key processes causing climate change and variability

Energy Technology Data Exchange (ETDEWEB)

Henriksson, S.

2013-09-01

Greenhouse gas warming, internal climate variability and aerosol climate effects are studied, and the importance of understanding these key processes and being able to separate their influence on the climate is discussed. The aerosol-climate model ECHAM5-HAM and the COSMOS millennium model, consisting of atmospheric, ocean, carbon cycle and land-use models, are applied and results compared to measurements. Topics in focus are climate sensitivity, quasiperiodic variability with a period of 50-80 years and variability at other timescales, climate effects due to aerosols over India, and climate effects of northern hemisphere mid- and high-latitude volcanic eruptions. The main findings of this work are (1) pointing out the remaining challenges in reducing climate sensitivity uncertainty from observational evidence, (2) estimates for the amplitude of a 50-80 year quasiperiodic oscillation in global mean temperature ranging from 0.03 K to 0.17 K and for its phase progression, as well as the synchronising effect of external forcing, (3) identifying a power-law shape S(f) ∝ f^(-α) for the spectrum of global mean temperature, with α ≈ 0.8 between multidecadal and El Niño timescales and a smaller exponent in modelled climate without external forcing, (4) separating aerosol properties and climate effects in India by season and location, (5) the more efficient dispersion of secondary sulfate aerosols than primary carbonaceous aerosols in the simulations, (6) an increase in monsoon rainfall in northern India due to aerosol light absorption and a probably larger decrease due to aerosol dimming effects, and (7) an estimate of mean maximum cooling of 0.19 K due to larger northern hemisphere mid- and high-latitude volcanic eruptions. The results could be applied or useful in better isolating the human-caused climate change signal, in studying the processes further and in more detail, in decadal climate prediction, in model evaluation and in emission policy.
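A spectral exponent α in S(f) ∝ f^(-α), of the kind estimated in this thesis, can be recovered by a least-squares fit to the log-log periodogram. The sketch below validates the idea on a synthetic random walk (whose spectrum falls off roughly as f^-2) rather than on any climate series; the sample size and seed are arbitrary.

```python
import numpy as np

def spectral_exponent(x, dt=1.0):
    """Estimate alpha in S(f) ~ f**-alpha by ordinary least squares on
    the log-log periodogram (zero frequency excluded)."""
    n = len(x)
    freqs = np.fft.rfftfreq(n, d=dt)[1:]
    power = np.abs(np.fft.rfft(x - np.mean(x)))[1:] ** 2
    slope, _ = np.polyfit(np.log(freqs), np.log(power), 1)
    return -slope

# Sanity check on a random walk, whose spectrum is approximately f**-2
# (the simple unweighted fit comes out slightly below 2 because the
# discrete spectrum flattens near the Nyquist frequency):
rng = np.random.default_rng(2)
walk = np.cumsum(rng.normal(size=8192))
print(spectral_exponent(walk))
```

For short or strongly red series, binned or low-frequency-weighted fits are usually preferred over this plain regression.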

10. Complex accident scenarios modelled and analysed by Stochastic Petri Nets

International Nuclear Information System (INIS)

Nývlt, Ondřej; Haugen, Stein; Ferkl, Lukáš

2015-01-01

This paper focuses on the use of Petri nets for effective modelling and simulation of complicated accident scenarios, where the order of events can vary and some events may occur anywhere in an event chain. Such cases are hard to manage with traditional methods such as event trees - e.g. one pivotal event must often be inserted several times into one branch of the tree. Our approach is based on Stochastic Petri Nets with Predicates and Assertions and on an idea from the area of Programmable Logic Controllers: an accident scenario is described as a net of interconnected blocks, which represent parts of the scenario. The scenario is first divided into parts, which are then modelled by Petri nets. Every block can easily be interconnected with other blocks by input/output variables to create complex ones. In the presented approach, every event or part of a scenario is modelled only once, independently of the number of its occurrences in the scenario. The final model is much more transparent than the corresponding event tree. The method is shown in two case studies, the more advanced of which contains dynamic behavior. - Highlights: • Event & fault trees have problems with scenarios where the order of events can vary. • The paper presents a method for modelling and analysis of dynamic accident scenarios. • The presented method is based on Petri nets. • The proposed method solves the mentioned problems of traditional approaches. • The method is shown in two case studies: simple and advanced (with dynamic behavior)
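A minimal token-game sketch shows the block idea: places hold tokens, a transition fires when all its input places are marked, and blocks connect by sharing places. The scenario names below are invented, and this plain net is far simpler than the paper's Stochastic Petri Nets with Predicates and Assertions.

```python
import random

class PetriNet:
    def __init__(self, marking):
        self.marking = dict(marking)   # place name -> token count
        self.transitions = {}          # name -> (input places, output places)

    def add_transition(self, name, inputs, outputs):
        self.transitions[name] = (inputs, outputs)

    def enabled(self):
        """Transitions whose every input place holds at least one token."""
        return [t for t, (ins, _) in self.transitions.items()
                if all(self.marking.get(p, 0) > 0 for p in ins)]

    def fire(self, name):
        """Consume one token from each input place, produce one in each output."""
        ins, outs = self.transitions[name]
        for p in ins:
            self.marking[p] -= 1
        for p in outs:
            self.marking[p] = self.marking.get(p, 0) + 1

# Two scenario "blocks" connected through shared places: the operator is
# released back by the second block, mirroring reusable scenario parts.
net = PetriNet({"alarm_raised": 1, "operator_free": 1})
net.add_transition("diagnose", ["alarm_raised", "operator_free"], ["diagnosed"])
net.add_transition("mitigate", ["diagnosed"], ["safe_state", "operator_free"])

fired = []
while net.enabled():
    t = random.choice(net.enabled())   # nondeterministic choice in general
    net.fire(t)
    fired.append(t)
print(fired)                # ['diagnose', 'mitigate']
print(net.marking["safe_state"])  # 1
```

In a richer net with several enabled transitions, the `random.choice` step is exactly where variable event ordering enters, which is what event trees handle poorly.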

11. Linking Inflammation, Cardiorespiratory Variability, and Neural Control in Acute Inflammation via Computational Modeling.

Science.gov (United States)

Dick, Thomas E; Molkov, Yaroslav I; Nieman, Gary; Hsieh, Yee-Hsee; Jacono, Frank J; Doyle, John; Scheff, Jeremy D; Calvano, Steve E; Androulakis, Ioannis P; An, Gary; Vodovotz, Yoram

2012-01-01

Acute inflammation leads to organ failure by engaging catastrophic feedback loops in which stressed tissue evokes an inflammatory response and, in turn, inflammation damages tissue. Manifestations of this maladaptive inflammatory response include cardio-respiratory dysfunction that may be reflected in reduced heart rate and ventilatory pattern variabilities. We have developed signal-processing algorithms that quantify non-linear deterministic characteristics of variability in biologic signals. Now, coalescing under the aegis of the NIH Computational Biology Program and the Society for Complexity in Acute Illness, two research teams performed iterative experiments and computational modeling on inflammation and cardio-pulmonary dysfunction in sepsis as well as on neural control of respiration and ventilatory pattern variability. These teams, with additional collaborators, have recently formed a multi-institutional, interdisciplinary consortium, whose goal is to delineate the fundamental interrelationship between the inflammatory response and physiologic variability. Multi-scale mathematical modeling and complementary physiological experiments will provide insight into autonomic neural mechanisms that may modulate the inflammatory response to sepsis and simultaneously reduce heart rate and ventilatory pattern variabilities associated with sepsis. This approach integrates computational models of neural control of breathing and cardio-respiratory coupling with models that combine inflammation, cardiovascular function, and heart rate variability. The resulting integrated model will provide mechanistic explanations for the phenomena of respiratory sinus-arrhythmia and cardio-ventilatory coupling observed under normal conditions, and the loss of these properties during sepsis. This approach holds the potential of modeling cross-scale physiological interactions to improve both basic knowledge and clinical management of acute inflammatory diseases such as sepsis and trauma.

12. Early days in complex dynamics a history of complex dynamics in one variable during 1906-1942

CERN Document Server

Alexander, Daniel S; Rosa, Alessandro

2011-01-01

The theory of complex dynamics, whose roots lie in 19th-century studies of the iteration of complex functions conducted by Kœnigs, Schröder, and others, flourished remarkably during the first half of the 20th century, when many of the central ideas and techniques of the subject developed. This book by Alexander, Iavernaro, and Rosa paints a robust picture of the field of complex dynamics between 1906 and 1942 through detailed discussions of the work of Fatou, Julia, Siegel, and several others. A recurrent theme of the authors' treatment is the center problem in complex dynamics. They present its complete history during this period and, in so doing, bring out analogies between complex dynamics and the study of differential equations, in particular the problem of stability in Hamiltonian systems. Among these analogies are the use of iteration and problems involving small divisors, which the authors examine in the work of Poincaré and others, linking them to complex dynamics, principally via the work of Samuel...

13. Uncertainty and Complexity in Mathematical Modeling

Science.gov (United States)

Cannon, Susan O.; Sanders, Mark

2017-01-01

Modeling is an effective tool to help students access mathematical concepts. Finding a math teacher who has not drawn a fraction bar or pie chart on the board would be difficult, as would finding students who have not been asked to draw models and represent numbers in different ways. In this article, the authors will discuss: (1) the properties of…

14. Information, complexity and efficiency: The automobile model

Energy Technology Data Exchange (ETDEWEB)

Allenby, B. [Lucent Technologies (United States)]|[Lawrence Livermore National Lab., CA (United States)

1996-08-08

The new, rapidly evolving field of industrial ecology - the objective, multidisciplinary study of industrial and economic systems and their linkages with fundamental natural systems - provides strong ground for believing that a more environmentally and economically efficient economy will be more information intensive and complex. Information and intellectual capital will be substituted for the more traditional inputs of materials and energy in producing a desirable, yet sustainable, quality of life. While at this point this remains a strong hypothesis, the evolution of the automobile industry can be used to illustrate how such substitution may, in fact, already be occurring in an environmentally and economically critical sector.

15. Complex response of white pines to past environmental variability increases understanding of future vulnerability.

Directory of Open Access Journals (Sweden)

Virginia Iglesias

Full Text Available Ecological niche models predict plant responses to climate change by circumscribing species distributions within a multivariate environmental framework. Most projections based on modern bioclimatic correlations imply that high-elevation species are likely to be extirpated from their current ranges as a result of rising growing-season temperatures in the coming decades. Paleoecological data spanning the last 15,000 years from the Greater Yellowstone region describe the response of vegetation to past climate variability and suggest that white pines, a taxon of special concern in the region, have been surprisingly resilient to high summer temperature and fire activity in the past. Moreover, the fossil record suggests that winter conditions and biotic interactions have been critical limiting variables for high-elevation conifers in the past and will likely be so in the future. This long-term perspective offers insights on species responses to a broader range of climate and associated ecosystem changes than can be observed at present and should be part of resource management and conservation planning for the future.

16. Carbon-water Cycling in the Critical Zone: Understanding Ecosystem Process Variability Across Complex Terrain

Energy Technology Data Exchange (ETDEWEB)

Barnard, Holly [Univ. of Colorado, Boulder, CO (United States); Brooks, Paul [Univ. of Utah, Salt Lake City, UT (United States); Univ. of Arizona, Tucson, AZ (United States)

2016-06-16

One of the largest knowledge gaps in environmental science is the ability to understand and predict how ecosystems will respond to future climate variability. The links between vegetation, hydrology, and climate that control carbon sequestration in plant biomass and soils remain poorly understood. Soil respiration is the second largest carbon flux of terrestrial ecosystems, yet there is no consensus on how respiration will change as water availability and temperature co-vary. To address this knowledge gap, we use the variation in soil development and topography across an elevation and climate gradient on the Front Range of Colorado to conduct a natural experiment that enables us to examine the co-evolution of soil carbon, vegetation, hydrology, and climate in an accessible field laboratory. The goal of this project is to further our ability to combine plant water availability, carbon flux and storage, and topographically driven hydrometrics into a watershed scale predictive model of carbon balance. We hypothesize: (i) landscape structure and hydrology are important controls on soil respiration as a result of spatial variability in both physical and biological drivers; (ii) variation in rates of soil respiration during the growing season is due to corresponding shifts in belowground carbon inputs from vegetation; and (iii) aboveground carbon storage (biomass) and species composition are directly correlated with soil moisture and, therefore, can be directly related to subsurface drainage patterns.

17. Major histocompatibility complex harbors widespread genotypic variability of non-additive risk of rheumatoid arthritis including epistasis.

Science.gov (United States)

Wei, Wen-Hua; Bowes, John; Plant, Darren; Viatte, Sebastien; Yarwood, Annie; Massey, Jonathan; Worthington, Jane; Eyre, Stephen

2016-04-25

Genotypic variability based genome-wide association studies (vGWASs) can identify potentially interacting loci without prior knowledge of the interacting factors. We report a two-stage approach to make vGWAS applicable to diseases: first, a mixed model approach partitions dichotomous phenotypes into additive risk and non-additive environmental residuals on the liability scale; second, Levene's (Brown-Forsythe) test assesses equality of the residual variances across genotype groups per marker. We found widespread significant (P < 5e-05) vGWAS signals within the major histocompatibility complex (MHC) across all three study cohorts of rheumatoid arthritis. We further identified 10 epistatic interactions between the vGWAS signals independent of the MHC additive effects, each with a weak effect but jointly explaining 1.9% of phenotypic variance. PTPN22 was also identified in the discovery cohort but replicated in only one independent cohort. Combining the three cohorts boosted the power of vGWAS and additionally identified TYK2 and ANKRD55. Both PTPN22 and TYK2 have evidence of interactions reported elsewhere. We conclude that vGWAS can help discover interacting loci for complex diseases but requires large samples to find additional signals.
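The second stage of this approach is Levene's test with median centering, i.e. the Brown-Forsythe variant. As a minimal sketch, assuming the liability-scale residuals for one marker have already been grouped by genotype (the group sizes and values below are purely illustrative, not data from the study):

```python
import statistics

def brown_forsythe(groups):
    """Brown-Forsythe statistic: Levene's test of equal variances using
    absolute deviations from each group's median. Large values suggest
    variance heterogeneity across genotype groups, i.e. a candidate
    vGWAS signal at this marker."""
    z = [[abs(y - statistics.median(g)) for y in g] for g in groups]
    k = len(z)
    n = sum(len(g) for g in z)
    zbar_j = [statistics.fmean(g) for g in z]   # per-group mean deviation
    zbar = sum(map(sum, z)) / n                 # grand mean deviation
    between = sum(len(g) * (m - zbar) ** 2 for g, m in zip(z, zbar_j))
    within = sum((x - m) ** 2 for g, m in zip(z, zbar_j) for x in g)
    return (n - k) / (k - 1) * between / within

# Toy residuals for three genotype groups; the third has inflated spread.
equal = [[-1.0, 0.0, 1.0], [-1.0, 0.0, 1.0], [-1.0, 0.0, 1.0]]
unequal = [[-1.0, 0.0, 1.0], [-1.0, 0.0, 1.0], [-3.0, 0.0, 3.0]]
```

In practice the statistic would be referred to an F distribution with (k-1, N-k) degrees of freedom to obtain the per-marker p-value compared against the 5e-05 threshold.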

18. Some considerations concerning the challenge of incorporating social variables into epidemiological models of infectious disease transmission.

Science.gov (United States)

Barnett, Tony; Fournié, Guillaume; Gupta, Sunetra; Seeley, Janet

2015-01-01

Incorporation of 'social' variables into epidemiological models remains a challenge. Too much detail and models cease to be useful; too little and the very notion of infection - a highly social process in human populations - may be considered with little reference to the social. The French sociologist Émile Durkheim proposed that the scientific study of society required identification and study of 'social currents'. Such 'currents' are what we might today describe as 'emergent properties', specifiable variables appertaining to individuals and groups, which represent the perspectives of social actors as they experience the environment in which they live their lives. Here we review the ways in which one particular emergent property, hope, relevant to a range of epidemiological situations, might be used in epidemiological modelling of infectious diseases in human populations. We also indicate how such an approach might be extended to include a range of other potential emergent properties to represent complex social and economic processes bearing on infectious disease transmission.

19. Educational complex of light-colored modeling of urban environment

Directory of Open Access Journals (Sweden)

2018-01-01

Full Text Available This paper develops the mechanisms, methodological tools and structure of a training complex for light-color modeling of the urban environment. Results of students' practical work are presented: light compositions and installations, media facades, and lighting of building facades, city streets and an embankment. Modeling determines the structure of the light form and reveals light-transmitting materials, the characteristic optical illusions they produce, light-visual and light-dynamic effects (video dynamics and photostatics), and the basic compositional techniques of light form. The main elements of the light installation are studied, including the light projection, the electronic device, the interactivity and relationality of the installation, and the mechanical device that becomes part of the installation's composition. The purpose of modern media-facade technology is to transform external building structures and their facades into a changing information cover, a translator of media content using LED technology. Light tectonics and the light rhythm of the plastics of an architectural object are built up through point and local illumination; modeling of an urban ensemble assumes the structural interaction of several light building models using special light-composition techniques. When modeling the social and pedestrian environment, lighting parameters depend on the scale of the chosen space and are adapted to the visual perception of the pedestrian, while the atmospheric effects of comfort and safety are achieved with special light-compositional techniques. To realize these light-modeling tasks, a methodology has been created that includes the mechanisms of models, variability and complementarity. Perspectives for light modeling are proposed in the context of the structural elements of the city, neuropsychology, and wireless and bioluminescence technologies.

20. Geochemical Modeling Of F Area Seepage Basin Composition And Variability

International Nuclear Information System (INIS)

Millings, M.; Denham, M.; Looney, B.

2012-01-01

From the 1950s through 1989, the F Area Seepage Basins at the Savannah River Site (SRS) received low level radioactive wastes resulting from processing nuclear materials. Discharges of process wastes to the F Area Seepage Basins followed by subsequent mixing processes within the basins and eventual infiltration into the subsurface resulted in contamination of the underlying vadose zone and downgradient groundwater. For simulating contaminant behavior and subsurface transport, a quantitative understanding of the interrelated discharge-mixing-infiltration system along with the resulting chemistry of fluids entering the subsurface is needed. An example of this need emerged as the F Area Seepage Basins was selected as a key case study demonstration site for the Advanced Simulation Capability for Environmental Management (ASCEM) Program. This modeling evaluation explored the importance of the wide variability in bulk wastewater chemistry as it propagated through the basins. The results are intended to generally improve and refine the conceptualization of infiltration of chemical wastes from seepage basins receiving variable waste streams and to specifically support the ASCEM case study model for the F Area Seepage Basins. Specific goals of this work included: (1) develop a technically-based 'charge-balanced' nominal source term chemistry for water infiltrating into the subsurface during basin operations, (2) estimate the nature of short term and long term variability in infiltrating water to support scenario development for uncertainty quantification (i.e., UQ analysis), (3) identify key geochemical factors that control overall basin water chemistry and the projected variability/stability, and (4) link wastewater chemistry to the subsurface based on monitoring well data. Results from this study provide data and understanding that can be used in further modeling efforts of the F Area groundwater plume. As identified in this study, key geochemical factors affecting basin

1. Modelling the Spatial Isotope Variability of Precipitation in Syria

Energy Technology Data Exchange (ETDEWEB)

Kattan, Z.; Kattaa, B. [Department of Geology, Atomic Energy Commission of Syria (AECS), Damascus (Syrian Arab Republic)

2013-07-15

Attempts were made to model the spatial variability of environmental isotope ({sup 18}O, {sup 2}H and {sup 3}H) compositions of precipitation in Syria. Rainfall samples collected periodically on a monthly basis from 16 different stations were used for processing and demonstrating the spatial distributions of these isotopes, together with those of deuterium excess (d) values. Mathematically, the modelling process was based on applying simple polynomial models that take into consideration the effects of the major geographic factors (Lon.E., Lat.N., and altitude). The modelling results for the spatial distribution of the stable isotopes ({sup 18}O and {sup 2}H) were generally good, as shown by the high correlation coefficients (R{sup 2} = 0.7-0.8) calculated between the observed and predicted values. For the deuterium excess and tritium distributions, the results were only approximate (R{sup 2} = 0.5-0.6). Improving the simulation of spatial isotope variability probably requires the incorporation of other local meteorological factors, such as relative air humidity, precipitation amount and vapour pressure, which are supposed to play an important role in such an arid country. (author)
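The kind of polynomial fit described, with observed-versus-predicted R{sup 2}, can be sketched as a least-squares regression on the geographic factors. The station coordinates and the noise-free synthetic delta-18O signal below are hypothetical placeholders, not the paper's data or coefficients:

```python
import numpy as np

# Hypothetical station data: longitude E, latitude N, altitude (m).
lon = np.array([36.3, 37.1, 38.6, 35.9, 40.2, 36.8])
lat = np.array([33.5, 34.8, 35.9, 32.7, 36.1, 35.2])
alt = np.array([600.0, 720.0, 350.0, 910.0, 280.0, 450.0])
# Synthetic, noise-free delta-18O signal (illustrative coefficients only).
d18o = -4.0 - 0.0025 * alt - 0.20 * (lat - 33.0)

# First-order polynomial in the three geographic factors.
X = np.column_stack([np.ones_like(lon), lon, lat, alt])
coef, *_ = np.linalg.lstsq(X, d18o, rcond=None)

# R^2 between observed and model-predicted values.
pred = X @ coef
ss_res = np.sum((d18o - pred) ** 2)
ss_tot = np.sum((d18o - d18o.mean()) ** 2)
r2 = 1.0 - ss_res / ss_tot
```

Because the toy signal is exactly linear in latitude and altitude, the fit recovers it almost perfectly; real monthly isotope data would yield the R{sup 2} = 0.7-0.8 range reported above.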

2. Modeling Power Systems as Complex Adaptive Systems

Energy Technology Data Exchange (ETDEWEB)

Chassin, David P.; Malard, Joel M.; Posse, Christian; Gangopadhyaya, Asim; Lu, Ning; Katipamula, Srinivas; Mallow, J V.

2004-12-30

Physical analogs have shown considerable promise for understanding the behavior of complex adaptive systems, including macroeconomics, biological systems, social networks, and electric power markets. Many of today's most challenging technical and policy questions can be reduced to a distributed economic control problem. Indeed, economically based control of large-scale systems is founded on the conjecture that the price-based regulation (e.g., auctions, markets) results in an optimal allocation of resources and emergent optimal system control. This report explores the state-of-the-art physical analogs for understanding the behavior of some econophysical systems and deriving stable and robust control strategies for using them. We review and discuss applications of some analytic methods based on a thermodynamic metaphor, according to which the interplay between system entropy and conservation laws gives rise to intuitive and governing global properties of complex systems that cannot be otherwise understood. We apply these methods to the question of how power markets can be expected to behave under a variety of conditions.

3. Linking Complexity and Sustainability Theories: Implications for Modeling Sustainability Transitions

Directory of Open Access Journals (Sweden)

Camaren Peter

2014-03-01

Full Text Available In this paper, we deploy complexity theory as the foundation for integration of different theoretical approaches to sustainability and develop a rationale for a complexity-based framework for modeling transitions to sustainability. We propose a framework based on a comparison of the complex systems’ properties that characterize the different theories dealing with transitions to sustainability. We argue that adopting a complexity theory based approach for modeling transitions requires going beyond deterministic frameworks, by adopting a probabilistic, integrative, inclusive and adaptive approach that can support transitions. We also illustrate how this complexity-based modeling framework can be implemented, i.e., how it can be used to select modeling techniques that address the particular properties of complex systems that we need to understand in order to model transitions to sustainability. In doing so, we establish a complexity-based approach towards modeling sustainability transitions that caters for the broad range of complex systems’ properties required to model transitions to sustainability.

4. Seychelles Dome variability in a high resolution ocean model

Science.gov (United States)

Nyadjro, E. S.; Jensen, T.; Richman, J. G.; Shriver, J. F.

2016-02-01

The Seychelles-Chagos Thermocline Ridge (SCTR; 5°S-10°S, 50°E-80°E) in the tropical Southwest Indian Ocean (SWIO) has been recognized as a region of prominence with regard to climate variability in the Indian Ocean. Convective activity in this region has regional consequences, as it affects the socio-economic livelihood of people, especially in the countries along the Indian Ocean rim. The SCTR is characterized by a quasi-permanent upwelling that is often associated with thermocline shoaling. This upwelling affects sea surface temperature (SST) variability. We present results on the variability and dynamics of the SCTR as simulated by the 1/12° high resolution HYbrid Coordinate Ocean Model (HYCOM). It is observed that, locally, wind stress affects SST via Ekman pumping of cooler subsurface waters, mixing and anomalous zonal advection. Remotely, wind stress curl in the eastern equatorial Indian Ocean generates westward-propagating Rossby waves that impact the depth of the thermocline, which in turn impacts SST variability in the SCTR region. The variability of the contributions of these processes, especially with regard to the Indian Ocean Dipole (IOD), is further examined. In a typical positive IOD (PIOD) year, the net vertical velocity in the SCTR is negative year-round as the easterlies along the region are intensified, leading to a strong positive curl. This vertical velocity is caused mainly by anomalous local Ekman downwelling (with a peak during September-November), the direct opposite of the climatological scenario, in which local Ekman pumping is positive (upwelling favorable) year-round. The anomalous remote contribution to the vertical velocity changes is minimal, especially during the developing and peak stages of PIOD events. In a typical negative IOD (NIOD) year, anomalous vertical velocity is positive almost year-round, with peaks in May and October. The remote contribution is positive, in contrast to the climatology and most PIOD years.

5. Shared Variable Oriented Parallel Precompiler for SPMD Model

Institute of Scientific and Technical Information of China (English)

1995-01-01

At present, commercial parallel computer systems with distributed memory architecture are usually provided with parallel FORTRAN or parallel C compilers, which are just traditional sequential FORTRAN or C compilers expanded with communication statements. Programmers suffer from writing parallel programs with explicit communication statements. The Shared Variable Oriented Parallel Precompiler (SVOPP) proposed in this paper can automatically generate appropriate communication statements based on shared variables for the SPMD (Single Program Multiple Data) computation model, greatly easing parallel programming while preserving high communication efficiency. The core function of the parallel C precompiler has been successfully verified on a transputer-based parallel computer. Its prominent performance shows that SVOPP is probably a breakthrough in parallel programming technique.

6. Geospatial models of climatological variables distribution over Colombian territory

International Nuclear Information System (INIS)

Baron Leguizamon, Alicia

2003-01-01

Various studies have examined the relation of air temperature and precipitation to altitude; however, these have been point or regional analyses, and none has become a tool that reproduces the spatial distribution of temperature or precipitation while taking orography into account and allowing data on these variables to be obtained for a given place. Building on this relation and on multi-annual monthly records of air temperature and precipitation, the vertical temperature gradients were calculated and precipitation was related to altitude. Then, using the altitude data provided by the DEM, temperature and precipitation values were calculated, and those values were interpolated to generate monthly geospatial models.
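The vertical temperature gradient described above is essentially the least-squares slope of temperature against station altitude, which can then be applied to DEM altitudes. A minimal sketch with hypothetical station data; the -6.5 °C/km gradient is the standard-atmosphere value, not a result from this study:

```python
def lapse_rate(alts_m, temps_c):
    """Least-squares slope of temperature vs. altitude (deg C per metre)."""
    n = len(alts_m)
    mean_a = sum(alts_m) / n
    mean_t = sum(temps_c) / n
    num = sum((a - mean_a) * (t - mean_t) for a, t in zip(alts_m, temps_c))
    den = sum((a - mean_a) ** 2 for a in alts_m)
    return num / den

# Hypothetical stations lying exactly on a -6.5 C/km gradient.
alts = [50.0, 400.0, 1200.0, 2100.0, 2600.0]
temps = [25.0 - 0.0065 * a for a in alts]

gradient = lapse_rate(alts, temps)

def temp_at(dem_altitude_m):
    """Project temperature onto a DEM cell using the fitted gradient."""
    return temps[0] + gradient * (dem_altitude_m - alts[0])
```

Real multi-annual monthly data would of course scatter around the fitted line; interpolating the projected values over the DEM then yields the monthly geospatial surfaces described.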

7. Adaptation of endothelial cells to physiologically-modeled, variable shear stress.

Directory of Open Access Journals (Sweden)

Joseph S Uzarski

Full Text Available Endothelial cell (EC) function is mediated by variable hemodynamic shear stress patterns at the vascular wall, where complex shear stress profiles directly correlate with blood flow conditions that vary temporally based on metabolic demand. The interactions of these more complex and variable shear fields with EC have not been represented in hemodynamic flow models. We hypothesized that EC exposed to pulsatile shear stress that changes in magnitude and duration, modeled directly from real-time physiological variations in heart rate, would elicit phenotypic changes relevant to their critical roles in thrombosis, hemostasis, and inflammation. Here we designed a physiological flow (PF) model based on short-term temporal changes in blood flow observed in vivo and compared it to static culture and steady flow (SF) at a fixed pulse frequency of 1.3 Hz. Results show significant changes in gene regulation as a function of temporally variable flow, indicating a reduced wound phenotype more representative of quiescence. EC cultured under PF exhibited significantly higher endothelial nitric oxide synthase (eNOS) activity (PF: 176.0±11.9 nmol/10^5 EC; SF: 115.0±12.5 nmol/10^5 EC; p = 0.002) and lower TNF-α-induced HL-60 leukocyte adhesion (PF: 37±6 HL-60 cells/mm^2; SF: 111±18 HL-60 cells/mm^2; p = 0.003) than cells cultured under SF, which is consistent with a more quiescent anti-inflammatory and anti-thrombotic phenotype. In vitro models have become increasingly adept at mimicking natural physiology and in doing so have clarified the importance of both chemical and physical cues that drive cell function. These data illustrate that the variability in metabolic demand, and the subsequent changes in perfusion resulting in constantly variable shear stress, plays a key role in EC function that has not previously been described.

8. Mathematical modeling and optimization of complex structures

CERN Document Server

Repin, Sergey; Tuovinen, Tero

2016-01-01

This volume contains selected papers in three closely related areas: mathematical modeling in mechanics, numerical analysis, and optimization methods. The papers are based upon talks presented at the International Conference for Mathematical Modeling and Optimization in Mechanics, held in Jyväskylä, Finland, March 6-7, 2014, dedicated to Prof. N. Banichuk on the occasion of his 70th birthday. The articles are written by well-known scientists working in computational mechanics and in optimization of complicated technical models. The volume also contains papers discussing the historical development, the state of the art, new ideas, and open problems arising in modern continuum mechanics and applied optimization problems. Several papers are concerned with mathematical problems in numerical analysis, which are also closely related to important mechanical models. The main topics treated include: * Computer simulation methods in mechanics, physics, and biology; * Variational problems and methods; minimiz...

9. Hierarchical Models of the Nearshore Complex System

National Research Council Canada - National Science Library

2004-01-01

.... This grant was termination funding for the Werner group, specifically aimed at finishing up and publishing research related to synoptic imaging of near shore bathymetry, testing models for beach cusp...

10. Integrated Modeling of Complex Optomechanical Systems

Science.gov (United States)

Andersen, Torben; Enmark, Anita

2011-09-01

Mathematical modeling and performance simulation are playing an increasing role in large, high-technology projects. There are two reasons: first, projects are now larger than they were before, and the high cost calls for detailed performance prediction before construction. Second, in particular for space-related designs, it is often difficult to test systems under realistic conditions beforehand, and mathematical modeling is then needed to verify in advance that a system will work as planned. Computers have become much more powerful, permitting calculations that were not possible before. At the same time, mathematical tools have been further developed and found acceptance in the community. Particular progress has been made in the fields of structural mechanics, optics and control engineering, where new methods have gained importance over the last few decades. Also, methods for combining optical, structural and control system models into global models have found widespread use. Such combined models are usually called integrated models and were the subject of this symposium. The objective was to bring together people working in the fields of ground-based optical telescopes, ground-based radio telescopes, and space telescopes. We succeeded in doing so and had 39 interesting presentations and many fruitful discussions during coffee and lunch breaks and social arrangements. We are grateful that so many top-ranked specialists found their way to Kiruna, and we believe that these proceedings will prove valuable during much future work.

11. Some Comparisons of Complexity in Dictionary-Based and Linear Computational Models

Czech Academy of Sciences Publication Activity Database

Gnecco, G.; Kůrková, Věra; Sanguineti, M.

2011-01-01

Roč. 24, č. 2 (2011), s. 171-182 ISSN 0893-6080 R&D Project s: GA ČR GA201/08/1744 Grant - others:CNR - AV ČR project 2010-2012(XE) Complexity of Neural-Network and Kernel Computational Models Institutional research plan: CEZ:AV0Z10300504 Keywords : linear approximation schemes * variable-basis approximation schemes * model complexity * worst-case errors * neural networks * kernel models Subject RIV: IN - Informatics, Computer Science Impact factor: 2.182, year: 2011

12. Initial CGE Model Results Summary Exogenous and Endogenous Variables Tests

Energy Technology Data Exchange (ETDEWEB)

Edwards, Brian Keith [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Boero, Riccardo [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Rivera, Michael Kelly [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)

2017-08-07

The following discussion presents initial results of tests of the most recent version of the National Infrastructure Simulation and Analysis Center Dynamic Computable General Equilibrium (CGE) model developed by Los Alamos National Laboratory (LANL). The intent is to test and assess the model’s behavioral properties. The tests evaluated whether the predicted impacts are reasonable from a qualitative perspective, that is, whether a predicted change, be it an increase or decrease in other model variables, is consistent with prior economic intuition and expectations about that change. One purpose of this effort is to determine whether model changes are needed in order to improve its behavior qualitatively and quantitatively.

13. An empirical model for independent control of variable speed refrigeration system

International Nuclear Information System (INIS)

Li Hua; Jeong, Seok-Kwon; Yoon, Jung-In; You, Sam-Sang

2008-01-01

This paper deals with an empirical dynamic model for decoupling control of the variable speed refrigeration system (VSRS). To cope with the inherent complexity and nonlinearity of the system dynamics, the model parameters are first obtained from experimental data. In the study, the dynamic characteristics of indoor temperature and superheat are assumed to follow first-order models with time delay. While the compressor frequency and the opening angle of the electronic expansion valve are varying, the indoor temperature and the superheat interfere with each other in the VSRS. Thus, a decoupling model is proposed to eliminate this interference. Finally, the experiment and simulation results indicate that the proposed model offers a more tractable means of describing the actual VSRS compared to other models currently available.
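The assumed plant structure, first order with time delay, can be illustrated with a forward-Euler step response. The gain, time constant and dead time below are placeholder values, not the parameters identified in the paper:

```python
def fopdt_step(gain, tau, delay, dt, t_end):
    """Forward-Euler step response of a first-order-plus-dead-time model:
    tau * dy/dt + y = gain * u(t - delay), with a unit step u at t = 0.
    The output holds at zero during the dead time, then rises
    exponentially toward gain with time constant tau."""
    y, out = 0.0, []
    for i in range(int(t_end / dt)):
        u = 1.0 if i * dt >= delay else 0.0
        y += dt * (gain * u - y) / tau
        out.append(y)
    return out

# Placeholder parameters: gain 2, time constant 10 s, dead time 2 s.
response = fopdt_step(gain=2.0, tau=10.0, delay=2.0, dt=0.01, t_end=100.0)
```

In a decoupling design, one such transfer function would be identified per input-output pair (compressor frequency to temperature, valve opening to superheat, and the two cross channels), and the decoupler built from their ratios.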

14. A New Integrated Weighted Model in SNOW-V10: Verification of Categorical Variables

Science.gov (United States)

Huang, Laura X.; Isaac, George A.; Sheng, Grant

2014-01-01

This paper presents the verification results for nowcasts of seven categorical variables from an integrated weighted model (INTW) and the underlying numerical weather prediction (NWP) models. Nowcasting, or short range forecasting (0-6 h), over complex terrain with sufficient accuracy is highly desirable but a very challenging task. A weighting, evaluation, bias correction and integration system (WEBIS) for generating nowcasts by integrating NWP forecasts and high frequency observations was used during the Vancouver 2010 Olympic and Paralympic Winter Games as part of the Science of Nowcasting Olympic Weather for Vancouver 2010 (SNOW-V10) project. Forecast data from the Canadian high-resolution deterministic NWP system with three nested grids (at 15-, 2.5- and 1-km horizontal grid spacing) were selected as the background gridded data for generating the integrated nowcasts. Seven forecast variables, temperature, relative humidity, wind speed, wind gust, visibility, ceiling and precipitation rate, are treated as categorical variables for verifying the integrated weighted forecasts. By analyzing the verification of forecasts from INTW and the NWP models across 15 sites, the integrated weighted model was found to produce more accurate forecasts for the seven selected forecast variables, regardless of location. This is based on the multi-categorical Heidke skill scores for the test period of 12 February to 21 March 2010.
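The multi-categorical Heidke skill score used for this verification compares the observed hit rate with the hit rate expected from a random forecast sharing the same marginal frequencies. A hedged sketch over a generic forecast-observation contingency table (the counts are illustrative only, not SNOW-V10 data):

```python
def heidke_skill_score(table):
    """Multi-category Heidke skill score.
    table[i][j] = count of cases forecast in category i and observed
    in category j. HSS = 1 for a perfect forecast, 0 for a forecast
    no better than random chance with the same marginals."""
    n = sum(sum(row) for row in table)
    k = len(table)
    hits = sum(table[i][i] for i in range(k)) / n
    # Expected hit rate for a random forecast with the same marginals.
    expected = sum(
        (sum(table[i]) / n) * (sum(table[j][i] for j in range(k)) / n)
        for i in range(k)
    )
    return (hits - expected) / (1.0 - expected)

# Two-category toy tables: a perfect forecast and a chance-level one.
perfect = [[5, 0], [0, 5]]
chance = [[2, 2], [2, 2]]
```

For the seven variables above, the table would have one row and column per forecast category (e.g. visibility or ceiling classes), with one score computed per variable and site.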

15. Variable slip wind generator modeling for real-time simulation

Energy Technology Data Exchange (ETDEWEB)

Gagnon, R.; Brochu, J.; Turmel, G. [Hydro-Quebec, Varennes, PQ (Canada). IREQ

2006-07-01

A model of a wind turbine using a variable slip wound-rotor induction machine was presented. The model was created as part of a library of generic wind generator models intended for wind integration studies. The stator winding of the wind generator was connected directly to the grid and the rotor was driven by the turbine through a drive train. The variable rotor resistance was synthesized by an external resistor in parallel with a diode rectifier. A forced-commutated power electronic device (IGBT) was connected to the wound rotor by slip rings and brushes. Simulations were conducted in a Matlab/Simulink environment using SimPowerSystems blocks to model power system elements and Simulink blocks to model the turbine, control system and drive train. Detailed descriptions of the turbine, the drive train and the control system were provided. The model's implementation in the simulator was also described. A case study demonstrating the real-time simulation of a wind generator connected at the distribution level of a power system was presented. Results of the case study were then compared with results obtained from the SimPowerSystems off-line simulation. Results showed good agreement between the waveforms, demonstrating the conformity of the real-time and off-line simulations. The capability of Hypersim for real-time simulation of wind turbines with power electronic converters in a distribution network was demonstrated. It was concluded that hardware-in-the-loop (HIL) simulation of wind turbine controllers for wind integration studies in power systems is now feasible. 5 refs., 1 tab., 6 figs.

16. Smart modeling and simulation for complex systems practice and theory

CERN Document Server

Ren, Fenghui; Zhang, Minjie; Ito, Takayuki; Tang, Xijin

2015-01-01

This book aims to provide a description of these new Artificial Intelligence technologies and approaches to the modeling and simulation of complex systems, as well as an overview of the latest scientific efforts in this field such as the platforms and/or the software tools for smart modeling and simulating complex systems. These tasks are difficult to accomplish using traditional computational approaches due to the complex relationships of components and distributed features of resources, as well as the dynamic work environments. In order to effectively model the complex systems, intelligent technologies such as multi-agent systems and smart grids are employed to model and simulate the complex systems in the areas of ecosystem, social and economic organization, web-based grid service, transportation systems, power systems and evacuation systems.

17. The sigma model on complex projective superspaces

Energy Technology Data Exchange (ETDEWEB)

Candu, Constantin; Mitev, Vladimir; Schomerus, Volker [DESY, Hamburg (Germany). Theory Group; Quella, Thomas [Amsterdam Univ. (Netherlands). Inst. for Theoretical Physics; Saleur, Hubert [CEA Saclay, 91 - Gif-sur-Yvette (France). Inst. de Physique Theorique; USC, Los Angeles, CA (United States). Physics Dept.

2009-08-15

The sigma model on projective superspaces CP{sup S-1|S} gives rise to a continuous family of interacting 2D conformal field theories which are parametrized by the curvature radius R and the theta angle {theta}. Our main goal is to determine the spectrum of the model, non-perturbatively, as a function of both parameters. We succeed in doing so for all open boundary conditions preserving the full global symmetry of the model. In string theory parlance, these correspond to volume filling branes that are equipped with a monopole line bundle and connection. The paper consists of two parts. In the first part, we approach the problem within the continuum formulation. Combining combinatorial arguments with perturbative studies and some simple free field calculations, we determine a closed formula for the partition function of the theory. This is then tested numerically in the second part. There we propose a spin chain regularization of the CP{sup S-1|S} model with open boundary conditions and use it to determine the spectrum at the conformal fixed point. The numerical results are in remarkable agreement with the continuum analysis. (orig.)

18. The sigma model on complex projective superspaces

International Nuclear Information System (INIS)

Candu, Constantin; Mitev, Vladimir; Schomerus, Volker; Quella, Thomas; Saleur, Hubert; USC, Los Angeles, CA

2009-08-01

The sigma model on projective superspaces CP^(S-1|S) gives rise to a continuous family of interacting 2D conformal field theories which are parametrized by the curvature radius R and the theta angle θ. Our main goal is to determine the spectrum of the model, non-perturbatively, as a function of both parameters. We succeed in doing so for all open boundary conditions preserving the full global symmetry of the model. In string theory parlance, these correspond to volume filling branes that are equipped with a monopole line bundle and connection. The paper consists of two parts. In the first part, we approach the problem within the continuum formulation. Combining combinatorial arguments with perturbative studies and some simple free field calculations, we determine a closed formula for the partition function of the theory. This is then tested numerically in the second part. There we propose a spin chain regularization of the CP^(S-1|S) model with open boundary conditions and use it to determine the spectrum at the conformal fixed point. The numerical results are in remarkable agreement with the continuum analysis. (orig.)

19. Estimation and variable selection for generalized additive partial linear models

KAUST Repository

Wang, Li

2011-08-01

We study generalized additive partial linear models, proposing the use of polynomial spline smoothing for estimation of nonparametric functions, and deriving quasi-likelihood based estimators for the linear parameters. We establish asymptotic normality for the estimators of the parametric components. The procedure avoids solving large systems of equations as in kernel-based procedures and thus results in gains in computational simplicity. We further develop a class of variable selection procedures for the linear parameters by employing a nonconcave penalized quasi-likelihood, which is shown to have an asymptotic oracle property. Monte Carlo simulations and an empirical example are presented for illustration. © Institute of Mathematical Statistics, 2011.
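In the Gaussian case, the quasi-likelihood estimation described above reduces to least squares on a design matrix that concatenates the linear covariates with a polynomial spline basis for the nonparametric component. A minimal sketch on synthetic data (the variable names and the data-generating model are illustrative, not from the paper):

```python
import numpy as np

rng = np.random.default_rng(1)
n = 300
x = rng.normal(size=(n, 2))          # parametric (linear) covariates
t = rng.uniform(0, 1, n)             # covariate entering nonparametrically
f = np.sin(2 * np.pi * t)            # true smooth function (illustrative)
y = x @ np.array([1.5, -0.7]) + f + rng.normal(0, 0.3, n)

# Cubic truncated-power spline basis for f(t) with interior knots.
knots = np.linspace(0.1, 0.9, 8)
B = np.column_stack([t, t**2, t**3] + [np.maximum(t - k, 0) ** 3 for k in knots])

# Gaussian case: quasi-likelihood estimation reduces to least squares
# on [intercept | linear part | spline basis].
design = np.column_stack([np.ones(n), x, B])
coef, *_ = np.linalg.lstsq(design, y, rcond=None)
beta_hat = coef[1:3]   # estimates of the linear parameters
```

The paper's variable selection step would then penalize `beta_hat` with a nonconcave penalty; the unpenalized fit above only illustrates the estimation stage.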

20. Context Tree Estimation in Variable Length Hidden Markov Models

OpenAIRE

Dumont, Thierry

2011-01-01

We address the issue of context tree estimation in variable length hidden Markov models. We propose an estimator of the context tree of the hidden Markov process which needs no prior upper bound on the depth of the context tree. We prove that the estimator is strongly consistent. This uses information-theoretic mixture inequalities in the spirit of Finesso and Lorenzo (Consistent estimation of the order for Markov and hidden Markov chains, 1990) and E. Gassiat and S. Boucheron (Optimal error exp...

1. Remote sensing of the Canadian Arctic: Modelling biophysical variables

Science.gov (United States)

Liu, Nanfeng

It is anticipated that Arctic vegetation will respond in a variety of ways to altered temperature and precipitation patterns expected with climate change, including changes in phenology, productivity, biomass, cover and net ecosystem exchange. Remote sensing provides data and data processing methodologies for monitoring and assessing Arctic vegetation over large areas. The goal of this research was to explore the potential of hyperspectral and high spatial resolution multispectral remote sensing data for modelling two important Arctic biophysical variables: Percent Vegetation Cover (PVC) and the fraction of Absorbed Photosynthetically Active Radiation (fAPAR). A series of field experiments were conducted to collect PVC and fAPAR at three Canadian Arctic sites: (1) Sabine Peninsula, Melville Island, NU; (2) Cape Bounty Arctic Watershed Observatory (CBAWO), Melville Island, NU; and (3) Apex River Watershed (ARW), Baffin Island, NU. Linear relationships between biophysical variables and Vegetation Indices (VIs) were examined at different spatial scales using field spectra (for the Sabine Peninsula site) and high spatial resolution satellite data (for the CBAWO and ARW sites). At the Sabine Peninsula site, hyperspectral VIs exhibited a better performance for modelling PVC than multispectral VIs due to their capacity for sampling fine spectral features. The optimal hyperspectral bands were located at important spectral features observed in Arctic vegetation spectra, including leaf pigment absorption in the red wavelengths and at the red-edge, leaf water absorption in the near infrared, and leaf cellulose and lignin absorption in the shortwave infrared. At the CBAWO and ARW sites, field PVC and fAPAR exhibited strong correlations (R2 > 0.70) with the NDVI (Normalized Difference Vegetation Index) derived from high-resolution WorldView-2 data. Similarly, high spatial resolution satellite-derived fAPAR was correlated to MODIS fAPAR (R2 = 0.68), with a systematic
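The NDVI used in these regressions is the standard normalized difference of near-infrared and red reflectance. A small sketch of the index and the linear-fit step on synthetic reflectances (the data and the PVC relationship here are invented for illustration, not field measurements):

```python
import numpy as np

def ndvi(nir, red):
    """Normalized Difference Vegetation Index: (NIR - Red) / (NIR + Red)."""
    return (nir - red) / (nir + red)

# Synthetic band reflectances for illustration only.
rng = np.random.default_rng(0)
red = rng.uniform(0.03, 0.10, 50)
nir = rng.uniform(0.20, 0.50, 50)
vi = ndvi(nir, red)

# Hypothetical percent-vegetation-cover values linearly related to NDVI.
pvc = 10 + 80 * vi + rng.normal(0, 2, 50)

# Ordinary least squares fit of PVC on NDVI, as in the scale-dependent analyses.
slope, intercept = np.polyfit(vi, pvc, 1)
r2 = np.corrcoef(vi, pvc)[0, 1] ** 2
```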

2. Classification criteria of syndromes by latent variable models

DEFF Research Database (Denmark)

Petersen, Janne

2010-01-01

, although this is often desired. I have proposed a new method for predicting class membership that, in contrast to methods based on posterior probabilities of class membership, yields consistent estimates when regressed on explanatory variables in a subsequent analysis. There are four different basic models...... analyses. Part 1: HALS engages different phenotypic changes of peripheral lipoatrophy and central lipohypertrophy.  There are several different definitions of HALS and no consensus on the number of phenotypes. Many of the definitions consist of counting fulfilled criteria on markers and do not include...

3. Modeling intraindividual variability with repeated measures data methods and applications

CERN Document Server

Hershberger, Scott L

2013-01-01

This book examines how individuals behave across time and to what degree that behavior changes, fluctuates, or remains stable. It features the most current methods on modeling repeated measures data as reported by a distinguished group of experts in the field. The goal is to make the latest techniques used to assess intraindividual variability accessible to a wide range of researchers. Each chapter is written in a "user-friendly" style such that even the "novice" data analyst can easily apply the techniques. Each chapter features: a minimum discussion of mathematical detail; an empirical examp

4. A complex autoregressive model and application to monthly temperature forecasts

Directory of Open Access Journals (Sweden)

X. Gu

2005-11-01

Full Text Available A complex autoregressive model was established based on the mathematical derivation of least squares for the complex number domain, referred to as complex least squares. The model differs from the conventional approach in which the real and imaginary parts are calculated separately. In predicting monthly temperature anomalies in July at 160 meteorological stations in mainland China, this new model shows better forecasts than those from other conventional statistical models. The conventional statistical models include an autoregressive model in which the real and imaginary parts are treated separately, an autoregressive model in the real number domain, and a persistence-forecast model.
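The key point of the abstract is that ordinary least squares carries over directly to complex-valued data, so AR coefficients can be estimated without splitting real and imaginary parts. A hedged sketch on a synthetic complex AR(1) series (NumPy's `lstsq` accepts complex matrices; the series and coefficient below are illustrative, not the paper's data):

```python
import numpy as np

def fit_complex_ar(z, p):
    """Least-squares AR(p) fit carried out directly in the complex domain,
    rather than fitting real and imaginary parts separately."""
    # Lagged design matrix: row t holds z[t-1], ..., z[t-p].
    Z = np.column_stack([z[p - k - 1 : len(z) - k - 1] for k in range(p)])
    # lstsq minimizes ||Z a - z[p:]||^2 over complex coefficients a.
    a, *_ = np.linalg.lstsq(Z, z[p:], rcond=None)
    return a

# Illustrative complex series: AR(1) with complex coefficient 0.8*exp(0.3i).
rng = np.random.default_rng(2)
true_a = 0.8 * np.exp(0.3j)
z = np.zeros(500, dtype=complex)
for t in range(1, 500):
    z[t] = true_a * z[t - 1] + (rng.normal(0, 0.1) + 1j * rng.normal(0, 0.1))

a_hat = fit_complex_ar(z, 1)
```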

5. Understanding complex urban systems integrating multidisciplinary data in urban models

CERN Document Server

Gebetsroither-Geringer, Ernst; Atun, Funda; Werner, Liss

2016-01-01

This book is devoted to the modeling and understanding of complex urban systems. This second volume of Understanding Complex Urban Systems focuses on the challenges of the modeling tools, concerning, e.g., the quality and quantity of data and the selection of an appropriate modeling approach. It is meant to support urban decision-makers—including municipal politicians, spatial planners, and citizen groups—in choosing an appropriate modeling approach for their particular modeling requirements. The contributors to this volume are from different disciplines, but all share the same goal: optimizing the representation of complex urban systems. They present and discuss a variety of approaches for dealing with data-availability problems and finding appropriate modeling approaches—and not only in terms of computer modeling. The selection of articles featured in this volume reflects a broad variety of new and established modeling approaches such as: - An argument for using Big Data methods in conjunction with Age...

6. Variable recruitment fluidic artificial muscles: modeling and experiments

International Nuclear Information System (INIS)

Bryant, Matthew; Meller, Michael A; Garcia, Ephrahim

2014-01-01

We investigate taking advantage of the lightweight, compliant nature of fluidic artificial muscles to create variable recruitment actuators in the form of artificial muscle bundles. Several actuator elements at different diameter scales are packaged to act as a single actuator device. The actuator elements of the bundle can be connected to the fluidic control circuit so that different groups of actuator elements, much like individual muscle fibers, can be activated independently depending on the required force output and motion. This novel actuation concept allows us to save energy by effectively impedance matching the active size of the actuators on the fly based on the instantaneous required load. This design also allows a single bundled actuator to operate in substantially different force regimes, which could be valuable for robots that need to perform a wide variety of tasks and interact safely with humans. This paper proposes, models and analyzes the actuation efficiency of this actuator concept. The analysis shows that variable recruitment operation can create an actuator that reduces throttling valve losses to operate more efficiently over a broader range of its force–strain operating space. We also present preliminary results of the design, fabrication and experimental characterization of three such bioinspired variable recruitment actuator prototypes. (paper)

7. Quantifying uncertainty, variability and likelihood for ordinary differential equation models

LENUS (Irish Health Repository)

Weisse, Andrea Y

2010-10-28

Abstract Background In many applications, ordinary differential equation (ODE) models are subject to uncertainty or variability in initial conditions and parameters. Both uncertainty and variability can be quantified in terms of a probability density function on the state and parameter space. Results The partial differential equation that describes the evolution of this probability density function has a form that is particularly amenable to application of the well-known method of characteristics. The value of the density at some point in time is directly accessible by the solution of the original ODE extended by a single extra dimension (for the value of the density). This leads to simple methods for studying uncertainty, variability and likelihood, with significant advantages over more traditional Monte Carlo and related approaches, especially when studying regions with low probability. Conclusions While such approaches based on the method of characteristics are common practice in other disciplines, their advantages for the study of biological systems have so far remained unrecognized. Several examples illustrate performance and accuracy of the approach and its limitations.
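The construction described in the Results section can be sketched for a scalar linear ODE: augment the state with the density value and transport it along characteristics via the Liouville equation dp/dt = -p ∂f/∂x. The example below is illustrative and uses a case with a known closed form so the transported density can be checked:

```python
import numpy as np
from scipy.integrate import solve_ivp

# ODE dx/dt = f(x) with f(x) = -a*x; uncertain initial condition x0 ~ N(0, 1).
a = 0.5

def extended_rhs(t, s):
    # s = [x, p]: the original state plus the density value transported
    # along the characteristic (Liouville equation): dp/dt = -p * df/dx.
    x, p = s
    return [-a * x, -p * (-a)]

def density_at(x0, t_end):
    p0 = np.exp(-0.5 * x0**2) / np.sqrt(2 * np.pi)   # N(0, 1) initial density
    sol = solve_ivp(extended_rhs, [0, t_end], [x0, p0], rtol=1e-8, atol=1e-10)
    return sol.y[0, -1], sol.y[1, -1]

# For this linear ODE, x(t) = x0*exp(-a*t) is exactly N(0, exp(-2*a*t)),
# so the transported density has a closed form to compare against.
xT, pT = density_at(1.0, 1.0)
```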

8. Uncertainty importance measure for models with correlated normal variables

International Nuclear Information System (INIS)

Hao, Wenrui; Lu, Zhenzhou; Wei, Pengfei

2013-01-01

In order to explore the contributions of correlated input variables to the variance of the model output, the contribution decomposition of the correlated input variables based on Mara's definition is investigated in detail. Taking a quadratic polynomial output without cross terms as an illustration, the solution of the contribution decomposition is derived analytically using statistical inference theory. After the analytical solutions are validated against numerical examples, they are applied to two engineering examples to show their wide applicability. The derived analytical solutions can be used directly to recognize the contributions of the correlated input variables in the case of a quadratic or linear polynomial output without cross terms, and the analytical inference method can be extended to higher-order polynomial outputs. Additionally, the origins of the interaction contribution of the correlated inputs are analyzed, and the existing contribution indices are compared, so that engineers can select the indices suited to the information they need. Finally, the degeneration of the correlated inputs to uncorrelated ones and some computational issues are discussed.
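For the simplest case mentioned, a linear output of correlated normal inputs, variance contributions can be written down analytically. The sketch below uses a covariance-based share, Cov(Y, a_i X_i), which includes correlation effects and sums exactly to the output variance; this is an illustration of the idea, not Mara's full decomposition:

```python
import numpy as np

# Linear model Y = a^T X with correlated normal inputs X ~ N(0, Sigma).
a = np.array([1.0, 2.0, 0.5])
Sigma = np.array([[1.0, 0.6, 0.0],
                  [0.6, 1.0, 0.2],
                  [0.0, 0.2, 1.0]])

# Total output variance: Var(Y) = a^T Sigma a.
total_var = a @ Sigma @ a

# Covariance-based share of input i: Cov(Y, a_i X_i) = a_i * (Sigma @ a)_i.
# These shares absorb the correlation terms and sum exactly to Var(Y).
contrib = a * (Sigma @ a)
```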

9. Evaluating measurement models in clinical research: covariance structure analysis of latent variable models of self-conception.

Science.gov (United States)

Hoyle, R H

1991-02-01

Indirect measures of psychological constructs are vital to clinical research. On occasion, however, the meaning of indirect measures of psychological constructs is obfuscated by statistical procedures that do not account for the complex relations between items and latent variables and among latent variables. Covariance structure analysis (CSA) is a statistical procedure for testing hypotheses about the relations among items that indirectly measure a psychological construct and relations among psychological constructs. This article introduces clinical researchers to the strengths and limitations of CSA as a statistical procedure for conceiving and testing structural hypotheses that are not tested adequately with other statistical procedures. The article is organized around two empirical examples that illustrate the use of CSA for evaluating measurement models with correlated error terms, higher-order factors, and measured and latent variables.

10. Relations between segmental and motor variability in prosodically complex nonword sequences.

Science.gov (United States)

Goffman, Lisa; Gerken, Louann; Lucchesi, Julie

2007-04-01

To assess how prosodic prominence and hierarchical foot structure influence segmental and articulatory aspects of speech production, specifically segmental accuracy and variability, and oral movement trajectory variability. Thirty individuals participated: 10 young adults, 10 children who are normally developing, and 10 children diagnosed with specific language impairment. Segmental error and segmental variability and movement trajectory variability were compared in low and high prosodic prominence conditions (i.e., strong and weak syllables) and in different prosodic foot structures. Between-participants findings were that both groups of children showed more segmental error and segmental variability and more movement trajectory variability than did adults. A similar within-participant pattern of results was observed for all 3 groups. Prosodic prominence influenced both segmental and motor levels of analysis, with weak syllables produced less accurately and with more lip and jaw movement trajectory variability than strong syllables. However, hierarchical foot structure affected segmental but not motor measures of speech production accuracy and variability. Motor and segmental variables were not consistently aligned. This pattern of results has clinical implications because inferences about motor variability may not directly follow from observations of segmental variability.

11. How does complex terrain influence responses of carbon and water cycle processes to climate variability and climate change? (Invited)

Science.gov (United States)

Bond, B. J.; Peterson, K.; McKane, R.; Lajtha, K.; Quandt, D. J.; Allen, S. T.; Sell, S.; Daly, C.; Harmon, M. E.; Johnson, S. L.; Spies, T.; Sollins, P.; Abdelnour, A. G.; Stieglitz, M.

2010-12-01

We are pursuing the ambitious goal of understanding how complex terrain influences the responses of carbon and water cycle processes to climate variability and climate change. Our studies take place in H.J. Andrews Experimental Forest, an LTER (Long Term Ecological Research) site situated in Oregon’s central-western Cascade Range. Decades of long-term measurements and intensive research have revealed influences of topography on vegetation patterns, disturbance history, and hydrology. More recent research has shown surprising interactions between microclimates and synoptic weather patterns due to cold air drainage and pooling in mountain valleys. Using these data and insights, in addition to a recent LiDAR (Light Detection and Ranging) reconnaissance and a small sensor network, we are employing process-based models, including “SPA” (Soil-Plant-Atmosphere, developed by Mathew Williams of the University of Edinburgh), and “VELMA” (Visualizing Ecosystems for Land Management Alternatives, developed by Marc Stieglitz and colleagues of the Georgia Institute of Technology) to focus on two important features of mountainous landscapes: heterogeneity (both spatial and temporal) and connectivity (atmosphere-canopy-hillslope-stream). Our research questions include: 1) Do fine-scale spatial and temporal heterogeneity result in emergent properties at the basin scale, and if so, what are they? 2) How does connectivity across ecosystem components affect system responses to climate variability and change? Initial results show that for environmental drivers that elicit non-linear ecosystem responses on the plot scale, such as solar radiation, soil depth and soil water content, fine-scale spatial heterogeneity may produce unexpected emergent properties at larger scales. The results from such modeling experiments are necessarily a function of the supporting algorithms. However, comparisons based on models such as SPA and VELMA that operate at much different spatial scales

12. Fluid flow modeling in complex areas*, **

Directory of Open Access Journals (Sweden)

Poullet Pascal

2012-04-01

Full Text Available We show first results of a 3D simulation of sea currents in a realistic context. We use the full Navier–Stokes equations for an incompressible viscous fluid. The problem is solved using a second-order incremental projection method associated with the finite-volume staggered (MAC) scheme for the spatial discretization. After validation on classical cases, it is used in a numerical simulation of the Pointe à Pitre harbour area. The use of the fictitious domain method permits us to take into account the complexity of bathymetric data and allows us to work with regular meshes, thus preserving the efficiency essential for a 3D code. In this study, we present the first results of simulating an incompressible viscous fluid flow in a real environmental context. The approach uses a fictitious domain method to account for a highly irregular three-dimensional physical domain. The numerical scheme combines an incremental projection scheme with finite volumes using control volumes adapted to a staggered mesh. Validation tests are carried out for the double lid-driven cavity test case as well as flow in a channel with an asymmetrically placed obstacle.

13. Structural identifiability of cyclic graphical models of biological networks with latent variables.

Science.gov (United States)

Wang, Yulin; Lu, Na; Miao, Hongyu

2016-06-13

Graphical models have long been used to describe biological networks for a variety of important tasks such as the determination of key biological parameters, and the structure of a graphical model ultimately determines whether such unknown parameters can be unambiguously obtained from experimental observations (i.e., the identifiability problem). Limited by resources or technical capacities, complex biological networks are usually only partially observed in experiments, which introduces latent variables into the corresponding graphical models. A number of previous studies have tackled the parameter identifiability problem for graphical models such as linear structural equation models (SEMs) with or without latent variables. However, the limited resolution and efficiency of existing approaches necessarily calls for further development of novel structural identifiability analysis algorithms. An efficient structural identifiability analysis algorithm is developed in this study for a broad range of network structures. The proposed method adopts Wright's path coefficient method to generate identifiability equations in the form of symbolic polynomials, and then converts these symbolic equations to binary matrices (called identifiability matrices). Several matrix operations are introduced for identifiability matrix reduction with system equivalency maintained. Based on the reduced identifiability matrices, the structural identifiability of each parameter is determined. A number of benchmark models are used to verify the validity of the proposed approach. Finally, the network module for influenza A virus replication is employed as a real example to illustrate the application of the proposed approach in practice. The proposed approach can deal with cyclic networks with latent variables. The key advantage is that it intentionally avoids symbolic computation and is thus highly efficient. Also, this method is capable of determining the identifiability of each single parameter and

14. Cumulative complexity: a functional, patient-centered model of patient complexity can improve research and practice.

Science.gov (United States)

Shippee, Nathan D; Shah, Nilay D; May, Carl R; Mair, Frances S; Montori, Victor M

2012-10-01

To design a functional, patient-centered model of patient complexity with practical applicability to analytic design and clinical practice. Existing literature on patient complexity has mainly identified its components descriptively and in isolation, lacking clarity as to their combined functions in disrupting care or to how complexity changes over time. The authors developed a cumulative complexity model, which integrates existing literature and emphasizes how clinical and social factors accumulate and interact to complicate patient care. A narrative literature review is used to explicate the model. The model emphasizes a core, patient-level mechanism whereby complicating factors impact care and outcomes: the balance between patient workload of demands and patient capacity to address demands. Workload encompasses the demands on the patient's time and energy, including demands of treatment, self-care, and life in general. Capacity concerns ability to handle work (e.g., functional morbidity, financial/social resources, literacy). Workload-capacity imbalances comprise the mechanism driving patient complexity. Treatment and illness burdens serve as feedback loops, linking negative outcomes to further imbalances, such that complexity may accumulate over time. With its components largely supported by existing literature, the model has implications for analytic design, clinical epidemiology, and clinical practice. Copyright © 2012 Elsevier Inc. All rights reserved.

15. Viscous dark energy models with variable G and Λ

International Nuclear Information System (INIS)

Arbab, Arbab I.

2008-01-01

We consider a cosmological model with bulk viscosity η, a variable cosmological constant Λ ∝ ρ^(-α), α = const, and a variable gravitational constant G. The model exhibits many interesting cosmological features. Inflation proceeds due to the presence of bulk viscosity and dark energy without requiring the equation of state p = -ρ. During the inflationary era the energy density ρ does not remain constant, as in the de Sitter type. Moreover, the cosmological and gravitational constants increase exponentially with time, whereas the energy density and viscosity decrease exponentially with time. The rate of mass creation during inflation is found to be very large, suggesting that all matter in the universe is created during inflation. (author)

16. Calibration of a complex activated sludge model for the full-scale wastewater treatment plant

OpenAIRE

Liwarska-Bizukojc, Ewa; Olejnik, Dorota; Biernacki, Rafal; Ledakowicz, Stanislaw

2011-01-01

In this study, the results of the calibration of the complex activated sludge model implemented in BioWin software for the full-scale wastewater treatment plant are presented. Within the calibration of the model, sensitivity analysis of its parameters and the fractions of carbonaceous substrate were performed. In the steady-state and dynamic calibrations, a successful agreement between the measured and simulated values of the output variables was achieved. Sensitivity analysis revealed that u...

17. Variable thickness transient ground-water flow model. Volume 3. Program listings

International Nuclear Information System (INIS)

Reisenauer, A.E.

1979-12-01

The Assessment of Effectiveness of Geologic Isolation Systems (AEGIS) Program is developing and applying the methodology for assessing the far-field, long-term post-closure safety of deep geologic nuclear waste repositories. AEGIS is being performed by Pacific Northwest Laboratory (PNL) under contract with the Office of Nuclear Waste Isolation (ONWI) for the Department of Energy (DOE). One task within AEGIS is the development of methodology for analysis of the consequences (water pathway) from loss of repository containment as defined by various release scenarios. Analysis of the long-term, far-field consequences of release scenarios requires the application of numerical codes which simulate the hydrologic systems, model the transport of released radionuclides through the hydrologic systems to the biosphere, and, where applicable, assess the radiological dose to humans. Hydrologic and transport models are available at several levels of complexity or sophistication. Model selection and use are determined by the quantity and quality of input data. Model development under AEGIS and related programs provides three levels of hydrologic models, two levels of transport models, and one level of dose models (with several separate models). This is the third of 3 volumes of the description of the VTT (Variable Thickness Transient) Groundwater Hydrologic Model - second level (intermediate complexity) two-dimensional saturated groundwater flow

18. Principles of a simulation model for a variable-speed pitch-regulated wind turbine

Energy Technology Data Exchange (ETDEWEB)

Camblong, H.; Vidal, M.R.; Puiggali, J.R.

2004-07-01

This paper considers the basic principles for establishing a simulation model of a variable-speed, pitch-regulated wind turbine. This model is used to test various control algorithms designed with the aim of maximising energy yield and robustness and minimising flicker emission and dynamic drive train loads. One of the most complex elements of such a system is the interaction between wind and turbine. First, a detailed and didactic analysis of this interaction is given. This is used to understand some complicated phenomena, and to help design a simpler and more efficient (in terms of processing time) mathematical model. Additional submodels are given for the mechanical coupling, the pitch system and the electrical power system, before the entire model is validated by comparison with field measurements on a 180 kW turbine. The complete simulation model is flexible, efficient and allows easy evaluation of different control algorithms. (author)

19. Constrained variability of modeled T:ET ratio across biomes

Science.gov (United States)

Fatichi, Simone; Pappas, Christoforos

2017-07-01

A large variability (35-90%) in the ratio of transpiration to total evapotranspiration (referred to here as T:ET) across biomes or even at the global scale has been documented by a number of studies carried out with different methodologies. Previous empirical results also suggest that T:ET does not covary with mean precipitation and has a positive dependence on leaf area index (LAI). Here we use a mechanistic ecohydrological model, with a refined process-based description of evaporation from the soil surface, to investigate the variability of T:ET across biomes. Numerical results reveal a more constrained range and higher mean of T:ET (70 ± 9%, mean ± standard deviation) when compared to observation-based estimates. T:ET is confirmed to be independent of mean precipitation, while it is found to be correlated with LAI seasonally but uncorrelated across multiple sites. Larger LAI increases evaporation from interception but diminishes ground evaporation, with the two effects largely compensating each other. These results offer mechanistic, model-based evidence to the ongoing research about the patterns of T:ET and the factors influencing its magnitude across biomes.

20. Modeling the variability of shapes of a human placenta.

Science.gov (United States)

Yampolsky, M; Salafia, C M; Shlakhter, O; Haas, D; Eucker, B; Thorp, J

2008-09-01

Placentas are generally round/oval in shape, but "irregular" shapes are common. In the Collaborative Perinatal Project data, irregular shapes were associated with lower birth weight for placental weight, suggesting variably shaped placentas have altered function. (I) Using a 3D one-parameter model of placental vascular growth based on Diffusion Limited Aggregation (an accepted model for generating highly branched fractals), models were run with a branching density growth parameter either fixed or perturbed at either 5-7% or 50% of model growth. (II) In a data set with detailed measures of 1207 placental perimeters, radial standard deviations of placental shapes were calculated from the umbilical cord insertion, and from the centroid of the shape (a biologically arbitrary point). These two were compared to the difference between the observed scaling exponent and the Kleiber scaling exponent (0.75), considered optimal for vascular fractal transport systems. Using Spearman's rank correlation, the radial standard deviation (from the centroid) was associated with differences from the Kleiber exponent (p=0.006). A dynamical DLA model recapitulates multilobate and "star" placental shapes via changing fractal branching density. We suggest that (1) irregular placental outlines reflect deformation of the underlying placental fractal vascular network, (2) such irregularities in placental outline indicate sub-optimal branching structure of the vascular tree, and (3) this accounts for the lower birth weight observed in non-round/oval placentas in the Collaborative Perinatal Project.
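Diffusion Limited Aggregation itself is simple to state: random walkers launched from outside the cluster stick on first contact with it. A minimal 2D lattice sketch follows (parameters and lattice details are illustrative; the paper's model is 3D and adds a branching-density growth parameter not reproduced here):

```python
import numpy as np

def dla_cluster(n_particles, max_radius=20, seed=0):
    """Grow a 2D diffusion-limited-aggregation cluster on a square lattice."""
    rng = np.random.default_rng(seed)
    size = 2 * max_radius + 5
    grid = np.zeros((size, size), dtype=bool)
    c = size // 2
    grid[c, c] = True                       # seed particle at the centre
    steps = ((1, 0), (-1, 0), (0, 1), (0, -1))
    r_cluster = 0.0                         # current cluster radius
    for _ in range(n_particles):
        stuck = False
        while not stuck:
            # Launch a walker on a circle just outside the current cluster.
            ang = rng.uniform(0, 2 * np.pi)
            r0 = r_cluster + 3
            x = c + int(round(r0 * np.cos(ang)))
            y = c + int(round(r0 * np.sin(ang)))
            while True:
                dx, dy = steps[rng.integers(4)]
                x, y = x + dx, y + dy
                # Discard walkers that wander too far away; relaunch.
                if (x - c) ** 2 + (y - c) ** 2 > (max_radius + 1) ** 2:
                    break
                # Stick on first contact with an occupied neighbour.
                if any(grid[x + sx, y + sy] for sx, sy in steps):
                    grid[x, y] = True
                    r_cluster = max(r_cluster, np.hypot(x - c, y - c))
                    stuck = True
                    break
    return grid

cluster = dla_cluster(40)
```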

1. Passengers, Crowding and Complexity : Models for passenger oriented public transport

NARCIS (Netherlands)

P.C. Bouman (Paul)

2017-01-01

Passengers, Crowding and Complexity was written as part of the Complexity in Public Transport (ComPuTr) project funded by the Netherlands Organisation for Scientific Research (NWO). This thesis studies in three parts how microscopic data can be used in models that have the potential

2. Stability of Rotor Systems: A Complex Modelling Approach

DEFF Research Database (Denmark)

Kliem, Wolfhard; Pommer, Christian; Stoustrup, Jakob

1996-01-01

A large class of rotor systems can be modelled by a complex matrix differential equation of secondorder. The angular velocity of the rotor plays the role of a parameter. We apply the Lyapunov matrix equation in a complex setting and prove two new stability results which are compared...

3. First worldwide proficiency study on variable-number tandem-repeat typing of Mycobacterium tuberculosis complex strains.

NARCIS (Netherlands)

Beer, J.L. de; Kremer, K.; Kodmon, C.; Supply, P.; Soolingen, D. van

2012-01-01

Although variable-number tandem-repeat (VNTR) typing has gained recognition as the new standard for the DNA fingerprinting of Mycobacterium tuberculosis complex (MTBC) isolates, external quality control programs have not yet been developed. Therefore, we organized the first multicenter proficiency

4. Multi-scale climate modelling over Southern Africa using a variable-resolution global model

CSIR Research Space (South Africa)

Engelbrecht, FA

2011-12-01

Full Text Available Multi-scale climate modelling over Southern Africa using a variable-resolution global model. FA Engelbrecht, WA Landman, CJ Engelbrecht, S Landman, MM Bopape, B Roux, JL McGregor and M Thatcher. Dynamic climate models have become the primary tools for the projection of future climate change, at both the global and regional scales. Keywords: multi-scale climate modelling, variable-resolution atmospheric model

5. Complex versus simple models: ion-channel cardiac toxicity prediction.

Science.gov (United States)

Mistry, Hitesh B

2018-01-01

There is growing interest in applying detailed mathematical models of the heart for ion-channel related cardiac toxicity prediction. However, a debate exists as to whether such complex models are required. Here an assessment of the predictive performance of two established large-scale biophysical cardiac models and a simple linear model, Bnet, was conducted. Three ion-channel data-sets were extracted from the literature. Each compound was designated a cardiac risk category using two different classification schemes based on information within CredibleMeds. The predictive performance of each model within each data-set for each classification scheme was assessed via a leave-one-out cross validation. Overall the Bnet model performed equally as well as the leading cardiac models in two of the data-sets and outperformed both cardiac models on the third. These results highlight the importance of benchmarking complex versus simple models but also encourage the development of simple models.
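The leave-one-out protocol used in the comparison is generic and easy to sketch. The classifier below is a hypothetical stand-in (a threshold on a scalar score with synthetic labels), not the published Bnet formula or the paper's data:

```python
import numpy as np

# Hypothetical data: one scalar risk score per compound plus a binary
# risk label; a stand-in for the paper's ion-channel data-sets.
rng = np.random.default_rng(3)
score = np.concatenate([rng.normal(-1, 1, 20), rng.normal(1, 1, 20)])
label = np.array([0] * 20 + [1] * 20)

def loocv_accuracy(score, label):
    """Leave-one-out cross validation of a simple threshold classifier."""
    correct = 0
    n = len(score)
    for i in range(n):
        train = np.arange(n) != i
        # Fit the simplest possible model: a threshold midway between
        # the two class means of the training fold.
        thr = (score[train][label[train] == 0].mean()
               + score[train][label[train] == 1].mean()) / 2
        pred = int(score[i] > thr)
        correct += int(pred == label[i])
    return correct / n

acc = loocv_accuracy(score, label)
```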

6. Complex versus simple models: ion-channel cardiac toxicity prediction

Directory of Open Access Journals (Sweden)

Hitesh B. Mistry

2018-02-01

Full Text Available There is growing interest in applying detailed mathematical models of the heart for ion-channel-related cardiac toxicity prediction. However, there is debate as to whether such complex models are required. Here, an assessment of the predictive performance of two established large-scale biophysical cardiac models and a simple linear model, Bnet, was conducted. Three ion-channel data-sets were extracted from the literature. Each compound was assigned a cardiac risk category using two different classification schemes based on information within CredibleMeds. The predictive performance of each model within each data-set for each classification scheme was assessed via leave-one-out cross-validation. Overall, the Bnet model performed as well as the leading cardiac models in two of the data-sets and outperformed both cardiac models on the third. These results highlight the importance of benchmarking complex versus simple models and also encourage the development of simple models.

7. Modeling Air-Quality in Complex Terrain Using Mesoscale and ...

African Journals Online (AJOL)

Air-quality in a complex terrain (Colorado-River-Valley/Grand-Canyon Area, Southwest U.S.) is modeled using a higher-order closure mesoscale model and a higher-order closure dispersion model. Non-reactive tracers have been released in the Colorado-River valley, during winter and summer 1992, to study the ...

8. Surface-complexation models for sorption onto heterogeneous surfaces

International Nuclear Information System (INIS)

Harvey, K.B.

1997-10-01

This report provides a description of the discrete-logK spectrum model, together with a description of its derivation, and of its place in the larger context of surface-complexation modelling. The tools necessary to apply the discrete-logK spectrum model are discussed, and background information appropriate to this discussion is supplied as appendices. (author)

9. Transient modelling of a natural circulation loop under variable pressure

International Nuclear Information System (INIS)

Vianna, Andre L.B.; Faccini, Jose L.H.; Su, Jian; Instituto de Engenharia Nuclear

2017-01-01

The objective of the present work is to model the transient operation of a natural circulation loop that is one-tenth the height of a typical Passive Residual Heat Removal (PRHR) system of an Advanced Pressurized Water Reactor and was designed to meet the corresponding single- and two-phase flow similarity criteria. The loop consists of a core barrel with electrically heated rods, upper and lower plena interconnected by hot and cold pipe legs to a seven-tube countercurrent shell heat exchanger, and an expansion tank with a descending tube. The loop operation was characterized by a long transient, during which a phenomenon of self-pressurization, without self-regulation of the pressure, was experimentally observed. This represented a unique situation, named natural circulation under variable pressure (NCVP). The self-pressurization originated from the air trapped in the expansion tank, which was compressed by the thermal expansion of the loop water as it heated up during each experiment. The mathematical model, initially oriented to single-phase flow, included the heat capacity of the structure and employed a cubic polynomial approximation for the density in the calculation of the buoyancy term. The heater was modelled taking into account the different heat capacities of the heating elements and the heater walls. The heat exchanger was modelled considering the heating of the coolant during heat exchange. The self-pressurization was modelled as an isentropic compression of a perfect gas. The whole model was implemented computationally as a set of finite-difference equations, solved by an explicit marching algorithm in time with an upwind scheme for the space discretization. The computational program was implemented in MATLAB. Several experiments were carried out in the natural circulation loop, with the coolant flow rate and the heating power as control parameters. The variables used in the
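The self-pressurization term described above, isentropic compression of trapped air by the thermal dilatation of the loop water, can be sketched as follows. All numbers (volumes, expansion coefficient, temperature rise) are illustrative, not taken from the record.

```python
# Sketch of the self-pressurization mechanism only: air trapped in the
# expansion tank is compressed isentropically (P * V**gamma = const) as the
# heated loop water expands and reduces the gas volume. Numbers are made up.

GAMMA = 1.4  # ratio of specific heats for air

def gas_pressure(p0, v0, v):
    """Pressure after isentropic compression from (p0, v0) to volume v."""
    return p0 * (v0 / v) ** GAMMA

def water_dilatation(v_water, beta, dT):
    """Volume increase of the loop water for a temperature rise dT."""
    return v_water * beta * dT

p0 = 101325.0      # initial trapped-air pressure, Pa
v_gas0 = 0.020     # initial trapped-air volume, m^3
v_loop = 0.200     # loop water volume, m^3
beta = 4.0e-4      # thermal expansion coefficient of water, 1/K

dv = water_dilatation(v_loop, beta, 50.0)        # 50 K heat-up
p = gas_pressure(p0, v_gas0, v_gas0 - dv)
print(round(p))    # pressure rises above p0 as the gas is squeezed
```

In a full transient model this update would be evaluated at each time step of the explicit marching scheme, coupling the energy balance to the loop pressure.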

10. Transient modelling of a natural circulation loop under variable pressure

Energy Technology Data Exchange (ETDEWEB)

Vianna, Andre L.B.; Faccini, Jose L.H.; Su, Jian, E-mail: avianna@nuclear.ufrj.br, E-mail: sujian@nuclear.ufrj.br, E-mail: faccini@ien.gov.br [Coordenacao de Pos-Graduacao e Pesquisa de Engenharia (COPPE/UFRJ), Rio de Janeiro, RJ (Brazil). Programa de Engenharia Nuclear; Instituto de Engenharia Nuclear (IEN/CNEN-RJ), Rio de Janeiro, RJ (Brazil). Lab. de Termo-Hidraulica Experimental

2017-07-01

The objective of the present work is to model the transient operation of a natural circulation loop that is one-tenth the height of a typical Passive Residual Heat Removal (PRHR) system of an Advanced Pressurized Water Reactor and was designed to meet the corresponding single- and two-phase flow similarity criteria. The loop consists of a core barrel with electrically heated rods, upper and lower plena interconnected by hot and cold pipe legs to a seven-tube countercurrent shell heat exchanger, and an expansion tank with a descending tube. The loop operation was characterized by a long transient, during which a phenomenon of self-pressurization, without self-regulation of the pressure, was experimentally observed. This represented a unique situation, named natural circulation under variable pressure (NCVP). The self-pressurization originated from the air trapped in the expansion tank, which was compressed by the thermal expansion of the loop water as it heated up during each experiment. The mathematical model, initially oriented to single-phase flow, included the heat capacity of the structure and employed a cubic polynomial approximation for the density in the calculation of the buoyancy term. The heater was modelled taking into account the different heat capacities of the heating elements and the heater walls. The heat exchanger was modelled considering the heating of the coolant during heat exchange. The self-pressurization was modelled as an isentropic compression of a perfect gas. The whole model was implemented computationally as a set of finite-difference equations, solved by an explicit marching algorithm in time with an upwind scheme for the space discretization. The computational program was implemented in MATLAB. Several experiments were carried out in the natural circulation loop, with the coolant flow rate and the heating power as control parameters. The variables used in the

11. NULLIJN, a program to calculate zero curves of a function of two variables of which one may be complex

International Nuclear Information System (INIS)

Jagher, P.C. de

1978-01-01

When an algorithm for a function f of two variables, for instance a dispersion function f(ω, k) or a potential V(r, z), is known, the program calculates and plots the zero curves, thus giving a graphical representation of an implicitly defined function. One of the variables may be complex. A quadratic extrapolation, followed by a regula falsi algorithm to find a zero is used to calculate a succession of zero-points along a curve. The starting point of a curve is found by detecting a change of sign of the function on the edge of the area G that is examined. Curves that lie entirely inside G are not found. Starting points of curves where the imaginary part of the complex variable is large might be missed. (Auth.)
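The root-location step the record describes, detecting a sign change between consecutive samples and refining it with regula falsi, can be sketched in one dimension. The test function is arbitrary; NULLIJN itself works on user-supplied dispersion functions.

```python
# Sketch of NULLIJN's starting-point search along one grid direction:
# scan for a sign change of f between consecutive samples, then refine
# the bracketed zero with regula falsi (false position).
import math

def regula_falsi(f, a, b, tol=1e-12, max_iter=100):
    fa, fb = f(a), f(b)
    assert fa * fb < 0, "requires a sign change on [a, b]"
    c = a
    for _ in range(max_iter):
        c = b - fb * (b - a) / (fb - fa)   # secant through (a, fa), (b, fb)
        fc = f(c)
        if abs(fc) < tol:
            return c
        if fa * fc < 0:
            b, fb = c, fc
        else:
            a, fa = c, fc
    return c

def first_zero_on_edge(f, xs):
    """Scan samples xs for a sign change; refine it to a starting zero."""
    for x0, x1 in zip(xs, xs[1:]):
        if f(x0) * f(x1) < 0:
            return regula_falsi(f, x0, x1)
    return None   # no sign change detected: curve (if any) is missed

f = lambda x: math.cos(x) - 0.5          # zero at pi/3
xs = [0.2 * k for k in range(20)]
root = first_zero_on_edge(f, xs)
print(root)                              # close to pi/3 ~ 1.0472
```

The `return None` branch mirrors the limitation noted in the abstract: curves whose sign changes never cross the sampled edge are not found.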

12. On spin and matrix models in the complex plane

International Nuclear Information System (INIS)

Damgaard, P.H.; Heller, U.M.

1993-01-01

We describe various aspects of statistical mechanics defined in the complex temperature or coupling-constant plane. Using exactly solvable models, we analyse such aspects as renormalization group flows in the complex plane, the distribution of partition function zeros, and the question of new coupling-constant symmetries of complex-plane spin models. The double-scaling form of matrix models is shown to be exactly equivalent to finite-size scaling of two-dimensional spin systems. This is used to show that the string susceptibility exponents derived from matrix models can be obtained numerically with very high accuracy from the scaling of finite-N partition function zeros in the complex plane. (orig.)
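Partition function zeros in the complex coupling plane can be computed explicitly for exactly solvable models. As a minimal illustration (not the matrix-model analysis of the paper), the zero-field 1D Ising chain with periodic boundaries has Z_N(K) = (2 cosh K)^N + (2 sinh K)^N, which vanishes where tanh(K)^N = -1:

```python
# Fisher (complex-temperature) zeros of the zero-field 1D Ising chain with
# periodic boundaries: Z_N(K) = 0 exactly where tanh(K)^N = -1, i.e. where
# tanh K is an odd 2N-th root of unity.
import cmath

def partition_function(K, N):
    return (2 * cmath.cosh(K)) ** N + (2 * cmath.sinh(K)) ** N

def fisher_zeros(N):
    """Complex couplings K with Z_N(K) = 0 (N even avoids tanh K = -1)."""
    return [cmath.atanh(cmath.exp(1j * cmath.pi * (2 * k + 1) / N))
            for k in range(N)]

N = 8
zeros = fisher_zeros(N)
# each zero should annihilate the partition function up to round-off
print(max(abs(partition_function(K, N)) for K in zeros))
```

Tracking how such zeros pinch the real axis as N grows is the finite-size-scaling analysis that the paper connects to the double-scaling limit of matrix models.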

13. A Framework for Modeling and Analyzing Complex Distributed Systems

National Research Council Canada - National Science Library

Lynch, Nancy A; Shvartsman, Alex Allister

2005-01-01

Report developed under STTR contract for topic AF04-T023. This Phase I project developed a modeling language and laid a foundation for computational support tools for specifying, analyzing, and verifying complex distributed system designs...

14. Modelling the self-organization and collapse of complex networks

Modelling the self-organization and collapse of complex networks. Sanjay Jain Department of Physics and Astrophysics, University of Delhi Jawaharlal Nehru Centre for Advanced Scientific Research, Bangalore Santa Fe Institute, Santa Fe, New Mexico.

15. Uncertainty and variability in computational and mathematical models of cardiac physiology.

Science.gov (United States)

Mirams, Gary R; Pathmanathan, Pras; Gray, Richard A; Challenor, Peter; Clayton, Richard H

2016-12-01

Mathematical and computational models of cardiac physiology have been an integral component of cardiac electrophysiology since its inception, and are collectively known as the Cardiac Physiome. We identify and classify the numerous sources of variability and uncertainty in model formulation, parameters and other inputs that arise from both natural variation in experimental data and lack of knowledge. The impact of uncertainty on the outputs of Cardiac Physiome models is not well understood, and this limits their utility as clinical tools. We argue that incorporating variability and uncertainty should be a high priority for the future of the Cardiac Physiome. We suggest investigating the adoption of approaches developed in other areas of science and engineering while recognising unique challenges for the Cardiac Physiome; it is likely that novel methods will be necessary that require engagement with the mathematics and statistics community. The Cardiac Physiome effort is one of the most mature and successful applications of mathematical and computational modelling for describing and advancing the understanding of physiology. After five decades of development, physiological cardiac models are poised to realise the promise of translational research via clinical applications such as drug development and patient-specific approaches as well as ablation, cardiac resynchronisation and contractility modulation therapies. For models to be included as a vital component of the decision process in safety-critical applications, rigorous assessment of model credibility will be required. This White Paper describes one aspect of this process by identifying and classifying sources of variability and uncertainty in models as well as their implications for the application and development of cardiac models. We stress the need to understand and quantify the sources of variability and uncertainty in model inputs, and the impact of model structure and complexity and their consequences for

16. Size and complexity in model financial systems

Science.gov (United States)

Arinaminpathy, Nimalan; Kapadia, Sujit; May, Robert M.

2012-01-01

The global financial crisis has precipitated an increasing appreciation of the need for a systemic perspective toward financial stability. For example: What role do large banks play in systemic risk? How should capital adequacy standards recognize this role? How is stability shaped by concentration and diversification in the financial system? We explore these questions using a deliberately simplified, dynamic model of a banking system that combines three different channels for direct transmission of contagion from one bank to another: liquidity hoarding, asset price contagion, and the propagation of defaults via counterparty credit risk. Importantly, we also introduce a mechanism for capturing how swings in “confidence” in the system may contribute to instability. Our results highlight that the importance of relatively large, well-connected banks in system stability scales more than proportionately with their size: the impact of their collapse arises not only from their connectivity, but also from their effect on confidence in the system. Imposing tougher capital requirements on larger banks than smaller ones can thus enhance the resilience of the system. Moreover, these effects are more pronounced in more concentrated systems, and continue to apply, even when allowing for potential diversification benefits that may be realized by larger banks. We discuss some tentative implications for policy, as well as conceptual analogies in ecosystem stability and in the control of infectious diseases. PMID:23091020
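Of the three contagion channels listed above, the counterparty credit risk channel is the simplest to sketch: a failed bank's interbank liabilities are written off by its creditors, and any creditor whose cumulative losses exceed its capital fails in turn. The topology and numbers below are illustrative only, not the paper's calibration.

```python
# Sketch of the counterparty-default channel only: iterate write-offs until
# no further bank's losses exceed its capital buffer.

def cascade(capital, exposures, initial_failures):
    """exposures[i][j]: amount bank i has lent to bank j (lost if j fails)."""
    failed = set(initial_failures)
    losses = [0.0] * len(capital)
    frontier = set(initial_failures)
    while frontier:
        new = set()
        for j in frontier:                     # newly failed debtor j
            for i in range(len(capital)):
                if i in failed:
                    continue
                losses[i] += exposures[i][j]   # creditor i writes off its claim
                if losses[i] >= capital[i]:
                    new.add(i)
        failed |= new
        frontier = new
    return failed

capital = [2.0, 1.0, 5.0, 1.0]                 # bank 1's buffer is thin
exposures = [
    [0.0, 1.5, 0.0, 1.0],                      # bank 0 lent to banks 1 and 3
    [0.0, 0.0, 0.0, 1.5],
    [1.0, 1.0, 0.0, 0.0],
    [0.0, 0.0, 0.0, 0.0],
]
print(sorted(cascade(capital, exposures, {3})))  # -> [0, 1, 3]: 3's failure spreads
```

The well-capitalized bank 2 absorbs its losses, which is the mechanism behind the paper's point that tougher capital requirements on large, well-connected banks enhance system resilience.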

17. Algebraic computability and enumeration models recursion theory and descriptive complexity

CERN Document Server

Nourani, Cyrus F

2016-01-01

This book, Algebraic Computability and Enumeration Models: Recursion Theory and Descriptive Complexity, presents new techniques with functorial models to address important areas on pure mathematics and computability theory from the algebraic viewpoint. The reader is first introduced to categories and functorial models, with Kleene algebra examples for languages. Functorial models for Peano arithmetic are described toward important computational complexity areas on a Hilbert program, leading to computability with initial models. Infinite language categories are also introduced to explain descriptive complexity with recursive computability with admissible sets and urelements. Algebraic and categorical realizability is staged on several levels, addressing new computability questions with omitting types realizably. Further applications to computing with ultrafilters on sets and Turing degree computability are examined. Functorial models computability is presented with algebraic trees realizing intuitionistic type...

18. Modeling of Complex Life Cycle Prediction Based on Cell Division

Directory of Open Access Journals (Sweden)

Fucheng Zhang

2017-01-01

Full Text Available Effective fault diagnosis and reasonable life expectancy are of great significance and practical engineering value for the safety, reliability, and maintenance cost of equipment and the working environment. Current equipment life-prediction methods include prediction based on condition monitoring, combined forecasting models, and data-driven approaches, most of which require a large amount of data. To address this issue, we propose learning from the mechanism of cell division in organisms. We established a life prediction model of moderate complexity by studying the complex multifactor correlation life model. In this paper, we model life prediction on cell division. Experiments show that our model can effectively simulate the state of cell division. Using this model as a reference, we will apply it to complex life prediction for equipment.

19. Applications of Nonlinear Dynamics Model and Design of Complex Systems

CERN Document Server

In, Visarath; Palacios, Antonio

2009-01-01

This edited book is aimed at interdisciplinary, device-oriented, applications of nonlinear science theory and methods in complex systems. In particular, applications directed to nonlinear phenomena with space and time characteristics. Examples include: complex networks of magnetic sensor systems, coupled nano-mechanical oscillators, nano-detectors, microscale devices, stochastic resonance in multi-dimensional chaotic systems, biosensors, and stochastic signal quantization. "applications of nonlinear dynamics: model and design of complex systems" brings together the work of scientists and engineers that are applying ideas and methods from nonlinear dynamics to design and fabricate complex systems.

20. Coping with Complexity Model Reduction and Data Analysis

CERN Document Server

Gorban, Alexander N

2011-01-01

This volume contains the extended version of selected talks given at the international research workshop 'Coping with Complexity: Model Reduction and Data Analysis', Ambleside, UK, August 31 - September 4, 2009. This book is deliberately broad in scope and aims at promoting new ideas and methodological perspectives. The topics of the chapters range from theoretical analysis of complex and multiscale mathematical models to applications in e.g., fluid dynamics and chemical kinetics.

1. Mathematical Models to Determine Stable Behavior of Complex Systems

Science.gov (United States)

Sumin, V. I.; Dushkin, A. V.; Smolentseva, T. E.

2018-05-01

The paper analyzes a possibility to predict functioning of a complex dynamic system with a significant amount of circulating information and a large number of random factors impacting its functioning. Functioning of the complex dynamic system is described as a chaotic state, self-organized criticality and bifurcation. This problem may be resolved by modeling such systems as dynamic ones, without applying stochastic models and taking into account strange attractors.

2. Persistence of urban organic aerosols composition: Decoding their structural complexity and seasonal variability

International Nuclear Information System (INIS)

Matos, João T.V.; Duarte, Regina M.B.O.; Lopes, Sónia P.; Silva, Artur M.S.; Duarte, Armando C.

2017-01-01

Organic aerosols (OAs) are typically defined as highly complex matrices whose composition changes in time and space. Focusing on the time dimension, this work uses two-dimensional nuclear magnetic resonance (2D NMR) techniques to examine the structural features of water-soluble (WSOM) and alkaline-soluble organic matter (ASOM) sequentially extracted from fine atmospheric aerosols collected in an urban setting during cold and warm seasons. This study reveals molecular signatures not previously decoded in NMR-related studies of OAs as meaningful source markers. Although the ASOM is less hydrophilic and structurally diverse than its WSOM counterpart, both fractions feature a core of heteroatom-rich branched aliphatics of both primary (natural and anthropogenic) and secondary origin, aromatic secondary organics originating from anthropogenic aromatic precursors, as well as primary saccharides and amino sugar derivatives from biogenic emissions. These common structures represent the 2D NMR spectral signatures that are present in both seasons and can thus be seen as an “annual background” profile of the structural composition of OAs at the urban location. Lignin-derived structures, nitroaromatics, disaccharides, and anhydrosaccharide signatures were also identified in the WSOM samples only from periods identified as smoke-impacted, which reflects the influence of biomass-burning sources. The NMR dataset on the H–C molecular backbone was also used to propose a semi-quantitative structural model of urban WSOM, which will aid efforts toward more realistic studies relating the chemical properties of OAs to their atmospheric behavior. - Highlights: • 2D NMR spectroscopy was used to decode urban organic aerosols. • Water- and alkaline-soluble components of urban organic aerosols have been compared. • Persistence of urban organic aerosol composition across different seasons. • Annual background profile of the structural features of urban organic aerosols. • Semi

3. Understanding complex urban systems multidisciplinary approaches to modeling

CERN Document Server

Gurr, Jens; Schmidt, J

2014-01-01

Understanding Complex Urban Systems takes as its point of departure the insight that the challenges of global urbanization and the complexity of urban systems cannot be understood – let alone ‘managed’ – by sectoral and disciplinary approaches alone. But while there has recently been significant progress in broadening and refining the methodologies for the quantitative modeling of complex urban systems, in deepening the theoretical understanding of cities as complex systems, or in illuminating the implications for urban planning, there is still a lack of well-founded conceptual thinking on the methodological foundations and the strategies of modeling urban complexity across the disciplines. Bringing together experts from the fields of urban and spatial planning, ecology, urban geography, real estate analysis, organizational cybernetics, stochastic optimization, and literary studies, as well as specialists in various systems approaches and in transdisciplinary methodologies of urban analysis, the volum...

4. Dynamic complexities in a parasitoid-host-parasitoid ecological model

International Nuclear Information System (INIS)

Yu Hengguo; Zhao Min; Lv Songjuan; Zhu Lili

2009-01-01

Chaotic dynamics have been observed in a wide range of population models. In this study, the complex dynamics in a discrete-time ecological model of parasitoid-host-parasitoid are presented. The model shows that the superiority coefficient not only stabilizes the dynamics, but may strongly destabilize them as well. Many forms of complex dynamics were observed, including pitchfork bifurcation with quasi-periodicity, period-doubling cascade, chaotic crisis, chaotic bands with narrow or wide periodic window, intermittent chaos, and supertransient behavior. Furthermore, computation of the largest Lyapunov exponent demonstrated the chaotic dynamic behavior of the model
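The largest Lyapunov exponent mentioned above is, for a one-dimensional discrete map, the orbit average of log|f'(x)|. The sketch below uses the logistic map as a stand-in for the parasitoid-host map of the paper:

```python
# Estimate the largest Lyapunov exponent of a 1D discrete map by averaging
# log|df/dx| along an orbit, here for the logistic map x -> r x (1 - x).
import math

def lyapunov_logistic(r, x0=0.3, n_transient=1000, n_iter=10000):
    x = x0
    for _ in range(n_transient):                 # discard the transient
        x = r * x * (1 - x)
    total = 0.0
    for _ in range(n_iter):
        total += math.log(abs(r * (1 - 2 * x)))  # log |df/dx| at x
        x = r * x * (1 - x)
    return total / n_iter

print(lyapunov_logistic(4.0))   # chaotic regime: exponent near ln 2 ~ 0.693
print(lyapunov_logistic(3.2))   # period-2 regime: exponent is negative
```

A positive exponent signals chaos, a negative one a stable periodic window, which is how the papers above distinguish the regimes produced by the superiority coefficient.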

5. Dynamic complexities in a parasitoid-host-parasitoid ecological model

Energy Technology Data Exchange (ETDEWEB)

Yu Hengguo [School of Mathematic and Information Science, Wenzhou University, Wenzhou, Zhejiang 325035 (China); Zhao Min [School of Life and Environmental Science, Wenzhou University, Wenzhou, Zhejiang 325027 (China)], E-mail: zmcn@tom.com; Lv Songjuan; Zhu Lili [School of Mathematic and Information Science, Wenzhou University, Wenzhou, Zhejiang 325035 (China)

2009-01-15

Chaotic dynamics have been observed in a wide range of population models. In this study, the complex dynamics in a discrete-time ecological model of parasitoid-host-parasitoid are presented. The model shows that the superiority coefficient not only stabilizes the dynamics, but may strongly destabilize them as well. Many forms of complex dynamics were observed, including pitchfork bifurcation with quasi-periodicity, period-doubling cascade, chaotic crisis, chaotic bands with narrow or wide periodic window, intermittent chaos, and supertransient behavior. Furthermore, computation of the largest Lyapunov exponent demonstrated the chaotic dynamic behavior of the model.

6. Modeling the influence of atmospheric leading modes on the variability of the Arctic freshwater cycle

Science.gov (United States)

Niederdrenk, L.; Sein, D.; Mikolajewicz, U.

2013-12-01

Global general circulation models show remarkable differences in modeling the Arctic freshwater cycle. While they agree on the general sinks and sources of the freshwater budget, they differ largely in the magnitude of the mean values as well as in the variability of the freshwater terms. Regional models can better resolve the complex topography and small scale processes, but they are often uncoupled, thus missing the air-sea interaction. Additionally, regional models mostly use some kind of salinity restoring or flux correction, thus disturbing the freshwater budget. Our approach to investigate the Arctic hydrologic cycle and its variability is a regional atmosphere-ocean model setup, consisting of the global ocean model MPIOM with high resolution in the Arctic coupled to the regional atmosphere model REMO. The domain of the atmosphere model covers all catchment areas of the rivers draining into the Arctic. To account for all sinks and sources of freshwater in the Arctic, we include a discharge model providing terrestrial lateral waterflows. We run the model without salinity restoring but with freshwater correction, which is set to zero in the Arctic. This allows for the analysis of a closed freshwater budget in the Artic region. We perform experiments for the second half of the 20th century and use data from the global model MPIOM/ECHAM5 performed with historical conditions, that was used within the 4th Assessment Report of the IPCC, as forcing for our regional model. With this setup, we investigate how the dominant modes of large-scale atmospheric variability impact the variability in the freshwater components. We focus on the two leading empirical orthogonal functions of winter mean sea level pressure, as well as on the North Atlantic Oscillation and the Siberian High. These modes have a large impact on the Arctic Ocean circulation as well as on the solid and liquid export through Fram Strait and through the Canadian archipelago. However, they cannot explain

7. Applying nonlinear MODM model to supply chain management with quantity discount policy under complex fuzzy environment

Directory of Open Access Journals (Sweden)

Zhe Zhang

2014-06-01

Full Text Available Purpose: The aim of this paper is to deal with supply chain management (SCM) with a quantity discount policy under a complex fuzzy environment, characterized by bi-fuzzy variables. Taking into account the strategy and the process of decision making, a bi-fuzzy nonlinear multiple objective decision making (MODM) model is presented to solve the proposed problem. Design/methodology/approach: The bi-fuzzy variables in the MODM model are transformed into trapezoidal fuzzy variables by the DMs' degrees of optimism ?1 and ?2, which are subsequently de-fuzzified by the expected value index. For solving the complex nonlinear model, a multi-objective adaptive particle swarm optimization algorithm (MO-APSO) is designed as the solution method. Findings: The proposed model and algorithm are applied to a typical example of an SCM problem to illustrate their effectiveness. Based on a sensitivity analysis of the results, the bi-fuzzy nonlinear MODM SCM model is shown to be sensitive to the possibility level ?1. Practical implications: The study focuses on SCM under a complex fuzzy environment, which has great practical significance. The bi-fuzzy MODM model and MO-APSO can therefore be further applied to SCM problems with a quantity discount policy. Originality/value: Bi-fuzzy variables are employed in the nonlinear MODM model of SCM to characterize the hybrid uncertain environment, and this work is original. In addition, a hybrid crisp approach is proposed to transform the model into an equivalent crisp one via the DMs' degrees of optimism and the expected value index. Since the MODM model considers both the bi-fuzzy environment and the quantity discount policy, this paper has great practical significance.
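The de-fuzzification step mentioned above can be illustrated for a single trapezoidal fuzzy variable: under the credibility-based expected value index, a trapezoidal fuzzy number (a, b, c, d) has expected value (a + b + c + d) / 4. This is a minimal sketch with made-up numbers, not the paper's bi-fuzzy transformation.

```python
# Expected value index for a trapezoidal fuzzy variable (a, b, c, d):
# E = (a + b + c + d) / 4 under the credibility-based expected value.

def expected_value(trap):
    a, b, c, d = trap
    return (a + b + c + d) / 4.0

# e.g. a fuzzy unit cost: "most plausibly 10 to 12, certainly within 9 to 13"
print(expected_value((9.0, 10.0, 12.0, 13.0)))  # -> 11.0
```

Applying this index to every fuzzy coefficient replaces the fuzzy program with an equivalent crisp one, which is the role it plays in the hybrid crisp approach described above.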

8. On the explaining-away phenomenon in multivariate latent variable models.

Science.gov (United States)

van Rijn, Peter; Rijmen, Frank

2015-02-01

Many probabilistic models for psychological and educational measurements contain latent variables. Well-known examples are factor analysis, item response theory, and latent class model families. We discuss what is referred to as the 'explaining-away' phenomenon in the context of such latent variable models. This phenomenon can occur when multiple latent variables are related to the same observed variable, and can elicit seemingly counterintuitive conditional dependencies between latent variables given observed variables. We illustrate the implications of explaining away for a number of well-known latent variable models by using both theoretical and real data examples. © 2014 The British Psychological Society.
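The explaining-away phenomenon can be demonstrated with the smallest possible example: two independent binary latent causes of one observed variable. The noisy-OR parameters below are made up for illustration.

```python
# Explaining away: latent causes L1, L2 are marginally independent, but
# conditioning on their common observed effect Y = 1 makes them negatively
# dependent, so learning L2 = 1 lowers the posterior belief in L1.
from itertools import product

p1, p2 = 0.3, 0.3                      # priors P(L1 = 1), P(L2 = 1)

def p_y1(l1, l2):
    # noisy-OR: each active cause fails to trigger Y with prob 0.9; leak 0.01
    return 1.0 - 0.99 * (0.9 ** l1) * (0.9 ** l2)

# unnormalized posterior over (L1, L2) given the observation Y = 1
joint = {(l1, l2): (p1 if l1 else 1 - p1) * (p2 if l2 else 1 - p2) * p_y1(l1, l2)
         for l1, l2 in product((0, 1), repeat=2)}
z = sum(joint.values())
post = {k: v / z for k, v in joint.items()}       # P(L1, L2 | Y = 1)

p_l1 = post[(1, 0)] + post[(1, 1)]                # P(L1 = 1 | Y = 1)
p_l1_given_l2 = post[(1, 1)] / (post[(0, 1)] + post[(1, 1)])  # ...and L2 = 1
print(p_l1, p_l1_given_l2)   # the conditional probability is smaller
```

The same conditional dependency structure arises whenever multiple latent variables load on one observed indicator, which is the situation the paper analyzes for factor, IRT, and latent class models.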

9. A marketing mix model for a complex and turbulent environment

Directory of Open Access Journals (Sweden)

R. B. Mason

2007-12-01

Full Text Available Purpose: This paper is based on the proposition that the choice of marketing tactics is determined, or at least significantly influenced, by the nature of the company's external environment. It aims to illustrate the type of marketing mix tactics that are suggested for a complex and turbulent environment when marketing and the environment are viewed through a chaos and complexity theory lens. Design/Methodology/Approach: Since chaos and complexity theories are proposed as a good means of understanding the dynamics of complex and turbulent markets, a comprehensive review and analysis of literature on the marketing mix and marketing tactics from a chaos and complexity viewpoint was conducted. From this literature review, a marketing mix model was conceptualised. Findings: A marketing mix model considered appropriate for success in complex and turbulent environments was developed. In such environments, the literature suggests destabilising marketing activities are more effective, whereas stabilising activities are more effective in simple, stable environments. The model therefore proposes predominantly destabilising tactics as appropriate for a complex and turbulent environment such as is currently being experienced in South Africa. Implications: This paper is of benefit to marketers by emphasising a new way to consider the future marketing activities of their companies. How this model can assist marketers, and suggestions for research to develop and apply it, are provided. It is hoped that the model suggested will form the basis of empirical research to test its applicability in the turbulent South African environment. Originality/Value: Since businesses and markets are complex adaptive systems, using complexity theory to understand how to cope in complex, turbulent environments is necessary, but has not been widely researched. In fact, most chaos and complexity theory work in marketing has concentrated on marketing strategy, with

10. Using multiple biomarkers and determinants to obtain a better measurement of oxidative stress: a latent variable structural equation model approach.

Science.gov (United States)

Eldridge, Ronald C; Flanders, W Dana; Bostick, Roberd M; Fedirko, Veronika; Gross, Myron; Thyagarajan, Bharat; Goodman, Michael

2017-09-01

Since oxidative stress involves a variety of cellular changes, no single biomarker can serve as a complete measure of this complex biological process. The analytic technique of structural equation modeling (SEM) provides a possible solution to this problem by modelling a latent (unobserved) variable constructed from the covariance of multiple biomarkers. Using three pooled datasets, we modelled a latent oxidative stress variable from five biomarkers related to oxidative stress: F2-isoprostanes (FIP), fluorescent oxidation products, mitochondrial DNA copy number, γ-tocopherol (Gtoc) and C-reactive protein (CRP, an inflammation marker closely linked to oxidative stress). We validated the latent variable by assessing its relation to pro- and anti-oxidant exposures. FIP, Gtoc and CRP characterized the latent oxidative stress variable. Obesity, smoking, aspirin use and β-carotene were statistically significantly associated with oxidative stress in the theorized directions; the same exposures were weakly and inconsistently associated with the individual biomarkers. Our results suggest that using SEM with latent variables decreases the biomarker-specific variability, and may produce a better measure of oxidative stress than do single variables. This methodology can be applied to similar areas of research in which a single biomarker is not sufficient to fully describe a complex biological phenomenon.

11. Total Variability Modeling using Source-specific Priors

DEFF Research Database (Denmark)

Shepstone, Sven Ewan; Lee, Kong Aik; Li, Haizhou

2016-01-01

sequence of an utterance. In both cases the prior for the latent variable is assumed to be non-informative, since for homogeneous datasets there is no gain in generality in using an informative prior. This work shows, in the heterogeneous case, that using informative priors for computing the posterior can lead to favorable results. We focus on modeling the priors using a minimum divergence criterion or factor analysis techniques. Tests on the NIST 2008 and 2010 Speaker Recognition Evaluation (SRE) datasets show that our proposed method beats four baselines: for i-vector extraction using an already trained matrix, for the short2-short3 task in SRE'08, five out of eight female and four out of eight male common conditions were improved. For the core-extended task in SRE'10, four out of nine female and six out of nine male common conditions were improved. When incorporating prior information...

12. Generalized complex geometry, generalized branes and the Hitchin sigma model

International Nuclear Information System (INIS)

Zucchini, Roberto

2005-01-01

Hitchin's generalized complex geometry has been shown to be relevant in compactifications of superstring theory with fluxes and is expected to lead to a deeper understanding of mirror symmetry. Gualtieri's notion of generalized complex submanifold seems to be a natural candidate for the description of branes in this context. Recently, we introduced a Batalin-Vilkovisky field theoretic realization of generalized complex geometry, the Hitchin sigma model, extending the well known Poisson sigma model. In this paper, exploiting Gualtieri's formalism, we incorporate branes into the model. A detailed study of the boundary conditions obeyed by the world sheet fields is provided. Finally, it is found that, when branes are present, the classical Batalin-Vilkovisky cohomology contains an extra sector that is related non trivially to a novel cohomology associated with the branes as generalized complex submanifolds. (author)

13. Reassessing Geophysical Models of the Bushveld Complex in 3D

Science.gov (United States)

Cole, J.; Webb, S. J.; Finn, C.

2012-12-01

Conceptual geophysical models of the Bushveld Igneous Complex show three possible geometries for its mafic component: 1) Separate intrusions with vertical feeders for the eastern and western lobes (Cousins, 1959) 2) Separate dipping sheets for the two lobes (Du Plessis and Kleywegt, 1987) 3) A single saucer-shaped unit connected at depth in the central part between the two lobes (Cawthorn et al, 1998) Model three incorporates isostatic adjustment of the crust in response to the weight of the dense mafic material. The model was corroborated by results of a broadband seismic array over southern Africa, known as the Southern African Seismic Experiment (SASE) (Nguuri, et al, 2001; Webb et al, 2004). This new information about the crustal thickness only became available in the last decade and could not be considered in the earlier models. Nevertheless, there is still ongoing debate as to which model is correct. All of the models published up to now have been done in 2 or 2.5 dimensions. This is not well suited to modelling the complex geometry of the Bushveld intrusion. 3D modelling takes into account effects of variations in geometry and geophysical properties of lithologies in a full three dimensional sense and therefore affects the shape and amplitude of calculated fields. The main question is how the new knowledge of the increased crustal thickness, as well as the complexity of the Bushveld Complex, will impact on the gravity fields calculated for the existing conceptual models, when modelling in 3D. The three published geophysical models were remodelled using full 3D potential field modelling software, and including crustal thickness obtained from the SASE. The aim was not to construct very detailed models, but to test the existing conceptual models in an equally conceptual way. Firstly, a specific 2D model was recreated in 3D, without crustal thickening, to establish the difference between 2D and 3D results. Then the thicker crust was added. Including the less
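For intuition about how a body's 3D geometry maps into a surface gravity anomaly, a closed-form toy forward model is useful. The sphere below is not a Bushveld geometry and the depth, radius and density contrast are invented round numbers; real work of the kind described uses full 3D potential-field software over gridded bodies:

```python
import numpy as np

G = 6.674e-11  # gravitational constant, m^3 kg^-1 s^-2

def sphere_gz(x, depth=5e3, radius=2e3, drho=300.0):
    """Vertical gravity anomaly (m/s^2) of a buried sphere along a profile x.

    Closed-form response of the simplest 3D body: a point excess mass at
    the sphere's centre. Parameters are illustrative, not Bushveld values.
    """
    mass = (4.0 / 3.0) * np.pi * radius ** 3 * drho          # excess mass, kg
    return G * mass * depth / (x ** 2 + depth ** 2) ** 1.5

x = np.linspace(-20e3, 20e3, 201)          # profile from -20 km to +20 km
gz_mgal = sphere_gz(x) * 1e5               # 1 mGal = 1e-5 m/s^2
print(f"peak anomaly: {gz_mgal.max():.2f} mGal "
      f"at x = {x[np.argmax(gz_mgal)] / 1e3:.0f} km")
```

The peak sits directly above the body and the anomaly decays with the cube of distance, which is why the geometry assumed at depth (separate feeders versus a connected saucer) changes the calculated field so strongly.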

14. Structurally related hydrazone-based metal complexes with different antitumor activities variably induce apoptotic cell death.

Science.gov (United States)

Megger, Dominik A; Rosowski, Kristin; Radunsky, Christian; Kösters, Jutta; Sitek, Barbara; Müller, Jens

2017-04-05

Three new complexes bearing the tridentate hydrazone-based ligand 2-(2-(1-(pyridin-2-yl)ethylidene)hydrazinyl)pyridine (L) were synthesized and structurally characterized. Biological tests indicate that the Zn(ii) complex [ZnCl2(L)] is of low cytotoxicity against the hepatocellular carcinoma cell line HepG2. In contrast, the Cu(ii) and Mn(ii) complexes [CuCl2(L)] and [MnCl2(L)] are highly cytotoxic with EC50 values of 1.25 ± 0.01 μM and 20 ± 1 μM, respectively. A quantitative proteome analysis reveals that treatment of the cells with the Cu(ii) complex leads to a significantly altered abundance of 102 apoptosis-related proteins, whereas 38 proteins were up- or down-regulated by the Mn(ii) complex. A closer inspection of those proteins regulated only by the Cu(ii) complex suggests that the superior cytotoxic activity of this complex is likely to be related to an initiation of the caspase-independent cell death (CICD). In addition, an increased generation of reactive oxygen species (ROS) and a strong up-regulation of proteins responsive to oxidative stress suggest that alterations of the cellular redox metabolism likely contribute to the cytotoxicity of the Cu(ii) complex.

15. Persistence of urban organic aerosols composition: Decoding their structural complexity and seasonal variability.

Science.gov (United States)

Matos, João T V; Duarte, Regina M B O; Lopes, Sónia P; Silva, Artur M S; Duarte, Armando C

2017-12-01

Organic Aerosols (OAs) are typically defined as highly complex matrices whose composition changes in time and space. Focusing on time vector, this work uses two-dimensional nuclear magnetic resonance (2D NMR) techniques to examine the structural features of water-soluble (WSOM) and alkaline-soluble organic matter (ASOM) sequentially extracted from fine atmospheric aerosols collected in an urban setting during cold and warm seasons. This study reveals molecular signatures not previously decoded in NMR-related studies of OAs as meaningful source markers. Although the ASOM is less hydrophilic and structurally diverse than its WSOM counterpart, both fractions feature a core with heteroatom-rich branched aliphatics from both primary (natural and anthropogenic) and secondary origin, aromatic secondary organics originated from anthropogenic aromatic precursors, as well as primary saccharides and amino sugar derivatives from biogenic emissions. These common structures represent those 2D NMR spectral signatures that are present in both seasons and can thus be seen as an "annual background" profile of the structural composition of OAs at the urban location. Lignin-derived structures, nitroaromatics, disaccharides, and anhydrosaccharides signatures were also identified in the WSOM samples only from periods identified as smoke impacted, which reflects the influence of biomass-burning sources. The NMR dataset on the H-C molecules backbone was also used to propose a semi-quantitative structural model of urban WSOM, which will aid efforts for more realistic studies relating the chemical properties of OAs with their atmospheric behavior. Copyright © 2017 Elsevier Ltd. All rights reserved.

16. Complexity and variability of gut commensal microbiota in polyphagous lepidopteran larvae.

Directory of Open Access Journals (Sweden)

Xiaoshu Tang

Full Text Available BACKGROUND: The gut of most insects harbours nonpathogenic microorganisms. Recent work suggests that gut microbiota not only provide nutrients but are also involved in the development and maintenance of the host immune system. However, the complexity, dynamics and types of interactions between the insect hosts and their gut microbiota are far from being well understood. METHODS/PRINCIPAL FINDINGS: To determine the composition of the gut microbiota of two lepidopteran pests, Spodoptera littoralis and Helicoverpa armigera, we applied cultivation-independent techniques based on 16S rRNA gene sequencing and microarray. The two insect species were very similar regarding high abundant bacterial families. Different bacteria colonize different niches within the gut. A core community, consisting of Enterococci, Lactobacilli, Clostridia, etc., was revealed in the insect larvae. These bacteria are constantly present in the digestive tract at relatively high frequency, even though developmental stage and diet have a great impact on shaping the bacterial communities. Some low-abundant species might become dominant under external disturbances; the core community, however, did not change significantly. Clearly the insect gut selects for particular bacterial phylotypes. CONCLUSIONS: Because of their importance as agricultural pests, phytophagous Lepidopterans are widely used as experimental models in ecological and physiological studies. Our results demonstrated that a core microbial community exists in the insect gut, which may contribute to the host physiology. Host physiology and food, nevertheless, significantly influence some fringe bacterial species in the gut. The gut microbiota might also serve as a reservoir of microorganisms for ever-changing environments. Understanding these interactions might pave the way for developing novel pest control strategies.

17. Improved variable reduction in partial least squares modelling by Global-Minimum Error Uninformative-Variable Elimination.

Science.gov (United States)

Andries, Jan P M; Vander Heyden, Yvan; Buydens, Lutgarde M C

2017-08-22

The calibration performance of Partial Least Squares regression (PLS) can be improved by eliminating uninformative variables. For PLS, many variable elimination methods have been developed. One is the Uninformative-Variable Elimination for PLS (UVE-PLS). However, the number of variables retained by UVE-PLS is usually still large. In UVE-PLS, variable elimination is repeated as long as the root mean squared error of cross validation (RMSECV) is decreasing. The set of variables in this first local minimum is retained. In this paper, a modification of UVE-PLS is proposed and investigated, in which UVE is repeated until no further reduction in variables is possible, followed by a search for the global RMSECV minimum. The method is called Global-Minimum Error Uninformative-Variable Elimination for PLS, denoted as GME-UVE-PLS or simply GME-UVE. After each iteration, the predictive ability of the PLS model, built with the remaining variable set, is assessed by RMSECV. The variable set with the global RMSECV minimum is then finally selected. The goal is to obtain smaller sets of variables with similar or improved predictability than those from the classical UVE-PLS method. The performance of the GME-UVE-PLS method is investigated using four data sets, i.e., a simulated set, NIR and NMR spectra, and a theoretical molecular descriptors set, resulting in twelve profile-response (X-y) calibrations. The selective and predictive performances of the models resulting from GME-UVE-PLS are statistically compared to those from UVE-PLS and 1-step UVE using one-sided paired t-tests. The results demonstrate that variable reduction with the proposed GME-UVE-PLS method usually eliminates significantly more variables than the classical UVE-PLS, while the predictive abilities of the resulting models are better. With GME-UVE-PLS, a lower number of uninformative variables, without a chemical meaning for the response, may be retained than with UVE-PLS. The selectivity of the classical UVE method

18. Examples of EOS Variables as compared to the UMM-Var Data Model

Science.gov (United States)

Cantrell, Simon; Lynnes, Chris

2016-01-01

In an effort to provide EOSDIS clients a way to discover and use variable data from different providers, a Unified Metadata Model for Variables is being created. This presentation gives an overview of the model and use cases we are handling.

19. Complexation and molecular modeling studies of europium(III)-gallic acid-amino acid complexes.

Science.gov (United States)

Taha, Mohamed; Khan, Imran; Coutinho, João A P

2016-04-01

With many metal-based drugs extensively used today in the treatment of cancer, attention has focused on the development of new coordination compounds with antitumor activity, with europium(III) complexes recently introduced as novel anticancer drugs. The aim of this work is to design new Eu(III) complexes with gallic acid, an antioxidant phenolic compound. Gallic acid was chosen because it shows anticancer activity without harming healthy cells. As an antioxidant, it helps to protect human cells against the oxidative damage that is implicated in DNA damage, cancer, and accelerated cell aging. In this work, the formation of binary and ternary complexes of Eu(III) with gallic acid, the primary ligand, and the amino acids alanine, leucine, isoleucine, and tryptophan was studied by glass electrode potentiometry in aqueous solution containing 0.1 M NaNO3 at (298.2 ± 0.1) K. Their overall stability constants were evaluated and the concentration distributions of the complex species in solution were calculated. The protonation constants of gallic acid and the amino acids were also determined under our experimental conditions and compared with those predicted using the conductor-like screening model for realistic solvation (COSMO-RS). The geometries of the Eu(III)-gallic acid complexes were characterized by density functional theory (DFT). Spectroscopic UV-visible and photoluminescence measurements were carried out to confirm the formation of Eu(III)-gallic acid complexes in aqueous solutions. Copyright © 2016 Elsevier Inc. All rights reserved.
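Speciation calculations of the kind described rest on protonation constants. A minimal sketch, assuming a single monoprotic site: the fraction of a ligand in its deprotonated (metal-binding) form follows directly from the protonation constant via alpha = 1 / (1 + 10^(pKa − pH)). The pKa of 4.4 below is an illustrative textbook-style value for the first deprotonation of gallic acid, not a constant measured in the study:

```python
import numpy as np

pKa = 4.4  # illustrative value, not from the paper
pH = np.linspace(2.0, 10.0, 5)

# Fraction of the ligand deprotonated at each pH (monoprotic approximation)
alpha = 1.0 / (1.0 + 10.0 ** (pKa - pH))
for p, a in zip(pH, alpha):
    print(f"pH {p:4.1f}: deprotonated fraction = {a:.3f}")
```

At pH = pKa the fraction is exactly 0.5; the steep rise around the pKa is why concentration distributions of complex species are so sensitive to accurately determined protonation constants.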

20. Foundations for Streaming Model Transformations by Complex Event Processing.

Science.gov (United States)

Dávid, István; Ráth, István; Varró, Dániel

2018-01-01

Streaming model transformations represent a novel class of transformations to manipulate models whose elements are continuously produced or modified in high volume and with rapid rate of change. Executing streaming transformations requires efficient techniques to recognize activated transformation rules over a live model and a potentially infinite stream of events. In this paper, we propose foundations of streaming model transformations by innovatively integrating incremental model query, complex event processing (CEP) and reactive (event-driven) transformation techniques. Complex event processing makes it possible to identify relevant patterns and sequences of events over an event stream. Our approach enables event streams to include model change events which are automatically and continuously populated by incremental model queries. Furthermore, a reactive rule engine carries out transformations on identified complex event patterns. We provide an integrated domain-specific language with precise semantics for capturing complex event patterns and streaming transformations together with an execution engine, all of which is now part of the Viatra reactive transformation framework. We demonstrate the feasibility of our approach with two case studies: one in an advanced model engineering workflow; and one in the context of on-the-fly gesture recognition.
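The core CEP idea, recognizing a pattern over a live event stream within a bounded window, can be sketched in a few lines. The event names below are invented for illustration and Viatra's CEP language is far richer; this only shows the windowed sequence-matching mechanism:

```python
from collections import deque

def detect(stream, first, second, window=3):
    """Report (i, j) index pairs where `first` is followed by `second`
    within the last `window` events -- a toy complex-event pattern."""
    matches = []
    recent = deque(maxlen=window)  # sliding window over the stream
    for i, event in enumerate(stream):
        recent.append((i, event))
        if event == second:
            for j, e in recent:
                if e == first and j < i:
                    matches.append((j, i))
                    break
    return matches

stream = ["modelCreated", "attrChanged", "modelDeleted",
          "modelCreated", "attrChanged", "attrChanged", "modelDeleted"]
# The first created/deleted pair is close enough; the second falls
# outside the 3-event window, so only one match is reported.
print(detect(stream, "modelCreated", "modelDeleted"))
```

In a reactive setup, a rule engine would fire a transformation whenever `detect` yields a match, rather than collecting matches after the fact.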

1. Universal correlators for multi-arc complex matrix models

International Nuclear Information System (INIS)

Akemann, G.

1997-01-01

The correlation functions of the multi-arc complex matrix model are shown to be universal for any finite number of arcs. The universality classes are characterized by the support of the eigenvalue density and are conjectured to fall into the same classes as the ones recently found for the Hermitian model. This is explicitly shown to be true for the case of two arcs, apart from the known result for one arc. The basic tool is the iterative solution of the loop equation for the complex matrix model with multiple arcs, which provides all multi-loop correlators up to an arbitrary genus. Explicit results for genus one are given for any number of arcs. The two-arc solution is investigated in detail, including the double-scaling limit. In addition universal expressions for the string susceptibility are given for both the complex and Hermitian model. (orig.)

2. Unified models of interactions with gauge-invariant variables

International Nuclear Information System (INIS)

Zet, Gheorghe

2000-01-01

A model of gauge theory is formulated in terms of gauge-invariant variables over a 4-dimensional space-time. Namely, we define a metric tensor g_μν (μ, ν = 0,1,2,3) starting with the components F^a_μν of the tensor associated to the Yang-Mills fields and of its dual F̃^a_μν: g_μν = (1/(3Δ^(1/3))) ε_abc F^a_μα F̃^b_αβ F^c_βν. Here Δ is a scale factor which can be chosen of a convenient form so that the theory may be self-dual or not. The components g_μν are interpreted as new gauge-invariant variables. The model is applied to the case when the gauge group is SU(2). For the space-time we choose two different manifolds: (i) the space-time is R × S^3, where R is the real line and S^3 is the three-dimensional sphere; (ii) the space-time is endowed with axial symmetry. We calculate the components g_μν of the new metric for the two cases in terms of SU(2) gauge potentials. Imposing the supplementary condition that the new metric coincides with the initial metric of the space-time, we obtain the field equations (of the first order in derivatives) for the gauge fields. In addition, we determine the scale factor Δ which is introduced in the definition of g_μν to ensure the property of self-duality for our SU(2) gauge theory, namely (1/(2√g)) ε^αβστ g_μα g_νβ F^a_στ = F^a_μν, with g = det(g_μν). In the case (i) we show that the space-time R × S^3 is not compatible with a self-dual SU(2) gauge theory, but in the case (ii) the condition of self-duality is satisfied. The model developed in our work can be considered as a possible way toward the unification of general relativity and Yang-Mills theories. This means that the gauge theory can be formulated in close analogy with general relativity, i.e. the Yang-Mills equations are equivalent to Einstein equations with a right-hand side of a simple form. (authors)
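The defining contraction can be checked numerically. The sketch below builds three random antisymmetric F^a matrices, takes a flat-space dual (Euclidean signature and Δ = 1 are simplifying assumptions of this illustration, not the paper's setup), and verifies that the resulting g_μν comes out symmetric, as a metric must:

```python
import itertools
import numpy as np

def levi_civita(n):
    """Totally antisymmetric symbol as an n-dimensional array."""
    eps = np.zeros((n,) * n)
    for perm in itertools.permutations(range(n)):
        inv = sum(1 for i in range(n) for j in range(i + 1, n) if perm[i] > perm[j])
        eps[perm] = -1.0 if inv % 2 else 1.0
    return eps

rng = np.random.default_rng(4)
eps4, eps3 = levi_civita(4), levi_civita(3)

# Three random antisymmetric "field strength" matrices F^a_{mu nu}, a = 1..3
A = rng.normal(size=(3, 4, 4))
F = A - A.transpose(0, 2, 1)

# Flat-index dual (Euclidean signature assumed): Ftilde_{mu nu} = 1/2 eps_{mu nu rho si} F^{rho si}
Ft = 0.5 * np.einsum("mnrs,ars->amn", eps4, F)

delta = 1.0  # scale factor Delta, set to 1 for this numerical illustration
# g_{mu nu} = (1 / (3 Delta^{1/3})) eps_{abc} F^a_{mu al} Ftilde^b_{al be} F^c_{be nu}
g = np.einsum("abc,amx,bxy,cyn->mn", eps3, F, Ft, F) / (3.0 * delta ** (1.0 / 3.0))

print("g symmetric:", np.allclose(g, g.T))
```

Symmetry follows from the antisymmetry of each factor together with the antisymmetry of ε_abc, so the check passes for any choice of the F^a.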

3. White dwarf models of supernovae and cataclysmic variables

International Nuclear Information System (INIS)

Nomoto, K.; Hashimoto, M.

1986-01-01

If the accreting white dwarf increases its mass to the Chandrasekhar mass, it will either explode as a Type I supernova or collapse to form a neutron star. In fact, there is a good agreement between the exploding white dwarf model for Type I supernovae and observations. We describe various types of evolution of accreting white dwarfs as a function of binary parameters (i.e., composition, mass, and age of the white dwarf, its companion star, and mass accretion rate), and discuss the conditions for the precursors of exploding or collapsing white dwarfs, and their relevance to cataclysmic variables. Particular attention is given to helium star cataclysmics which might be the precursors of some Type I supernovae or ultrashort period x-ray binaries. Finally we present new evolutionary calculations using the updated nuclear reaction rates for the formation of O+Ne+Mg white dwarfs, and discuss the composition structure and their relevance to the model for neon novae. 61 refs., 14 figs

4. Multidecadal Variability in Surface Albedo Feedback Across CMIP5 Models

Science.gov (United States)

Schneider, Adam; Flanner, Mark; Perket, Justin

2018-02-01

Previous studies quantify surface albedo feedback (SAF) in climate change, but few assess its variability on decadal time scales. Using the Coupled Model Intercomparison Project Version 5 (CMIP5) multimodel ensemble data set, we calculate time evolving SAF in multiple decades from surface albedo and temperature linear regressions. Results are meaningful when temperature change exceeds 0.5 K. Decadal-scale SAF is strongly correlated with century-scale SAF during the 21st century. Throughout the 21st century, multimodel ensemble mean SAF increases from 0.37 to 0.42 W m-2 K-1. These results suggest that models' mean decadal-scale SAFs are good estimates of their century-scale SAFs if there is at least 0.5 K temperature change. Persistent SAF into the late 21st century indicates ongoing capacity for Arctic albedo decline despite there being less sea ice. If the CMIP5 multimodel ensemble results are representative of the Earth, we cannot expect decreasing Arctic sea ice extent to suppress SAF in the 21st century.
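The decadal-scale feedback described here is, at heart, a linear regression of surface albedo on temperature over a decade. The sketch below uses synthetic numbers (trend, noise levels and the albedo sensitivity are invented, and the slope is left as an albedo change per kelvin rather than converted to W m-2 K-1, which would require a radiative kernel):

```python
import numpy as np

rng = np.random.default_rng(2)
years = np.arange(2070, 2080)

# Synthetic decade: a warming trend plus interannual noise (values invented)
T = 0.1 * (years - years[0]) + rng.normal(scale=0.15, size=years.size)   # K anomaly
albedo = 0.45 - 0.01 * T + rng.normal(scale=0.002, size=years.size)      # fraction

# Decadal-scale feedback estimated as the least-squares slope of albedo on T
slope, intercept = np.polyfit(T, albedo, 1)
print(f"d(albedo)/dT = {slope:.4f} per K")
```

The abstract's caveat that results are only meaningful when temperature change exceeds 0.5 K reflects exactly this setup: with too little spread in T, the regression slope is dominated by interannual noise.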

5. Modeling of water and solute transport under variably saturated conditions: state of the art

International Nuclear Information System (INIS)

Lappala, E.G.

1980-01-01

This paper reviews the equations used in deterministic models of mass and energy transport in variably saturated porous media. Analytic, quasi-analytic, and numerical solution methods to the nonlinear forms of transport equations are discussed with respect to their advantages and limitations. The factors that influence the selection of a modeling method are discussed in this paper; they include the following: (1) the degree of coupling required among the equations describing the transport of liquids, gases, solutes, and energy; (2) the inclusion of an advection term in the equations; (3) the existence of sharp fronts; (4) the degree of nonlinearity and hysteresis in the transport coefficients and boundary conditions; (5) the existence of complex boundaries; and (6) the availability and reliability of data required by the models

6. Systems Engineering Metrics: Organizational Complexity and Product Quality Modeling

Science.gov (United States)

Mog, Robert A.

1997-01-01

Innovative organizational complexity and product quality models applicable to performance metrics for NASA-MSFC's Systems Analysis and Integration Laboratory (SAIL) missions and objectives are presented. An intensive research effort focuses on the synergistic combination of stochastic process modeling, nodal and spatial decomposition techniques, organizational and computational complexity, systems science and metrics, chaos, and proprietary statistical tools for accelerated risk assessment. This is followed by the development of a preliminary model, which is uniquely applicable and robust for quantitative purposes. Exercise of the preliminary model using a generic system hierarchy and the AXAF-I architectural hierarchy is provided. The Kendall test for positive dependence provides an initial verification and validation of the model. Finally, the research and development of the innovation is revisited, prior to peer review. This research and development effort results in near-term, measurable SAIL organizational and product quality methodologies, enhanced organizational risk assessment and evolutionary modeling results, and improved statistical quantification of SAIL productivity interests.

7. Modeling and Simulation of Variable Mass, Flexible Structures

Science.gov (United States)

Tobbe, Patrick A.; Matras, Alex L.; Wilson, Heath E.

2009-01-01

The advent of the new Ares I launch vehicle has highlighted the need for advanced dynamic analysis tools for variable mass, flexible structures. This system is composed of interconnected flexible stages or components undergoing rapid mass depletion through the consumption of solid or liquid propellant. In addition to large rigid body configuration changes, the system simultaneously experiences elastic deformations. In most applications, the elastic deformations are compatible with linear strain-displacement relationships and are typically modeled using the assumed modes technique. The deformation of the system is approximated through the linear combination of the products of spatial shape functions and generalized time coordinates. Spatial shape functions are traditionally composed of normal mode shapes of the system or even constraint modes and static deformations derived from finite element models of the system. Equations of motion for systems undergoing coupled large rigid body motion and elastic deformation have previously been derived through a number of techniques [1]. However, in these derivations, the mode shapes or spatial shape functions of the system components were considered constant. But with the Ares I vehicle, the structural characteristics of the system are changing with the mass of the system. Previous approaches to solving this problem involve periodic updates to the spatial shape functions or interpolation between shape functions based on system mass or elapsed mission time. These solutions often introduce misleading or even unstable numerical transients into the system. Plus, interpolation on a shape function is not intuitive. This paper presents an approach in which the shape functions are held constant and operate on the changing mass and stiffness matrices of the vehicle components. Each vehicle stage or component finite element model is broken into dry structure and propellant models. A library of propellant models is used to describe the

8. Complex groundwater flow systems as traveling agent models

Directory of Open Access Journals (Sweden)

Oliver López Corona

2014-10-01

Full Text Available Analyzing field data from pumping tests, we show that, as with many other natural phenomena, groundwater flow exhibits complex dynamics described by a 1/f power spectrum. This result is theoretically studied within an agent perspective. Using a traveling agent model, we prove that this statistical behavior emerges when the medium is complex. Some heuristic reasoning is provided to justify both spatial and dynamic complexity, as the result of the superposition of an infinite number of stochastic processes. Moreover, we show that this implies that non-Kolmogorovian probability is needed for its study, and provide a set of new partial differential equations for groundwater flow.
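A 1/f power spectrum is easy to generate and to diagnose numerically. The sketch below is not the authors' agent model: it simply shapes white noise in the frequency domain so that power falls off as 1/f, then recovers the spectral exponent from a log-log fit of the periodogram, the same kind of diagnostic one would apply to pumping-test time series:

```python
import numpy as np

rng = np.random.default_rng(3)
n = 2 ** 14

# Build a 1/f ("pink") signal by shaping white noise in the frequency domain:
# |X(f)| ~ f^(-1/2) so that power |X(f)|^2 ~ 1/f, with random phases.
freqs = np.fft.rfftfreq(n, d=1.0)
amplitude = np.zeros_like(freqs)
amplitude[1:] = 1.0 / np.sqrt(freqs[1:])
phases = rng.uniform(0.0, 2.0 * np.pi, freqs.size)
signal = np.fft.irfft(amplitude * np.exp(1j * phases), n)

# Diagnose the exponent: slope of log(power) versus log(frequency)
power = np.abs(np.fft.rfft(signal)) ** 2
slope, _ = np.polyfit(np.log(freqs[1:]), np.log(power[1:]), 1)
print(f"spectral slope ~ {slope:.2f}")
```

A slope near -1 on the log-log periodogram is the signature the abstract refers to; white noise would give a slope near 0 and a random walk near -2.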

9. Quantifying uncertainty due to internal variability using high-resolution regional climate model simulations

Science.gov (United States)

Gutmann, E. D.; Ikeda, K.; Deser, C.; Rasmussen, R.; Clark, M. P.; Arnold, J. R.

2015-12-01

The uncertainty in future climate predictions is as large as or larger than the mean climate change signal. As such, any predictions of future climate need to incorporate and quantify the sources of this uncertainty. One of the largest sources comes from the internal, chaotic, variability within the climate system itself. This variability has been approximated using the 30 ensemble members of the Community Earth System Model (CESM) large ensemble. Here we examine the wet and dry end members of this ensemble for cool-season precipitation in the Colorado Rocky Mountains with a set of high-resolution regional climate model simulations. We have used the Weather Research and Forecasting model (WRF) to simulate the periods 1990-2000, 2025-2035, and 2070-2080 on a 4km grid. These simulations show that the broad patterns of change depicted in CESM are inherited by the high-resolution simulations; however, the differences in the height and location of the mountains in the WRF simulation, relative to the CESM simulation, means that the location and magnitude of the precipitation changes are very different. We further show that high-resolution simulations with the Intermediate Complexity Atmospheric Research model (ICAR) predict a similar spatial pattern in the change signal as WRF for these ensemble members. We then use ICAR to examine the rest of the CESM Large Ensemble as well as the uncertainty in the regional climate model due to the choice of physics parameterizations.

10. Landscape structure control on soil CO2 efflux variability in complex terrain: Scaling from point observations to watershed scale fluxes

Science.gov (United States)

Diego A. Riveros-Iregui; Brian L. McGlynn

2009-01-01

We investigated the spatial and temporal variability of soil CO2 efflux across 62 sites of a 393-ha complex watershed of the northern Rocky Mountains. Growing season (83 day) cumulative soil CO2 efflux varied from ~300 to ~2000 g CO2 m-2, depending upon landscape position, with a median of 879.8 g CO2 m-2. Our findings revealed that highest soil CO2 efflux rates were...

11. Synchronization Experiments With A Global Coupled Model of Intermediate Complexity

Science.gov (United States)

Selten, Frank; Hiemstra, Paul; Shen, Mao-Lin

2013-04-01

In the super modeling approach, an ensemble of imperfect models is connected through nudging terms that nudge the solution of each model toward the solution of all other models in the ensemble. The goal is to obtain a synchronized state through a proper choice of connection strengths that closely tracks the trajectory of the true system. For the super modeling approach to be successful, the connections should be dense and strong enough for synchronization to occur. In this study we analyze the behavior of an ensemble of connected global atmosphere-ocean models of intermediate complexity. All atmosphere models are connected to the same ocean model through the surface fluxes of heat, water and momentum, and the ocean is integrated using weighted-average surface fluxes. In particular we analyze the degree of synchronization between the atmosphere models and the characteristics of the ensemble mean solution. The results are interpreted using a low order atmosphere-ocean toy model.
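Nudging-induced synchronization is easy to demonstrate on a low-order chaotic system. The sketch below connects two copies of the Lorenz system (standing in for the toy model mentioned; the connection strength and integration scheme are choices of this illustration, not the study's) and shows that, despite chaos and very different initial conditions, the nudging terms pull the trajectories together:

```python
import numpy as np

def lorenz(s, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    """Right-hand side of the Lorenz system."""
    x, y, z = s
    return np.array([sigma * (y - x), x * (rho - z) - y, x * y - beta * z])

def step(a, b, k, dt=0.001):
    """One Euler step of two mutually nudged Lorenz 'models'."""
    da = lorenz(a) + k * (b - a)   # nudging term pulls model a toward model b
    db = lorenz(b) + k * (a - b)   # and vice versa
    return a + dt * da, b + dt * db

k = 5.0                            # connection strength (k = 0 means uncoupled)
a = np.array([1.0, 1.0, 1.0])
b = np.array([-5.0, 4.0, 20.0])    # very different initial condition
for _ in range(20000):             # integrate for 20 time units
    a, b = step(a, b, k)
print(f"final separation: {np.linalg.norm(a - b):.2e}")
```

With the coupling switched off the chaotic trajectories diverge; with a sufficiently strong connection the separation decays toward zero, which is the synchronized state the super modeling approach relies on.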

12. Mathematical approaches for complexity/predictivity trade-offs in complex system models : LDRD final report.

Energy Technology Data Exchange (ETDEWEB)

Goldsby, Michael E.; Mayo, Jackson R.; Bhattacharyya, Arnab (Massachusetts Institute of Technology, Cambridge, MA); Armstrong, Robert C.; Vanderveen, Keith

2008-09-01

The goal of this research was to examine foundational methods, both computational and theoretical, that can improve the veracity of entity-based complex system models and increase confidence in their predictions for emergent behavior. The strategy was to seek insight and guidance from simplified yet realistic models, such as cellular automata and Boolean networks, whose properties can be generalized to production entity-based simulations. We have explored the usefulness of renormalization-group methods for finding reduced models of such idealized complex systems. We have prototyped representative models that are both tractable and relevant to Sandia mission applications, and quantified the effect of computational renormalization on the predictive accuracy of these models, finding good predictivity from renormalized versions of cellular automata and Boolean networks. Furthermore, we have theoretically analyzed the robustness properties of certain Boolean networks, relevant for characterizing organic behavior, and obtained precise mathematical constraints on systems that are robust to failures. In combination, our results provide important guidance for more rigorous construction of entity-based models, which currently are often devised in an ad-hoc manner. Our results can also help in designing complex systems with the goal of predictable behavior, e.g., for cybersecurity.
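Boolean networks of the kind studied here are small enough to analyze exhaustively. The toy example below (update rules invented for illustration) enumerates every start state of a 3-node network and follows it into its attractor, the kind of global behavior that robustness analyses and renormalization aim to preserve:

```python
import itertools

# A toy 3-node Boolean network: each node's next state is a fixed
# Boolean function of the current state vector (rules are illustrative).
rules = [
    lambda s: s[1] and not s[2],   # node 0
    lambda s: s[0] or s[2],        # node 1
    lambda s: not s[0],            # node 2
]

def step(state):
    return tuple(int(bool(r(state))) for r in rules)

def canonical(cycle):
    # Rotate a cycle to a canonical form so the same attractor compares equal
    return min(tuple(cycle[i:] + cycle[:i]) for i in range(len(cycle)))

# Follow all 2^3 start states into their attractors (fixed points or cycles)
attractors = set()
for state in itertools.product((0, 1), repeat=3):
    seen = []
    while state not in seen:
        seen.append(state)
        state = step(state)
    attractors.add(canonical(seen[seen.index(state):]))
print(len(attractors), "attractors found")
```

This particular rule set yields two fixed points and one 2-cycle; a reduced (renormalized) model is judged by whether it reproduces this attractor structure rather than every transient.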

13. A consensus for the development of a vector model to assess clinical complexity.

Science.gov (United States)

Corazza, Gino Roberto; Klersy, Catherine; Formagnana, Pietro; Lenti, Marco Vincenzo; Padula, Donatella

2017-12-01

The progressive rise in multimorbidity has made management of complex patients one of the most topical and challenging issues in medicine, both in clinical practice and for healthcare organizations. To make this easier, a score of clinical complexity (CC) would be useful. A vector model to evaluate biological and extra-biological (socio-economic, cultural, behavioural, environmental) domains of CC was proposed a few years ago. However, given that the variables that grade each domain had never been defined, this model has never been used in clinical practice. To overcome these limits, a consensus meeting was organised to grade each domain of CC, and to establish the hierarchy of the domains. A one-day consensus meeting consisting of a multi-professional panel of 25 people was held at our Hospital. In a preliminary phase, the proponents selected seven variables as qualifiers for each of the five above-mentioned domains. In the course of the meeting, the panel voted for five variables considered to be the most representative for each domain. Consensus was established with 2/3 agreement, and all variables were dichotomised. Finally, the various domains were parametrized and ranked within a feasible vector model. A Clinical Complexity Index was set up using the chosen variables. All the domains were graphically represented through a vector model: the biological domain was chosen as the most significant (highest slope), followed by the behavioural and socio-economic domains (intermediate slope), and lastly by the cultural and environmental ones (lowest slope). A feasible and comprehensive tool to evaluate CC in clinical practice is proposed herein.
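The vector model reduces to a weighted sum once each domain's variables are dichotomised and the domains are ranked. The sketch below is hypothetical throughout: the integer weights merely encode the highest/intermediate/lowest slope ordering reported, and the 0/1 flags stand for the five unnamed dichotomised variables per domain:

```python
# Hypothetical weights encoding the reported ranking: biological highest,
# behavioural and socio-economic intermediate, cultural and environmental lowest.
DOMAIN_WEIGHTS = {
    "biological": 3,
    "behavioural": 2,
    "socio-economic": 2,
    "cultural": 1,
    "environmental": 1,
}

def complexity_index(flags):
    """flags: {domain: [five 0/1 dichotomised variables]} -> weighted score."""
    return sum(DOMAIN_WEIGHTS[d] * sum(v) for d, v in flags.items())

patient = {
    "biological":     [1, 1, 0, 1, 0],
    "behavioural":    [1, 0, 0, 0, 0],
    "socio-economic": [0, 1, 1, 0, 0],
    "cultural":       [0, 0, 0, 0, 1],
    "environmental":  [0, 0, 0, 0, 0],
}
print(complexity_index(patient))  # 3*3 + 2*1 + 2*2 + 1*1 + 1*0 = 16
```

The weighting makes the same number of positive flags count for more in the biological domain than in the cultural or environmental ones, which is the practical content of the vector slopes.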

14. ANS main control complex three-dimensional computer model development

International Nuclear Information System (INIS)

Cleaves, J.E.; Fletcher, W.M.

1993-01-01

A three-dimensional (3-D) computer model of the Advanced Neutron Source (ANS) main control complex is being developed. The main control complex includes the main control room, the technical support center, the materials irradiation control room, computer equipment rooms, communications equipment rooms, cable-spreading rooms, and some support offices and breakroom facilities. The model will be used to provide facility designers and operations personnel with capabilities for fit-up/interference analysis, visual "walk-throughs" for optimizing maintainability, and human factors and operability analyses. It will be used to determine performance design characteristics, to generate construction drawings, and to integrate control room layout, equipment mounting, grounding equipment, electrical cabling, and utility services into ANS building designs. This paper describes the development of the initial phase of the 3-D computer model for the ANS main control complex and plans for its development and use.

15. Nostradamus 2014 prediction, modeling and analysis of complex systems

CERN Document Server

Suganthan, Ponnuthurai; Chen, Guanrong; Snasel, Vaclav; Abraham, Ajith; Rössler, Otto

2014-01-01

The prediction of the behavior of complex systems, and the analysis and modeling of their structure, is a vitally important problem in engineering, economy and generally in science today. Examples of such systems can be seen in the world around us (including our bodies) and of course in almost every scientific discipline, including such “exotic” domains as the earth’s atmosphere, turbulent fluids, economics (exchange rates and stock markets), population growth, physics (control of plasma), information flow in social networks and its dynamics, chemistry and complex networks. To understand such complex dynamics, which often exhibit strange behavior, and to use it in research or industrial applications, it is paramount to create models of it. For this purpose there exists a rich spectrum of methods, from classical ones such as ARMA models or the Box-Jenkins method to modern ones like evolutionary computation, neural networks, fuzzy logic, geometry, and deterministic chaos, amongst others. This proceedings book is a collection of accepted ...

16. Soil temperature variability in complex terrain measured using fiber-optic distributed temperature sensing

Science.gov (United States)

Soil temperature (Ts) exerts critical controls on hydrologic and biogeochemical processes but magnitude and nature of Ts variability in a landscape setting are rarely documented. Fiber optic distributed temperature sensing systems (FO-DTS) potentially measure Ts at high density over a large extent. ...

17. The effects of model and data complexity on predictions from species distributions models

DEFF Research Database (Denmark)

García-Callejas, David; Bastos, Miguel

2016-01-01

How complex does a model need to be to provide useful predictions is a matter of continuous debate across environmental sciences. In the species distributions modelling literature, studies have demonstrated that more complex models tend to provide better fits. However, studies have also shown that predictive performance does not always increase with complexity. Testing of species distributions models is challenging because independent data for testing are often lacking, but a more general problem is that model complexity has never been formally described in such studies. Here, we systematically...

18. Modeling Variable Phanerozoic Oxygen Effects on Physiology and Evolution.

Science.gov (United States)

Graham, Jeffrey B; Jew, Corey J; Wegner, Nicholas C

2016-01-01

Geochemical approximation of Earth's atmospheric O2 level over geologic time prompts hypotheses linking hyper- and hypoxic atmospheres to transformative events in the evolutionary history of the biosphere. Such correlations, however, remain problematic due to the relative imprecision of the timing and scope of oxygen change and the looseness of its overlay on the chronology of key biotic events such as radiations, evolutionary innovation, and extinctions. There are nevertheless general attributions of atmospheric oxygen concentration to key evolutionary changes among groups having a primary dependence upon oxygen diffusion for respiration. These include the occurrence of Devonian hypoxia and the accentuation of air-breathing dependence leading to the origin of vertebrate terrestriality, the occurrence of Carboniferous-Permian hyperoxia and the major radiation of early tetrapods and the origins of insect flight and gigantism, and the Mid-Late Permian oxygen decline accompanying the Permian extinction. However, because of variability between and error within different atmospheric models, there is little basis for postulating correlations outside the Late Paleozoic. Other problems arising in the correlation of paleo-oxygen with significant biological events include tendencies to ignore the role of blood pigment affinity modulation in maintaining homeostasis, the slow rates of O2 change that would have allowed for adaptation, and significant respiratory and circulatory modifications that can and do occur without changes in atmospheric oxygen. The purpose of this paper is thus to refocus thinking about basic questions central to the biological and physiological implications of O2 change over geological time.

19. Stochastic transport models for mixing in variable-density turbulence

Science.gov (United States)

Bakosi, J.; Ristorcelli, J. R.

2011-11-01

In variable-density (VD) turbulent mixing, where very-different-density materials coexist, the density fluctuations can be an order of magnitude larger than their mean. Density fluctuations are non-negligible in the inertia terms of the Navier-Stokes equation which has both quadratic and cubic nonlinearities. Very different mixing rates of different materials give rise to large differential accelerations and some fundamentally new physics that is not seen in constant-density turbulence. In VD flows material mixing is active in a sense far stronger than that applied in the Boussinesq approximation of buoyantly-driven flows: the mass fraction fluctuations are coupled to each other and to the fluid momentum. Statistical modeling of VD mixing requires accounting for basic constraints that are not important in the small-density-fluctuation passive-scalar-mixing approximation: the unit-sum of mass fractions, bounded sample space, and the highly skewed nature of the probability densities become essential. We derive a transport equation for the joint probability of mass fractions, equivalent to a system of stochastic differential equations, that is consistent with VD mixing in multi-component turbulence and consistently reduces to passive scalar mixing in constant-density flows.

20. Modeling geophysical complexity: a case for geometric determinism

Directory of Open Access Journals (Sweden)

C. E. Puente

2007-01-01

It has been customary in the last few decades to employ stochastic models to represent complex data sets encountered in geophysics, particularly in hydrology. This article reviews a deterministic geometric procedure for data modeling, one that represents whole data sets as derived distributions of simple multifractal measures via fractal functions. It is shown how such a procedure may lead to faithful holistic representations of existing geophysical data sets that, while complementing existing representations via stochastic methods, may also provide a compact language for geophysical complexity. The implications of these ideas, both scientific and philosophical, are stressed.

1. Comparing and improving proper orthogonal decomposition (POD) to reduce the complexity of groundwater models

Science.gov (United States)

Gosses, Moritz; Nowak, Wolfgang; Wöhling, Thomas

2017-04-01

Physically-based modeling is a wide-spread tool in understanding and management of natural systems. With the high complexity of many such models and the huge amount of model runs necessary for parameter estimation and uncertainty analysis, overall run times can be prohibitively long even on modern computer systems. An encouraging strategy to tackle this problem is model reduction. In this contribution, we compare different proper orthogonal decomposition (POD, Siade et al. (2010)) methods and their potential applications to groundwater models. The POD method performs a singular value decomposition on system states as simulated by the complex (e.g., PDE-based) groundwater model taken at several time-steps, so-called snapshots. The singular vectors with the highest information content resulting from this decomposition are then used as a basis for projection of the system of model equations onto a subspace of much lower dimensionality than the original complex model, thereby greatly reducing complexity and accelerating run times. In its original form, this method is only applicable to linear problems. Many real-world groundwater models are non-linear, though. These non-linearities are introduced either through model structure (unconfined aquifers) or boundary conditions (certain Cauchy boundaries, like rivers with variable connection to the groundwater table). To date, applications of POD have focused on groundwater models simulating pumping tests in confined aquifers with constant head boundaries. In contrast, POD model reduction either greatly loses accuracy or does not significantly reduce model run time if the above-mentioned non-linearities are introduced. We have also found that variable Dirichlet boundaries are problematic for POD model reduction. An extension to the POD method, called POD-DEIM, has been developed for non-linear groundwater models by Stanko et al. (2016). This method uses spatial interpolation points to build the equation system in the
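The snapshot-projection procedure described in this abstract can be sketched in a few lines for a generic linear system. Everything below (the random matrix A, the snapshot count, the reduced rank r) is an illustrative stand-in, not the groundwater model itself:

```python
import numpy as np

# Hypothetical full model x' = A x, standing in for the discretized
# (linear) groundwater equations; n, A, dt and r are illustrative.
rng = np.random.default_rng(0)
n = 200                                  # full state dimension
A = -np.eye(n) + 0.01 * rng.standard_normal((n, n))

# 1) Collect snapshots of the full model at several time steps.
x = rng.standard_normal(n)
snapshots, dt = [], 0.01
for _ in range(50):
    x = x + dt * (A @ x)                 # explicit Euler step of the full model
    snapshots.append(x.copy())
S = np.column_stack(snapshots)           # n x m snapshot matrix

# 2) SVD of the snapshot matrix; keep the r leading left singular vectors,
#    i.e. the directions with the highest information content.
U, sv, _ = np.linalg.svd(S, full_matrices=False)
r = 10
Phi = U[:, :r]                           # POD basis, n x r, orthonormal columns

# 3) Galerkin projection of the model equations onto the POD subspace.
A_r = Phi.T @ A @ Phi                    # reduced operator, r x r

# 4) Run the cheap reduced model and lift the result back to full space.
z = Phi.T @ rng.standard_normal(n)
for _ in range(50):
    z = z + dt * (A_r @ z)
x_approx = Phi @ z                       # approximation of the full state
print(Phi.shape, A_r.shape)
```

The reduced system is r x r instead of n x n, which is where the run-time savings come from; as the abstract notes, this plain form breaks down when A depends non-linearly on the state.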

2. Changes in the Complexity of Heart Rate Variability with Exercise Training Measured by Multiscale Entropy-Based Measurements

Directory of Open Access Journals (Sweden)

Frederico Sassoli Fazan

2018-01-01

Quantifying complexity from heart rate variability (HRV) series is a challenging task, and multiscale entropy (MSE), along with its variants, has been demonstrated to be one of the most robust approaches to achieve this goal. Although physical training is known to be beneficial, there is little information about the long-term complexity changes induced by physical conditioning. The present study aimed to quantify the changes in physiological complexity elicited by physical training through multiscale entropy-based complexity measurements. Rats were subjected to a protocol of medium intensity training (n = 13) or a sedentary protocol (n = 12). One-hour HRV series were obtained from all conscious rats five days after the experimental protocol. We estimated MSE, multiscale dispersion entropy (MDE) and multiscale SDiffq from HRV series. Multiscale SDiffq is a recent approach that accounts for entropy differences between a given time series and its shuffled dynamics. From SDiffq, three attributes (q-attributes) were derived, namely SDiffq_max, q_max and q_zero. MSE, MDE and multiscale q-attributes presented similar profiles, except for SDiffq_max. q_max showed significant differences between trained and sedentary groups on Time Scales 6 to 20. Results suggest that physical training increases the system complexity and that multiscale q-attributes provide valuable information about the physiological complexity.
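The MSE computation referenced above follows a standard two-step recipe: coarse-grain the series at each scale, then compute sample entropy of each coarse-grained series. The sketch below is a minimal implementation of that textbook recipe (parameters m = 2, r = 0.2·SD are common defaults), not the authors' SDiffq code:

```python
import numpy as np

def coarse_grain(x, scale):
    """Non-overlapping averages of length `scale` (the MSE coarse-graining)."""
    n = len(x) // scale
    return x[:n * scale].reshape(n, scale).mean(axis=1)

def sample_entropy(x, m=2, r=None):
    """SampEn(m, r) = -ln(A/B), where B and A count template matches of
    length m and m+1 (Chebyshev distance <= r, self-matches excluded)."""
    x = np.asarray(x, dtype=float)
    if r is None:
        r = 0.2 * x.std()
    def count(mm):
        templ = np.array([x[i:i + mm] for i in range(len(x) - mm + 1)])
        c = 0
        for i in range(len(templ)):
            d = np.max(np.abs(templ[i + 1:] - templ[i]), axis=1)
            c += np.sum(d <= r)
        return c
    B, A = count(m), count(m + 1)
    return -np.log(A / B) if A > 0 and B > 0 else np.inf

def mse(x, scales=range(1, 6), m=2):
    x = np.asarray(x, dtype=float)
    r = 0.2 * x.std()        # tolerance fixed from the original series
    return [sample_entropy(coarse_grain(x, s), m, r) for s in scales]

# Sanity check on white noise: coarse-graining shrinks the variance while
# r stays fixed, so the entropy curve of an uncorrelated signal falls.
rng = np.random.default_rng(1)
curve = mse(rng.standard_normal(3000))
print(curve)
```

A signal with long-range structure (such as healthy HRV) instead keeps its entropy roughly flat, or rising, across scales, which is why the multiscale profile rather than a single-scale value is used as the complexity measure.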

3. Variable Width Riparian Model Enhances Landscape and Watershed Condition

Science.gov (United States)

Abood, S. A.; Spencer, L.

2017-12-01

Riparian areas are ecotones that represent about 1% of USFS administered landscape and contribute to numerous valuable ecosystem functions such as wildlife habitat, stream water quality and flows, bank stability and protection against erosion, and values related to diversity, aesthetics and recreation. Riparian zones capture the transitional area between terrestrial and aquatic ecosystems with specific vegetation and soil characteristics which provide critical values/functions and are very responsive to changes in land management activities and uses. Two staff areas at the US Forest Service have coordinated on a two-phase project to support the National Forests in their planning revision efforts and to address rangeland riparian business needs at the Forest Plan and Allotment Management Plan levels. The first part of the project will include a national fine-scale (USGS HUC-12 digit watersheds) inventory of riparian areas on National Forest Service lands in the western United States with riparian land cover, utilizing GIS capabilities and open source geospatial data. The second part of the project will include the application of riparian land cover change and assessment based on selected indicators to assess and monitor riparian areas on an annual/5-year cycle basis. This approach recognizes the dynamic and transitional nature of riparian areas by accounting for hydrologic, geomorphic and vegetation data as inputs into the delineation process. The results suggest that incorporating functional variable width riparian mapping within watershed management planning can improve riparian protection and restoration. The application of the Riparian Buffer Delineation Model (RBDM) approach can provide the agency Watershed Condition Framework (WCF) with observed riparian area condition on an annual basis and on multiple scales. The use of this model to map moderate to low gradient systems of sufficient width in conjunction with an understanding of the influence of distinctive landscape

4. Deterministic ripple-spreading model for complex networks.

Science.gov (United States)

Hu, Xiao-Bing; Wang, Ming; Leeson, Mark S; Hines, Evor L; Di Paolo, Ezequiel

2011-04-01

This paper proposes a deterministic complex network model, which is inspired by the natural ripple-spreading phenomenon. The motivations and main advantages of the model are the following: (i) The establishment of many real-world networks is a dynamic process, where it is often observed that the influence of a few local events spreads out through nodes, and then largely determines the final network topology. Obviously, this dynamic process involves many spatial and temporal factors. By simulating the natural ripple-spreading process, this paper reports a very natural way to set up a spatial and temporal model for such complex networks. (ii) Existing relevant network models are all stochastic models, i.e., with a given input, they cannot output a unique topology. Differently, the proposed ripple-spreading model can uniquely determine the final network topology, and at the same time, the stochastic feature of complex networks is captured by randomly initializing ripple-spreading related parameters. (iii) The proposed model can use an easily manageable number of ripple-spreading related parameters to precisely describe a network topology, which is more memory efficient when compared with traditional adjacency matrix or similar memory-expensive data structures. (iv) The ripple-spreading model has a very good potential for both extensions and applications.

5. Synthesis, Characterization, and Variable-Temperature NMR Studies of Silver(I) Complexes for Selective Nitrene Transfer.

Science.gov (United States)

Huang, Minxue; Corbin, Joshua R; Dolan, Nicholas S; Fry, Charles G; Vinokur, Anastasiya I; Guzei, Ilia A; Schomaker, Jennifer M

2017-06-05

An array of silver complexes supported by nitrogen-donor ligands catalyzes the transformation of C═C and C-H bonds to valuable C-N bonds via nitrene transfer. The ability to achieve high chemoselectivity and site selectivity in an amination event requires an understanding of both the solid- and solution-state behavior of these catalysts. X-ray structural characterizations were helpful in determining ligand features that promote the formation of monomeric versus dimeric complexes. Variable-temperature 1H and DOSY NMR experiments were especially useful for understanding how the ligand identity influences the nuclearity, coordination number, and fluxional behavior of silver(I) complexes in solution. These insights are valuable for developing improved ligand designs.

6. Trispyrazolylborate Complexes: An Advanced Synthesis Experiment Using Paramagnetic NMR, Variable-Temperature NMR, and EPR Spectroscopies

Science.gov (United States)

Abell, Timothy N.; McCarrick, Robert M.; Bretz, Stacey Lowery; Tierney, David L.

2017-01-01

A structured inquiry experiment for inorganic synthesis has been developed to introduce undergraduate students to advanced spectroscopic techniques including paramagnetic nuclear magnetic resonance and electron paramagnetic resonance. Students synthesize multiple complexes with unknown first row transition metals and identify the unknown metals by…

7. Pathogen burden, co-infection and major histocompatibility complex variability in the European badger (Meles meles)

NARCIS (Netherlands)

Sin, Yung Wa; Annavi, Geetha; Dugdale, Hannah L.; Newman, Chris; Burke, Terry; MacDonald, David W.

2014-01-01

Pathogen-mediated selection is thought to maintain the extreme diversity in the major histocompatibility complex (MHC) genes, operating through the heterozygote advantage, rare-allele advantage and fluctuating selection mechanisms. Heterozygote advantage (i.e. recognizing and binding a wider range

8. Variable Renewable Energy in Long-Term Planning Models: A Multi-Model Perspective

Energy Technology Data Exchange (ETDEWEB)

Cole, Wesley [National Renewable Energy Lab. (NREL), Golden, CO (United States); Frew, Bethany [National Renewable Energy Lab. (NREL), Golden, CO (United States); Mai, Trieu [National Renewable Energy Lab. (NREL), Golden, CO (United States); Sun, Yinong [National Renewable Energy Lab. (NREL), Golden, CO (United States); Bistline, John [Electric Power Research Inst. (EPRI), Knoxville, TN (United States); Blanford, Geoffrey [Electric Power Research Inst. (EPRI), Knoxville, TN (United States); Young, David [Electric Power Research Inst. (EPRI), Knoxville, TN (United States); Marcy, Cara [U.S. Energy Information Administration, Washington, DC (United States); Namovicz, Chris [U.S. Energy Information Administration, Washington, DC (United States); Edelman, Risa [US Environmental Protection Agency (EPA), Washington, DC (United States); Meroney, Bill [US Environmental Protection Agency (EPA), Washington, DC (United States); Sims, Ryan [US Environmental Protection Agency (EPA), Washington, DC (United States); Stenhouse, Jeb [US Environmental Protection Agency (EPA), Washington, DC (United States); Donohoo-Vallett, Paul [Dept. of Energy (DOE), Washington DC (United States)

2017-11-01

Long-term capacity expansion models of the U.S. electricity sector have long been used to inform electric sector stakeholders and decision-makers. With the recent surge in variable renewable energy (VRE) generators — primarily wind and solar photovoltaics — the need to appropriately represent VRE generators in these long-term models has increased. VRE generators are especially difficult to represent for a variety of reasons, including their variability, uncertainty, and spatial diversity. This report summarizes the analyses and model experiments that were conducted as part of two workshops on modeling VRE for national-scale capacity expansion models. It discusses the various methods for treating VRE among four modeling teams from the Electric Power Research Institute (EPRI), the U.S. Energy Information Administration (EIA), the U.S. Environmental Protection Agency (EPA), and the National Renewable Energy Laboratory (NREL). The report reviews the findings from the two workshops and emphasizes the areas where there is still need for additional research and development on analysis tools to incorporate VRE into long-term planning and decision-making. This research is intended to inform the energy modeling community on the modeling of variable renewable resources, and is not intended to advocate for or against any particular energy technologies, resources, or policies.

9. An adaptive sampling method for variable-fidelity surrogate models using improved hierarchical kriging

Science.gov (United States)

Hu, Jiexiang; Zhou, Qi; Jiang, Ping; Shao, Xinyu; Xie, Tingli

2018-01-01

Variable-fidelity (VF) modelling methods have been widely used in complex engineering system design to mitigate the computational burden. Building a VF model generally includes two parts: design of experiments and metamodel construction. In this article, an adaptive sampling method based on improved hierarchical kriging (ASM-IHK) is proposed to refine the VF model. First, an improved hierarchical kriging model is developed as the metamodel, in which the low-fidelity model is varied through a polynomial response surface function to capture the characteristics of the high-fidelity model. Secondly, to reduce local approximation errors, an active learning strategy based on a sequential sampling method is introduced to make full use of the information already acquired at the current sampling points and to guide the sampling process of the high-fidelity model. Finally, two numerical examples and the modelling of the aerodynamic coefficient for an aircraft are provided to demonstrate the approximation capability of the proposed approach, as well as three other metamodelling methods and two sequential sampling methods. The results show that ASM-IHK provides a more accurate metamodel at the same simulation cost, which is very important in metamodel-based engineering design problems.

10. Estimating net present value variability for deterministic models

NARCIS (Netherlands)

van Groenendaal, W.J.H.

1995-01-01

For decision makers the variability in the net present value (NPV) of an investment project is an indication of the project's risk. So-called risk analysis is one way to estimate this variability. However, risk analysis requires knowledge about the stochastic character of the inputs. For large,

11. From complex spatial dynamics to simple Markov chain models: do predators and prey leave footprints?

DEFF Research Database (Denmark)

Nachman, Gøsta Støger; Borregaard, Michael Krabbe

2010-01-01

In this paper we present a concept for using presence-absence data to recover information on the population dynamics of predator-prey systems. We use a highly complex and spatially explicit simulation model of a predator-prey mite system to generate simple presence-absence data: the number … to another, are then depicted in a state transition diagram, constituting the "footprints" of the underlying population dynamics. We investigate to what extent changes in the population processes modeled in the complex simulation (i.e. the predator's functional response and the dispersal rates of both …) … of transition probabilities on state variables, and combine this information in a Markov chain transition matrix model. Finally, we use this extended model to predict the long-term dynamics of the system and to reveal its asymptotic steady state properties.
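The core of this "footprint" idea, estimating a transition matrix from observed state sequences and extracting its long-term (steady state) behavior, can be illustrated with synthetic data. The four presence-absence states and the matrix P_true below are invented for the example, not taken from the paper:

```python
import numpy as np

# Patches are coded by (prey, predator) presence-absence into four states;
# a transition matrix is estimated from successive observations and its
# stationary distribution gives the long-term state frequencies.
states = ["00", "01", "10", "11"]        # absent/present for prey, predator

rng = np.random.default_rng(2)
# A hypothetical "true" transition matrix (rows sum to 1), used only
# to generate synthetic observation data.
P_true = np.array([[0.6, 0.3, 0.05, 0.05],
                   [0.2, 0.4, 0.10, 0.30],
                   [0.5, 0.1, 0.30, 0.10],
                   [0.1, 0.2, 0.20, 0.50]])

# Simulate one long observation sequence of states.
seq = [0]
for _ in range(20000):
    seq.append(rng.choice(4, p=P_true[seq[-1]]))

# Estimate the transition matrix by counting observed transitions.
C = np.zeros((4, 4))
for a, b in zip(seq[:-1], seq[1:]):
    C[a, b] += 1
P_hat = C / C.sum(axis=1, keepdims=True)

# Stationary distribution: left eigenvector of P_hat for eigenvalue 1.
w, v = np.linalg.eig(P_hat.T)
pi = np.real(v[:, np.argmin(np.abs(w - 1))])
pi = pi / pi.sum()
print(dict(zip(states, pi.round(3))))
```

The estimated row-stochastic matrix P_hat is the "footprint"; its stationary vector pi predicts the asymptotic fraction of patches in each predator/prey occupancy state.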

12. Predictive modelling of complex agronomic and biological systems.

Science.gov (United States)

Keurentjes, Joost J B; Molenaar, Jaap; Zwaan, Bas J

2013-09-01

Biological systems are tremendously complex in their functioning and regulation. Studying the multifaceted behaviour and describing the performance of such complexity has challenged the scientific community for years. The reduction of real-world intricacy into simple descriptive models has therefore convinced many researchers of the usefulness of introducing mathematics into biological sciences. Predictive modelling takes such an approach another step further in that it takes advantage of existing knowledge to project the performance of a system in alternating scenarios. The ever growing amounts of available data generated by assessing biological systems at increasingly higher detail provide unique opportunities for future modelling and experiment design. Here we aim to provide an overview of the progress made in modelling over time and the currently prevalent approaches for iterative modelling cycles in modern biology. We will further argue for the importance of versatility in modelling approaches, including parameter estimation, model reduction and network reconstruction. Finally, we will discuss the difficulties in overcoming the mathematical interpretation of in vivo complexity and address some of the future challenges lying ahead. © 2013 John Wiley & Sons Ltd.

13. The complexity of millennial-scale variability in southwestern Europe during MIS 11

Science.gov (United States)

Oliveira, Dulce; Desprat, Stéphanie; Rodrigues, Teresa; Naughton, Filipa; Hodell, David; Trigo, Ricardo; Rufino, Marta; Lopes, Cristina; Abrantes, Fátima; Sánchez Goñi, Maria Fernanda

2016-11-01

Climatic variability of Marine Isotope Stage (MIS) 11 is examined using a new high-resolution direct land-sea comparison from the SW Iberian margin Site U1385. This study, based on pollen and biomarker analyses, documents regional vegetation, terrestrial climate and sea surface temperature (SST) variability. Suborbital climate variability is revealed by a series of forest decline events suggesting repeated cooling and drying episodes in SW Iberia throughout MIS 11. Only the most severe events on land are coeval with SST decreases, under larger ice volume conditions. Our study shows that the diverse expression (magnitude, character and duration) of the millennial-scale cooling events in SW Europe relies on atmospheric and oceanic processes whose predominant role likely depends on baseline climate states. Repeated atmospheric shifts recalling the positive North Atlantic Oscillation mode, inducing dryness in SW Iberia without systematic SST changes, would prevail during low ice volume conditions. In contrast, disruption of the Atlantic meridional overturning circulation (AMOC), related to iceberg discharges, colder SSTs and an intensified hydrological regime, would be responsible for the coldest and driest episodes of prolonged duration in SW Europe.

14. Assessing Complexity in Learning Outcomes--A Comparison between the SOLO Taxonomy and the Model of Hierarchical Complexity

Science.gov (United States)

Stålne, Kristian; Kjellström, Sofia; Utriainen, Jukka

2016-01-01

An important aspect of higher education is to educate students who can manage complex relationships and solve complex problems. Teachers need to be able to evaluate course content with regard to complexity, as well as evaluate students' ability to assimilate complex content and express it in the form of a learning outcome. One model for evaluating…

15. Modelling, Estimation and Control of Networked Complex Systems

CERN Document Server

Chiuso, Alessandro; Frasca, Mattia; Rizzo, Alessandro; Schenato, Luca; Zampieri, Sandro

2009-01-01

The paradigm of complexity is pervading both science and engineering, leading to the emergence of novel approaches oriented at the development of a systemic view of the phenomena under study; the definition of powerful tools for modelling, estimation, and control; and the cross-fertilization of different disciplines and approaches. This book is devoted to networked systems which are one of the most promising paradigms of complexity. It is demonstrated that complex, dynamical networks are powerful tools to model, estimate, and control many interesting phenomena, like agent coordination, synchronization, social and economics events, networks of critical infrastructures, resources allocation, information processing, or control over communication networks. Moreover, it is shown how the recent technological advances in wireless communication and decreasing in cost and size of electronic devices are promoting the appearance of large inexpensive interconnected systems, each with computational, sensing and mobile cap...

16. Infinite Multiple Membership Relational Modeling for Complex Networks

DEFF Research Database (Denmark)

Mørup, Morten; Schmidt, Mikkel Nørgaard; Hansen, Lars Kai

Learning latent structure in complex networks has become an important problem fueled by many types of networked data originating from practically all fields of science. In this paper, we propose a new non-parametric Bayesian multiple-membership latent feature model for networks. Contrary to existing … multiple-membership models that scale quadratically in the number of vertices, the proposed model scales linearly in the number of links, admitting multiple-membership analysis in large-scale networks. We demonstrate a connection between the single membership relational model and multiple membership models and show…

17. Modeling data irregularities and structural complexities in data envelopment analysis

CERN Document Server

Zhu, Joe

2007-01-01

In a relatively short period of time, Data Envelopment Analysis (DEA) has grown into a powerful quantitative, analytical tool for measuring and evaluating performance. It has been successfully applied to a whole variety of problems in many different contexts worldwide. This book deals with the micro aspects of handling and modeling data issues in modeling DEA problems. DEA's use has grown with its capability of dealing with complex "service industry" and the "public service domain" types of problems that require modeling of both qualitative and quantitative data. This handbook treatment deals with specific data problems including: imprecise or inaccurate data; missing data; qualitative data; outliers; undesirable outputs; quality data; statistical analysis; software and other data aspects of modeling complex DEA problems. In addition, the book will demonstrate how to visualize DEA results when the data is more than 3-dimensional, and how to identify efficiency units quickly and accurately.

18. Modeling the propagation of mobile malware on complex networks

Science.gov (United States)

Liu, Wanping; Liu, Chao; Yang, Zheng; Liu, Xiaoyang; Zhang, Yihao; Wei, Zuxue

2016-08-01

In this paper, the spreading behavior of malware across mobile devices is addressed. By introducing complex networks to model mobile networks, which follow a power-law degree distribution, a novel epidemic model for mobile malware propagation is proposed. The spreading threshold that governs the dynamics of the model is calculated. Theoretically, the asymptotic stability of the malware-free equilibrium is confirmed when the threshold is below unity, and the global stability is further proved under some sufficient conditions. The influences of different model parameters as well as the network topology on malware propagation are also analyzed. Our theoretical studies and numerical simulations show that networks with higher heterogeneity are conducive to the diffusion of malware, and complex networks with lower power-law exponents benefit malware spreading.
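For epidemic models on power-law networks, a standard heterogeneous mean-field result gives the spreading threshold as lambda_c = ⟨k⟩/⟨k²⟩. The sketch below uses that textbook formula as a stand-in (not necessarily the exact threshold derived in this paper) to show why lower power-law exponents ease spreading; the cutoff degrees kmin and kmax are illustrative:

```python
import numpy as np

def epidemic_threshold(gamma, kmin=2, kmax=1000):
    """Heterogeneous mean-field SIS threshold lambda_c = <k>/<k^2> for a
    network with degree distribution P(k) ~ k^(-gamma) on [kmin, kmax]."""
    k = np.arange(kmin, kmax + 1, dtype=float)
    p = k ** (-gamma)
    p /= p.sum()                         # normalize the degree distribution
    return (p * k).sum() / (p * k**2).sum()

# Lower exponents mean heavier-tailed degrees, hence a much larger <k^2>
# and a smaller threshold: malware spreads more easily on such networks.
for g in (2.1, 2.5, 3.0):
    print(g, epidemic_threshold(g))
```

As kmax grows with 2 < gamma < 3, ⟨k²⟩ diverges and the threshold tends to zero, which is the usual explanation for why heterogeneous (scale-free-like) mobile networks are so vulnerable to malware diffusion.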

19. Application of soft computing based hybrid models in hydrological variables modeling: a comprehensive review

Science.gov (United States)

Fahimi, Farzad; Yaseen, Zaher Mundher; El-shafie, Ahmed

2017-05-01

Since the middle of the twentieth century, artificial intelligence (AI) models have been used widely in engineering and science problems. Water resource variable modeling and prediction are among the most challenging issues in water engineering. The artificial neural network (ANN) is a common approach used to tackle this problem by using viable and efficient models. Numerous ANN models have been successfully developed to achieve more accurate results. In the current review, different ANN models in water resource applications and hydrological variable predictions are reviewed and outlined. In addition, recent hybrid models and their structures, input preprocessing, and optimization techniques are discussed and the results are compared with similar previous studies. Moreover, to achieve a comprehensive view of the literature, many articles that applied ANN models together with other techniques are included. Consequently, coupling procedures, model evaluation, and performance comparison of hybrid models with conventional ANN models are assessed, as well as the taxonomy and structures of hybrid ANN models. Finally, current challenges and recommendations for future research are indicated and new hybrid approaches are proposed.

20. Modelling and simulating in-stent restenosis with complex automata

NARCIS (Netherlands)

Hoekstra, A.G.; Lawford, P.; Hose, R.

2010-01-01

In-stent restenosis, the maladaptive response of a blood vessel to injury caused by the deployment of a stent, is a multiscale system involving a large number of biological and physical processes. We describe a Complex Automata Model for in-stent restenosis, coupling bulk flow, drug diffusion, and

1. The Complexity of Developmental Predictions from Dual Process Models

Science.gov (United States)

Stanovich, Keith E.; West, Richard F.; Toplak, Maggie E.

2011-01-01

Drawing developmental predictions from dual-process theories is more complex than is commonly realized. Overly simplified predictions drawn from such models may lead to premature rejection of the dual process approach as one of many tools for understanding cognitive development. Misleading predictions can be avoided by paying attention to several…

2. Constructive Lower Bounds on Model Complexity of Shallow Perceptron Networks

Czech Academy of Sciences Publication Activity Database

Kůrková, Věra

2018-01-01

Vol. 29, No. 7 (2018), pp. 305-315 ISSN 0941-0643 R&D Projects: GA ČR GA15-18108S Institutional support: RVO:67985807 Keywords: shallow and deep networks * model complexity and sparsity * signum perceptron networks * finite mappings * variational norms * Hadamard matrices Subject RIV: IN - Informatics, Computer Science Impact factor: 2.505, year: 2016

3. Complexity effects in choice experiments-based models

NARCIS (Netherlands)

Dellaert, B.G.C.; Donkers, B.; van Soest, A.H.O.

2012-01-01

Many firms rely on choice experiment–based models to evaluate future marketing actions under various market conditions. This research investigates choice complexity (i.e., number of alternatives, number of attributes, and utility similarity between the most attractive alternatives) and individual

4. Kolmogorov complexity, pseudorandom generators and statistical models testing

Czech Academy of Sciences Publication Activity Database

Šindelář, Jan; Boček, Pavel

2002-01-01

Vol. 38, No. 6 (2002), pp. 747-759 ISSN 0023-5954 R&D Projects: GA ČR GA102/99/1564 Institutional research plan: CEZ:AV0Z1075907 Keywords: Kolmogorov complexity * pseudorandom generators * statistical models testing Subject RIV: BB - Applied Statistics, Operational Research Impact factor: 0.341, year: 2002

5. Model-based safety architecture framework for complex systems

NARCIS (Netherlands)

Schuitemaker, Katja; Rajabali Nejad, Mohammadreza; Braakhuis, J.G.; Podofillini, Luca; Sudret, Bruno; Stojadinovic, Bozidar; Zio, Enrico; Kröger, Wolfgang

2015-01-01

The shift to transparency and rising need of the general public for safety, together with the increasing complexity and interdisciplinarity of modern safety-critical Systems of Systems (SoS) have resulted in a Model-Based Safety Architecture Framework (MBSAF) for capturing and sharing architectural

6. On the general procedure for modelling complex ecological systems

International Nuclear Information System (INIS)

He Shanyu.

1987-12-01

In this paper, the principle of a general procedure for modelling complex ecological systems, i.e. the Adaptive Superposition Procedure (ASP), is briefly stated. The result of applying ASP in a national project for ecological regionalization is also described. (author). 3 refs

7. The dynamic complexity of a three species food chain model

International Nuclear Information System (INIS)

Lv Songjuan; Zhao Min

2008-01-01

In this paper, a three-species food chain model is investigated analytically, using ecological theory, and by numerical simulation. Bifurcation diagrams are obtained for biologically feasible parameters. The results show that the system exhibits rich complexity features such as stable, periodic and chaotic dynamics.
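
A minimal sketch of such a system, assuming a Hastings-Powell-type chain with Holling type-II functional responses; the parameter values are illustrative, not those of the paper:

```python
# Sketch of a Hastings-Powell-type three-species food chain (x: prey,
# y: intermediate predator, z: top predator) with Holling type-II
# functional responses, integrated by fixed-step RK4.
# Parameter values are illustrative, not taken from the paper.
def food_chain(s, a1=5.0, b1=3.0, a2=0.1, b2=2.0, d1=0.4, d2=0.01):
    x, y, z = s
    f1 = a1 * x / (1.0 + b1 * x)          # prey -> predator response
    f2 = a2 * y / (1.0 + b2 * y)          # predator -> top-predator response
    return (x * (1.0 - x) - f1 * y,
            f1 * y - f2 * z - d1 * y,
            f2 * z - d2 * z)

def rk4_step(f, s, h):
    k1 = f(s)
    k2 = f(tuple(v + 0.5 * h * k for v, k in zip(s, k1)))
    k3 = f(tuple(v + 0.5 * h * k for v, k in zip(s, k2)))
    k4 = f(tuple(v + h * k for v, k in zip(s, k3)))
    return tuple(v + h / 6.0 * (a + 2 * b + 2 * c + d)
                 for v, a, b, c, d in zip(s, k1, k2, k3, k4))

state = (0.8, 0.2, 8.0)
for _ in range(10000):                    # integrate to t = 100 with h = 0.01
    state = rk4_step(food_chain, state, 0.01)
# For these parameters the trajectory stays bounded while wandering chaotically.
```

Sweeping one parameter (e.g. b1) and recording the attractor values yields the kind of bifurcation diagram the abstract describes.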

8. Interannual Tropical Rainfall Variability in General Circulation Model Simulations Associated with the Atmospheric Model Intercomparison Project.

Science.gov (United States)

Sperber, K. R.; Palmer, T. N.

1996-11-01

The interannual variability of rainfall over the Indian subcontinent, the African Sahel, and the Nordeste region of Brazil has been evaluated in 32 models for the period 1979-88 as part of the Atmospheric Model Intercomparison Project (AMIP). The interannual variations of Nordeste rainfall are the most readily captured, owing to the intimate link with Pacific and Atlantic sea surface temperatures. The precipitation variations over India and the Sahel are less well simulated. Additionally, an Indian monsoon wind shear index was calculated for each model. Evaluation of the interannual variability of a wind shear index over the summer monsoon region indicates that the models exhibit greater fidelity in capturing the large-scale dynamic fluctuations than the regional-scale rainfall variations. A rainfall/SST teleconnection quality control was used to objectively stratify model performance. Skill scores improved for those models that qualitatively simulated the observed rainfall/El Niño-Southern Oscillation SST correlation pattern. This subset of models also had a rainfall climatology that was in better agreement with observations, indicating a link between systematic model error and the ability to simulate interannual variations. A suite of six European Centre for Medium-Range Weather Forecasts (ECMWF) AMIP runs (differing only in their initial conditions) has also been examined. As observed, all-India rainfall was enhanced in 1988 relative to 1987 in each of these realizations. All-India rainfall variability during other years showed little or no predictability, possibly due to internal chaotic dynamics associated with intraseasonal monsoon fluctuations and/or unpredictable land surface process interactions. The interannual variations of Nordeste rainfall were best represented. The State University of New York at Albany/National Center for Atmospheric Research Genesis model was run in five initial condition realizations. In this model, the Nordeste rainfall

9. Modeling Stochastic Complexity in Complex Adaptive Systems: Non-Kolmogorov Probability and the Process Algebra Approach.

Science.gov (United States)

Sulis, William H

2017-10-01

Walter Freeman III pioneered the application of nonlinear dynamical systems theories and methodologies in his work on mesoscopic brain dynamics. Sadly, mainstream psychology and psychiatry still cling to linear correlation-based data analysis techniques, which threaten to subvert the process of experimentation and theory building. In order to progress, it is necessary to develop tools capable of managing the stochastic complexity of complex biopsychosocial systems, which includes multilevel feedback relationships, nonlinear interactions, chaotic dynamics and adaptability. In addition, however, these systems exhibit intrinsic randomness, non-Gaussian probability distributions, non-stationarity, contextuality, and non-Kolmogorov probabilities, as well as the absence of mean and/or variance and conditional probabilities. These properties and their implications for statistical analysis are discussed. An alternative approach, the Process Algebra approach, is described. It is a generative model, capable of generating non-Kolmogorov probabilities. It has proven useful in addressing fundamental problems in quantum mechanics and in the modeling of developing psychosocial systems.

10. Major histocompatibility complex class III genes and susceptibility to immunoglobulin A deficiency and common variable immunodeficiency.

OpenAIRE

Volanakis, J E; Zhu, Z B; Schaffer, F M; Macon, K J; Palermos, J; Barger, B O; Go, R; Campbell, R D; Schroeder, H W; Cooper, M D

1992-01-01

We have proposed that significant subsets of individuals with IgA deficiency (IgA-D) and common variable immunodeficiency (CVID) may represent polar ends of a clinical spectrum reflecting a single underlying genetic defect. This proposal was supported by our finding that individuals with these immunodeficiencies have in common a high incidence of C4A gene deletions and C2 rare gene alleles. Here we present our analysis of the MHC haplotypes of 12 IgA-D and 19 CVID individuals from 21 families...

11. Some elements of a theory of multidimensional complex variables. I - General theory. II - Expansions of analytic functions and application to fluid flows

Science.gov (United States)

Martin, E. Dale

1989-01-01

The paper introduces a new theory of N-dimensional complex variables and analytic functions which, for N greater than 2, is both a direct generalization and a close analog of the theory of ordinary complex variables. The algebra in the present theory is a commutative ring, not a field. Functions of a three-dimensional variable are defined, and the definition of the derivative then leads to analytic functions.

12. The Integration of Continuous and Discrete Latent Variable Models: Potential Problems and Promising Opportunities

Science.gov (United States)

Bauer, Daniel J.; Curran, Patrick J.

2004-01-01

Structural equation mixture modeling (SEMM) integrates continuous and discrete latent variable models. Drawing on prior research on the relationships between continuous and discrete latent variable models, the authors identify 3 conditions that may lead to the estimation of spurious latent classes in SEMM: misspecification of the structural model,…

13. Selecting candidate predictor variables for the modelling of post ...

African Journals Online (AJOL)

Objectives: The objective of this project was to determine the variables most likely to be associated with post- .... (as defined subjectively by the research team) in global .... ed on their lack of knowledge of wealth scoring tools. ... HIV serology.

14. Approaches for modeling within subject variability in pharmacometric count data analysis: dynamic inter-occasion variability and stochastic differential equations.

Science.gov (United States)

Deng, Chenhui; Plan, Elodie L; Karlsson, Mats O

2016-06-01

Parameter variation in pharmacometric analysis studies can be characterized as within subject parameter variability (WSV) in pharmacometric models. WSV has previously been successfully modeled using inter-occasion variability (IOV), but also stochastic differential equations (SDEs). In this study, two approaches, dynamic inter-occasion variability (dIOV) and adapted stochastic differential equations, were proposed to investigate WSV in pharmacometric count data analysis. These approaches were applied to published count models for seizure counts and Likert pain scores. Both approaches improved the model fits significantly. In addition, stochastic simulation and estimation were used to explore further the capability of the two approaches to diagnose and improve models where existing WSV is not recognized. The results of simulations confirmed the gain in introducing WSV as dIOV and SDEs when parameters vary randomly over time. Further, the approaches were also informative as diagnostics of model misspecification, when parameters changed systematically over time but this was not recognized in the structural model. The proposed approaches in this study offer strategies to characterize WSV and are not restricted to count data.

15. Modelling for Fuel Optimal Control of a Variable Compression Engine

OpenAIRE

Nilsson, Ylva

2007-01-01

Variable compression engines are a means to meet the demand for lower fuel consumption. A high compression ratio results in high engine efficiency, but also increases the knock tendency. On conventional engines with a fixed compression ratio, knock is avoided by retarding the ignition angle. The variable compression engine offers an extra dimension in knock control, since both the ignition angle and the compression ratio can be adjusted. The central question is thus for what combination of compression ra...

16. Exact solutions to a nonlinear dispersive model with variable coefficients

International Nuclear Information System (INIS)

Yin Jun; Lai Shaoyong; Qing Yin

2009-01-01

A mathematical technique based on an auxiliary differential equation and the symbolic computation system Maple is employed to investigate a prototypical and nonlinear K(n, n) equation with variable coefficients. The exact solutions to the equation are constructed analytically under various circumstances. It is shown that the variable coefficients and the exponent appearing in the equation determine the quantitative change in the physical structures of the solutions.

17. Modeling and designing of variable-period and variable-pole-number undulator

Directory of Open Access Journals (Sweden)

I. Davidyuk

2016-02-01

The concept of the permanent-magnet variable-period undulator (VPU) was proposed several years ago and has found few implementations so far. VPUs have some advantages over conventional undulators, e.g., a wider range of radiation wavelength tuning and the option to increase the number of poles for shorter periods. Both these advantages will be realized in the VPU now under development at Budker INP. In this paper, we present the results of 2D and 3D magnetic field simulations and discuss some design features of this VPU.

18. Complex analyses of inverted repeats in mitochondrial genomes revealed their importance and variability.

Science.gov (United States)

Cechová, Jana; Lýsek, Jirí; Bartas, Martin; Brázda, Václav

2018-04-01

The NCBI database contains mitochondrial DNA (mtDNA) genomes from numerous species. We investigated the presence and locations of inverted repeat sequences (IRs) in these mtDNA sequences, which are known to be important for regulating nuclear genomes. IRs were identified in mtDNA in all species. IR lengths and frequencies correlate with evolutionary age; the greatest variability was detected in subgroups of plants and fungi and the lowest variability in mammals. IR presence is non-random and evolutionarily favoured. The frequency of IRs generally decreased with IR length, but not for IRs 24 or 30 bp long, which are 1.5 times more abundant. IRs are enriched in sequences from the replication origin, followed by D-loop, stem-loop and miscellaneous sequences, pointing to the importance of IRs in regulatory regions of mitochondrial DNA. Data were produced using Palindrome analyser, freely available on the web at http://bioinformatics.ibp.cz. Contact: vaclav@ibp.cz. Supplementary data are available at Bioinformatics online.
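
A naive inverted-repeat scan can be sketched in a few lines. This is not the algorithm behind Palindrome analyser, only an illustration of what an IR with an "arm" and a "spacer" means:

```python
# Sketch: naive detection of inverted repeats (IRs) in a DNA sequence.
# An IR is an arm followed, after an optional short spacer, by the arm's
# reverse complement. A quadratic scan like this is fine for illustration;
# real tools use much faster indexing.
COMP = str.maketrans("ACGT", "TGCA")

def revcomp(s):
    """Reverse complement of a DNA string (A<->T, C<->G)."""
    return s.translate(COMP)[::-1]

def find_inverted_repeats(seq, arm=6, max_spacer=3):
    """Return (start, arm, spacer) triples where seq[start:start+arm] is
    followed, after <= max_spacer bases, by its reverse complement."""
    hits = []
    for i in range(len(seq) - 2 * arm + 1):
        left = seq[i:i + arm]
        for spacer in range(0, max_spacer + 1):
            j = i + arm + spacer
            if j + arm <= len(seq) and seq[j:j + arm] == revcomp(left):
                hits.append((i, arm, spacer))
    return hits

# "GCATGC" is a perfect 6-bp palindrome: arm "GCA", spacer 0.
hits = find_inverted_repeats("TTTGCATGCTTT", arm=3, max_spacer=0)
```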

19. A state-and-transition simulation modeling approach for estimating the historical range of variability

Directory of Open Access Journals (Sweden)

Kori Blankenship

2015-04-01

Reference ecological conditions offer important context for land managers as they assess the condition of their landscapes and provide benchmarks for desired future conditions. State-and-transition simulation models (STSMs) are commonly used to estimate reference conditions that can be used to evaluate current ecosystem conditions and to guide land management decisions and activities. The LANDFIRE program created more than 1,000 STSMs and used them to assess departure from a mean reference value for ecosystems in the United States. While the mean provides a useful benchmark, land managers and researchers are often interested in the range of variability around the mean. This range, frequently referred to as the historical range of variability (HRV), offers model users improved understanding of ecosystem function, more information with which to evaluate ecosystem change and potentially greater flexibility in management options. We developed a method for using LANDFIRE STSMs to estimate the HRV around the mean reference condition for each model state in ecosystems by varying the fire probabilities. The approach is flexible and can be adapted for use in a variety of ecosystems. HRV analysis can be combined with other information to help guide complex land management decisions.
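
The idea of sweeping a disturbance probability to bracket the mean reference condition can be sketched with a toy state-and-transition chain. The states and transition numbers below are hypothetical, not LANDFIRE's:

```python
# Sketch: estimating a historical range of variability (HRV) with a toy
# state-and-transition simulation. Three hypothetical succession states;
# fire resets the system to "early". Sweeping the fire probability across
# runs yields a (min, max) range around the mean condition for each state.
import random

STATES = ["early", "mid", "late"]

def simulate(fire_prob, steps=5000, seed=0):
    """Fraction of time spent in each state under a given fire probability."""
    rng = random.Random(seed)
    state, counts = "early", {s: 0 for s in STATES}
    for _ in range(steps):
        if rng.random() < fire_prob:
            state = "early"                    # fire resets succession
        elif state == "early":
            state = "mid"                      # growth to mid-succession
        elif state == "mid" and rng.random() < 0.2:
            state = "late"                     # slower growth to late
        counts[state] += 1
    return {s: c / steps for s, c in counts.items()}

# HRV: min/max proportion of each state across a sweep of fire probabilities.
runs = [simulate(p) for p in (0.01, 0.02, 0.05, 0.1)]
hrv = {s: (min(r[s] for r in runs), max(r[s] for r in runs)) for s in STATES}
```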

20. Using Enthalpy as a Prognostic Variable in Atmospheric Modelling with Variable Composition

Science.gov (United States)

2016-04-14

Sela, personal communication, 2005). These terms are also routinely neglected in models. In models with a limited number of gaseous tracers, such as...so-called energy- exchange term (second term on the left- hand side) in Equation (5). The finite-difference schemes in existing atmospheric models have...equation for the sum of enthalpy and kinetic energy of horizontal motion is solved. This eliminates the energy- exchange term and automatically

1. Variability of LD50 Values from Rat Oral Acute Toxicity Studies: Implications for Alternative Model Development

Science.gov (United States)

Alternative models developed for estimating acute systemic toxicity are generally evaluated using in vivo LD50 values. However, in vivo acute systemic toxicity studies can produce variable results, even when conducted according to accepted test guidelines. This variability can ma...

2. Food Prices and Climate Extremes: A Model of Global Grain Price Variability with Storage

Science.gov (United States)

Otto, C.; Schewe, J.; Frieler, K.

2015-12-01

Extreme climate events such as droughts, floods, or heat waves affect agricultural production in major cropping regions and therefore impact the world market prices of staple crops. In the last decade, crop prices exhibited two very prominent price peaks in 2007-2008 and 2010-2011, threatening food security especially for poorer countries that are net importers of grain. There is evidence that these spikes in grain prices were at least partly triggered by actual supply shortages and the expectation of bad harvests. However, the response of the market to supply shocks is nonlinear and depends on complex and interlinked processes such as warehousing, speculation, and trade policies. Quantifying the contributions of such different factors to short-term price variability remains difficult, not least because many existing models ignore the role of storage which becomes important on short timescales. This in turn impedes the assessment of future climate change impacts on food prices. Here, we present a simple model of annual world grain prices that integrates grain stocks into the supply and demand functions. This firstly allows us to model explicitly the effect of storage strategies on world market price, and thus, for the first time, to quantify the potential contribution of trade policies to price variability in a simple global framework. Driven only by reported production and by long-term demand trends of the past ca. 40 years, the model reproduces observed variations in both the global storage volume and price of wheat. We demonstrate how recent price peaks can be reproduced by accounting for documented changes in storage strategies and trade policies, contrasting and complementing previous explanations based on different mechanisms such as speculation. Secondly, we show how the integration of storage allows long-term projections of grain price variability under climate change, based on existing crop yield scenarios.
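
A deliberately minimal sketch of the storage mechanism (all numbers and the demand curve are hypothetical and far simpler than the authors' model) shows how carry-over stocks damp a price spike after a poor harvest:

```python
# Sketch: annual price formation with storage. Price comes from an
# isoelastic demand curve applied to consumption; a simple rule stores
# surplus above the flat demand trend and releases it after shortfalls.
# Demand level, elasticity, and stock cap are hypothetical.
def prices(production, demand=1.0, elasticity=-0.1, max_stock=0.5):
    stock, out = 0.0, []
    for supply in production:
        available = supply + stock
        consumption = min(available, demand)             # meet demand if possible
        stock = min(available - consumption, max_stock)  # carry surplus over
        price = (consumption / demand) ** (1.0 / elasticity)
        out.append(price)
    return out

good_then_bad = [1.1, 1.1, 0.9]      # two surplus years, then a poor harvest
no_storage = prices(good_then_bad, max_stock=0.0)
with_storage = prices(good_then_bad)
# With storage, released stocks cover the year-3 shortfall and the price
# spike is smaller than in the no-storage run.
```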

3. Rethinking the Psychogenic Model of Complex Regional Pain Syndrome: Somatoform Disorders and Complex Regional Pain Syndrome

Science.gov (United States)

Hill, Renee J.; Chopra, Pradeep; Richardi, Toni

2012-01-01

Explaining the etiology of Complex Regional Pain Syndrome (CRPS) from the psychogenic model is exceedingly unsophisticated, because neurocognitive deficits, neuroanatomical abnormalities, and distortions in cognitive mapping are features of CRPS pathology. More importantly, many people who have developed CRPS have no history of mental illness. The psychogenic model offers comfort to physicians and mental health practitioners (MHPs) who have difficulty understanding pain maintained by newly uncovered neuroinflammatory processes. With increased education about CRPS through a biopsychosocial perspective, both physicians and MHPs can better diagnose, treat, and manage CRPS symptomatology. PMID:24223338

4. ASYMMETRIC PRICE TRANSMISSION MODELING: THE IMPORTANCE OF MODEL COMPLEXITY AND THE PERFORMANCE OF THE SELECTION CRITERIA

Directory of Open Access Journals (Sweden)

Henry de-Graft Acquah

2013-01-01

Information criteria provide an attractive basis for selecting the best model from a set of competing asymmetric price transmission models or theories. However, little is understood about the sensitivity of the model selection methods to model complexity. This study therefore fits competing asymmetric price transmission models that differ in complexity to simulated data and evaluates the ability of the model selection methods to recover the true model. The results of Monte Carlo experimentation suggest that, in general, BIC, CAIC and DIC were superior to AIC when the true data-generating process was the standard error correction model, whereas AIC was more successful when the true model was the complex error correction model. It is also shown that the model selection methods performed better in large samples for a complex asymmetric data-generating process than with a standard asymmetric data-generating process. Except for complex models, AIC's performance did not make substantial gains in recovery rates as sample size increased. The research findings demonstrate the influence of model complexity in asymmetric price transmission model comparison and selection.
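
The information criteria involved can be sketched on a toy nested-model pair. The asymmetric error correction models themselves are not reproduced here; a mean-only versus linear-trend regression stands in for them:

```python
# Sketch: AIC vs. BIC on two nested least-squares models.
# Model 1: y = c        (k = 1 parameter)
# Model 2: y = a + b*x  (k = 2 parameters)
# BIC penalizes the extra parameter by log(n) instead of AIC's 2, so for
# n > e^2 it is the stricter criterion, as in the Monte Carlo study above.
import math, random

def fit_mean(y):
    c = sum(y) / len(y)
    return sum((v - c) ** 2 for v in y)            # residual sum of squares

def fit_line(x, y):
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    b = (sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
         / sum((xi - mx) ** 2 for xi in x))
    a = my - b * mx
    return sum((yi - a - b * xi) ** 2 for xi, yi in zip(x, y))

def aic(rss, n, k): return n * math.log(rss / n) + 2 * k
def bic(rss, n, k): return n * math.log(rss / n) + k * math.log(n)

rng = random.Random(1)
n = 200
x = [i / n for i in range(n)]
y = [1.0 + rng.gauss(0, 0.5) for _ in x]     # true model is the simple one
rss1, rss2 = fit_mean(y), fit_line(x, y)
# rss2 <= rss1 always; the criteria decide whether the fit gain is worth
# the extra parameter, with BIC applying the heavier penalty here (n = 200).
```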

5. Higher genus correlators for the complex matrix model

International Nuclear Information System (INIS)

Ambjorn, J.; Kristhansen, C.F.; Makeenko, Y.M.

1992-01-01

In this paper, the authors describe an iterative scheme which allows one to calculate any multi-loop correlator for the complex matrix model to any genus using only the first in the chain of loop equations. The method works for a completely general potential and the results contain no explicit reference to the couplings. The genus g contribution to the m-loop correlator depends on a finite number of parameters, namely at most 4g - 2 + m. The authors find the generating functional explicitly up to genus three. They also show that the model is equivalent to an external field problem for the complex matrix model with a logarithmic potential.

6. Reduced Complexity Volterra Models for Nonlinear System Identification

Directory of Open Access Journals (Sweden)

Hacıoğlu Rıfat

2001-01-01

A broad class of nonlinear systems and filters can be modeled by the Volterra series representation. However, its practical use in nonlinear system identification is sometimes limited due to the large number of parameters associated with the Volterra filter's structure. The parametric complexity also complicates design procedures based upon such a model. This limitation for system identification is addressed in this paper using a Fixed Pole Expansion Technique (FPET) within the Volterra model structure. The FPET approach employs orthonormal basis functions derived from fixed (real or complex) pole locations to expand the Volterra kernels and reduce the number of estimated parameters. The ability of FPET to considerably reduce the number of estimated parameters is demonstrated by a digital satellite channel example in which we use the proposed method to identify the channel dynamics. Furthermore, a gradient-descent procedure that adaptively selects the pole locations in the FPET structure is developed in the paper.
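
The parametric complexity the abstract refers to is easy to quantify: with symmetric kernels, a memory-N, order-P truncated Volterra filter has sum over p of C(N+p-1, p) coefficients. A small sketch of this count (generic combinatorics, not the FPET algorithm itself):

```python
# Sketch: parameter count of a truncated Volterra filter with symmetric
# kernels. For memory N and order P the count is sum_{p=1..P} C(N+p-1, p),
# which grows like N^P; basis expansions such as FPET replace the N-tap
# memory with a few pole-based basis functions to shrink this count.
from math import comb

def volterra_param_count(memory, order=2):
    return sum(comb(memory + p - 1, p) for p in range(1, order + 1))

# A memory-10, order-2 filter already needs 10 linear + 55 quadratic taps:
assert volterra_param_count(10) == 10 + 55
```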

7. Deciphering the complexity of acute inflammation using mathematical models.

Science.gov (United States)

Vodovotz, Yoram

2006-01-01

Various stresses elicit an acute, complex inflammatory response, leading to healing but sometimes also to organ dysfunction and death. We constructed both equation-based models (EBM) and agent-based models (ABM) of various degrees of granularity--which encompass the dynamics of relevant cells, cytokines, and the resulting global tissue dysfunction--in order to begin to unravel these inflammatory interactions. The EBMs describe and predict various features of septic shock and trauma/hemorrhage (including the response to anthrax, preconditioning phenomena, and irreversible hemorrhage) and were used to simulate anti-inflammatory strategies in clinical trials. The ABMs that describe the interrelationship between inflammation and wound healing yielded insights into intestinal healing in necrotizing enterocolitis, vocal fold healing during phonotrauma, and skin healing in the setting of diabetic foot ulcers. Modeling may help in understanding the complex interactions among the components of inflammation and response to stress, and therefore aid in the development of novel therapies and diagnostics.

8. Nonlinear model of epidemic spreading in a complex social network.

Science.gov (United States)

Kosiński, Robert A; Grabowski, A

2007-10-01

The epidemic spreading in a human society is a complex process, which can be described on the basis of a nonlinear mathematical model. In such an approach the complex and hierarchical structure of the social network (which has implications for the spreading of pathogens and can be treated as a complex network) can be taken into account. In our model each individual is in one of the permitted states: susceptible, infected, infective, unsusceptible, or dead. This refers to the SEIR model used in epidemiology. The state of an individual changes in time, depending on the previous state and the interactions with other individuals. The description of the interpersonal contacts is based on experimental observations of the social relations in the community. It includes spatial localization of the individuals and the hierarchical structure of interpersonal interactions. Numerical simulations were performed for different types of epidemics, giving the progress of a spreading process and typical relationships (e.g. the range of the epidemic in time, the epidemic curve). The spreading process has a complex and spatially chaotic character. The time dependence of the number of infective individuals shows the nonlinear character of the spreading process. We investigate the influence of preventive vaccinations on the spreading process. In particular, for a critical value of preventively vaccinated individuals the percolation threshold is observed and the epidemic is suppressed.
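
The vaccination effect can be caricatured with a deterministic compartmental model. The paper's spatial, hierarchical contact network is not reproduced here; this is a plain SEIR sketch with a vaccinated fraction removed from the susceptible pool, and all rates are hypothetical:

```python
# Sketch: deterministic SEIR with preventive vaccination (forward Euler).
# Vaccination moves a fraction of susceptibles out of the transmission
# chain; past roughly 1 - 1/R0 the outbreak no longer takes off.
def seir_final_size(vaccinated, beta=0.5, sigma=0.2, gamma=0.1,
                    dt=0.1, steps=3000):
    """Fraction ever infected, starting from a small seed of infectives."""
    s, e, i, r = 1.0 - vaccinated, 0.0, 1e-3, 0.0
    for _ in range(steps):
        new_exp = beta * s * i * dt          # S -> E (transmission)
        s -= new_exp
        e += new_exp - sigma * e * dt        # E -> I (incubation)
        i += sigma * e * dt - gamma * i * dt # I -> R (recovery/removal)
        r += gamma * i * dt
    return r

# R0 = beta/gamma = 5, so the herd-immunity threshold is about 0.8:
unvaccinated = seir_final_size(0.0)   # large outbreak
well_covered = seir_final_size(0.9)   # above threshold, epidemic suppressed
```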

9. Elastic Network Model of a Nuclear Transport Complex

Science.gov (United States)

Ryan, Patrick; Liu, Wing K.; Lee, Dockjin; Seo, Sangjae; Kim, Young-Jin; Kim, Moon K.

2010-05-01

The structure of Kap95p was obtained from the Protein Data Bank (www.pdb.org) and analyzed. RanGTP plays an important role in both nuclear protein import and export cycles. In the nucleus, RanGTP releases macromolecular cargoes from importins and conversely facilitates cargo binding to exportins. Although the crystal structure of the nuclear import complex formed by importin Kap95p and RanGTP was recently identified, its molecular mechanism still remains unclear. To understand the relationship between structure and function of a nuclear transport complex, a structure-based mechanical model of the Kap95p:RanGTP complex is introduced. In this model, a protein structure is simply modeled as an elastic network in which a set of coarse-grained point masses are connected by linear springs representing biochemical interactions at the atomic level. Harmonic normal mode analysis (NMA) and anharmonic elastic network interpolation (ENI) are performed to predict the modes of vibrations and a feasible pathway between locked and unlocked conformations of Kap95p, respectively. Simulation results imply that the binding of RanGTP to Kap95p induces the release of the cargo in the nucleus as well as prevents any new cargo from attaching to the Kap95p:RanGTP complex.

10. Entropy, complexity, and Markov diagrams for random walk cancer models.

Science.gov (United States)

Newton, Paul K; Mason, Jeremy; Hurt, Brian; Bethel, Kelly; Bazhenova, Lyudmila; Nieva, Jorge; Kuhn, Peter

2014-12-19

The notion of entropy is used to compare the complexity associated with 12 common cancers based on metastatic tumor distribution autopsy data. We characterize power-law distributions, entropy, and Kullback-Leibler divergence associated with each primary cancer as compared with data for all cancer types aggregated. We then correlate entropy values with other measures of complexity associated with Markov chain dynamical systems models of progression. The Markov transition matrix associated with each cancer is associated with a directed graph model where nodes are anatomical locations where a metastatic tumor could develop, and edge weightings are transition probabilities of progression from site to site. The steady-state distribution corresponds to the autopsy data distribution. Entropy correlates well with the overall complexity of the reduced directed graph structure for each cancer and with a measure of systemic interconnectedness of the graph, called graph conductance. The models suggest that grouping cancers according to their entropy values, with skin, breast, kidney, and lung cancers being prototypical high entropy cancers, stomach, uterine, pancreatic and ovarian being mid-level entropy cancers, and colorectal, cervical, bladder, and prostate cancers being prototypical low entropy cancers, provides a potentially useful framework for viewing metastatic cancer in terms of predictability, complexity, and metastatic potential.
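
The steady-state-plus-entropy computation can be sketched with a toy chain. The three "sites" and transition probabilities below are hypothetical, not the autopsy-derived values:

```python
# Sketch: entropy of the steady-state distribution of a toy metastasis
# Markov chain over three hypothetical anatomical sites. The steady state
# is found by power iteration on the row-stochastic transition matrix.
import math

P = [[0.6, 0.3, 0.1],
     [0.2, 0.5, 0.3],
     [0.1, 0.2, 0.7]]

def steady_state(P, iters=500):
    pi = [1.0 / len(P)] * len(P)
    for _ in range(iters):
        pi = [sum(pi[i] * P[i][j] for i in range(len(P)))
              for j in range(len(P))]
    return pi

def entropy(dist):
    """Shannon entropy in bits."""
    return -sum(p * math.log2(p) for p in dist if p > 0)

pi = steady_state(P)
h = entropy(pi)
# A near-uniform steady state gives entropy close to log2(3) ~ 1.585 bits;
# a chain concentrated on one site would give entropy near 0.
```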

11. BlenX-based compositional modeling of complex reaction mechanisms

Directory of Open Access Journals (Sweden)

Judit Zámborszky

2010-02-01

Molecular interactions are wired in a fascinating way, resulting in the complex behavior of biological systems. Theoretical modeling provides a useful framework for understanding the dynamics and function of such networks. The complexity of biological networks calls for conceptual tools that manage the combinatorial explosion of the set of possible interactions. A suitable conceptual tool for attacking complexity is compositionality, already successfully used in the process algebra field to model computer systems. We rely on the BlenX programming language, which originated from the beta-binders process calculus, to specify and simulate high-level descriptions of biological circuits. The Gillespie stochastic framework of BlenX requires the decomposition of phenomenological functions into basic elementary reactions. Systematic unpacking of complex reaction mechanisms into BlenX templates is shown in this study. The estimation/derivation of missing parameters and the challenges emerging from compositional model building in stochastic process algebras are discussed. A biological example, the circadian clock, is presented as a case study of BlenX compositionality.
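
BlenX's stochastic execution is based on Gillespie's algorithm. A minimal sketch of that algorithm (not BlenX syntax) for the simplest possible system, a single irreversible unimolecular reaction A -> B:

```python
# Sketch: Gillespie's stochastic simulation algorithm for A -> B with
# rate constant k. With one reaction channel, each step draws an
# exponential waiting time from the propensity k*n_A and fires the
# reaction; reaction choice by propensity is trivial here.
import random

def gillespie_decay(n_a=100, k=1.0, seed=42):
    rng = random.Random(seed)
    t, n_b, times = 0.0, 0, []
    while n_a > 0:
        propensity = k * n_a              # a(x) for A -> B
        t += rng.expovariate(propensity)  # exponential time to next firing
        n_a, n_b = n_a - 1, n_b + 1       # fire the reaction
        times.append(t)
    return times, n_b

times, n_b = gillespie_decay()
# All 100 molecules eventually convert; firing times are non-decreasing.
```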

12. Dromion-like structures and stability analysis in the variable coefficients complex Ginzburg–Landau equation

International Nuclear Information System (INIS)

Wong, Pring; Pang, Li-Hui; Huang, Long-Gang; Li, Yan-Qing; Lei, Ming; Liu, Wen-Jun

2015-01-01

The study of the complex Ginzburg–Landau equation, which can describe the fiber laser system, is of significance for ultra-fast lasers. In this paper, dromion-like structures for the complex Ginzburg–Landau equation are considered due to their abundant nonlinear dynamics. Via the modified Hirota method and a simplified assumption, the analytic dromion-like solution is obtained. The partial asymmetry of the structure is particularly discussed, which arises from the asymmetry of the nonlinear and dispersion terms. Furthermore, the stability of dromion-like structures is analyzed. An oscillation structure emerges to exhibit strong interference when the dispersion loss is perturbed. Through appropriate modulation of the modified exponent parameter, the oscillation structure is transformed into two dromion-like structures. It indicates that the dromion-like structure is unstable, and that the coherence intensity is affected by the modified exponent parameter. Results in this paper may be useful in accounting for some nonlinear phenomena in fiber laser systems, and in understanding the essential role of the modified Hirota method.

13. Multiscale modeling of complex materials phenomenological, theoretical and computational aspects

CERN Document Server

Trovalusci, Patrizia

2014-01-01

The papers in this volume deal with materials science, theoretical mechanics and experimental and computational techniques at multiple scales, providing a sound base and a framework for many applications which are hitherto treated in a phenomenological sense. The basic principles of multiscale modeling strategies are formulated for modern complex multiphase materials subjected to various types of mechanical and thermal loadings and environmental effects. The focus is on problems where mechanics is highly coupled with other concurrent physical phenomena. Attention is also focused on the historical origins of multiscale modeling and the foundations of continuum mechanics currently adopted to model non-classical continua with substructure, for which internal length scales play a crucial role.

14. Complex UV/X-ray variability of 1H 0707-495

Science.gov (United States)

Pawar, P. K.; Dewangan, G. C.; Papadakis, I. E.; Patil, M. K.; Pal, Main; Kembhavi, A. K.

2017-12-01

We study the relationship between the UV and X-ray variability of the narrow-line Seyfert 1 galaxy 1H 0707-495. Using a year-long Swift monitoring and four long XMM-Newton observations, we perform cross-correlation analyses of the UV and X-ray light curves, on both long and short time-scales. We also perform time-resolved X-ray spectroscopy on 1-2 ks scale, and study the relationship between the UV emission and the X-ray spectral components - soft X-ray excess and a power law. We find that the UV and X-ray variations anticorrelate on short, and possibly on long time-scales as well. Our results rule out reprocessing as the dominant mechanism for the UV variability, as well as the inward propagating fluctuations in the accretion rate. Absence of a positive correlation between the photon index and the UV flux suggests that the observed UV emission is unlikely to be the seed photons for the thermal Comptonization. We find a strong correlation between the continuum flux and the soft-excess temperature which implies that the soft excess is most likely the reprocessed X-ray emission in the inner accretion disc. Strong X-ray heating of the innermost regions in the disc, due to gravitational light bending, appears to be an important effect in 1H 0707-495, giving rise to a significant fraction of the soft excess as reprocessed thermal emission. We also find indications for a non-static, dynamic X-ray corona, where either the size or height (or both) vary with time.

15. Modelling of complex heat transfer systems by the coupling method

Energy Technology Data Exchange (ETDEWEB)

Bacot, P.; Bonfils, R.; Neveu, A.; Ribuot, J. (Centre d' Energetique de l' Ecole des Mines de Paris, 75 (France))

1985-04-01

The coupling method proposed here is designed to reduce the size of the matrices which appear in the modelling of heat transfer systems. It consists in isolating the elements that can be modelled separately and, among the input variables of a component, identifying those which will couple it to another component. By grouping these types of variable, one can thus identify a so-called coupling matrix of reduced size, and relate it to the overall system. This matrix allows the calculation of the coupling temperatures as a function of external stresses and of the state of the overall system at the previous instant. The internal temperatures of the components are then determined from the previous ones. Two examples of applications are presented, one concerning a dwelling unit, and the second a solar water heater.
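The elimination of internal variables behind a reduced coupling matrix can be sketched with a linear two-block system: the Schur complement of the internal block plays the role of the coupling matrix. The numbers below are arbitrary illustrative conductances and sources, not values from the paper:

```python
import numpy as np

# Full thermal system: internal nodes (i) of a component plus the
# coupling nodes (c) that link it to the rest of the building.
A_ii = np.array([[4.0, -1.0], [-1.0, 3.0]])   # internal-internal
A_ic = np.array([[-1.0, 0.0], [0.0, -1.0]])   # internal-coupling
A_ci = A_ic.T                                  # coupling-internal
A_cc = np.array([[5.0, -1.0], [-1.0, 5.0]])   # coupling-coupling
b_i = np.array([1.0, 2.0])                     # internal sources
b_c = np.array([0.5, 0.0])                     # external stresses

# Eliminate the internal unknowns: the Schur complement is the
# reduced "coupling matrix".
S = A_cc - A_ci @ np.linalg.solve(A_ii, A_ic)
g = b_c - A_ci @ np.linalg.solve(A_ii, b_i)
T_c = np.linalg.solve(S, g)                    # coupling temperatures

# Internal temperatures recovered afterwards from the coupling ones.
T_i = np.linalg.solve(A_ii, b_i - A_ic @ T_c)

# Check against solving the full, unreduced system.
A = np.block([[A_ii, A_ic], [A_ci, A_cc]])
T_full = np.linalg.solve(A, np.concatenate([b_i, b_c]))
print(np.allclose(np.concatenate([T_i, T_c]), T_full))  # True
```

The reduced solve only involves the coupling unknowns, which is the point of the method: the size of the system handled at the global level shrinks from all nodes to the coupling nodes.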

16. Discrete model of opinion changes using knowledge and emotions as control variables.

Science.gov (United States)

Sobkowicz, Pawel

2012-01-01

We present a new model of opinion changes dependent on the agents' emotional state and their information about the issue in question. Our goal is to construct a simple, yet nontrivial and flexible representation of individual attitude dynamics for agent-based simulations that could be used in a variety of social environments. The model is a discrete version of the cusp catastrophe model of opinion dynamics in which information is treated as the normal factor while emotional arousal (agitation level determining agent receptiveness and rationality) is treated as the splitting factor. Both variables determine the resulting agent opinion, which itself can be in favor of the studied position, against it, or neutral. Thanks to the flexibility of implementing communication between the agents, the model is potentially applicable in a wide range of situations. As an example of the model application, we study the dynamics of a set of agents communicating among themselves via messages. In the example, we chose the simplest, fully connected communication topology, to focus on the effects of the individual opinion dynamics, and to look for stable final distributions of agents with different emotions, information and opinions. Even for such a simplified system, the model shows complex behavior, including phase transitions due to symmetry breaking by external propaganda.

17. Discrete model of opinion changes using knowledge and emotions as control variables.

Directory of Open Access Journals (Sweden)

Pawel Sobkowicz

Full Text Available We present a new model of opinion changes dependent on the agents' emotional state and their information about the issue in question. Our goal is to construct a simple, yet nontrivial and flexible representation of individual attitude dynamics for agent-based simulations that could be used in a variety of social environments. The model is a discrete version of the cusp catastrophe model of opinion dynamics in which information is treated as the normal factor while emotional arousal (agitation level determining agent receptiveness and rationality) is treated as the splitting factor. Both variables determine the resulting agent opinion, which itself can be in favor of the studied position, against it, or neutral. Thanks to the flexibility of implementing communication between the agents, the model is potentially applicable in a wide range of situations. As an example of the model application, we study the dynamics of a set of agents communicating among themselves via messages. In the example, we chose the simplest, fully connected communication topology, to focus on the effects of the individual opinion dynamics, and to look for stable final distributions of agents with different emotions, information and opinions. Even for such a simplified system, the model shows complex behavior, including phase transitions due to symmetry breaking by external propaganda.

18. Discrete Model of Opinion Changes Using Knowledge and Emotions as Control Variables

Science.gov (United States)

Sobkowicz, Pawel

2012-01-01

We present a new model of opinion changes dependent on the agents' emotional state and their information about the issue in question. Our goal is to construct a simple, yet nontrivial and flexible representation of individual attitude dynamics for agent-based simulations that could be used in a variety of social environments. The model is a discrete version of the cusp catastrophe model of opinion dynamics in which information is treated as the normal factor while emotional arousal (agitation level determining agent receptiveness and rationality) is treated as the splitting factor. Both variables determine the resulting agent opinion, which itself can be in favor of the studied position, against it, or neutral. Thanks to the flexibility of implementing communication between the agents, the model is potentially applicable in a wide range of situations. As an example of the model application, we study the dynamics of a set of agents communicating among themselves via messages. In the example, we chose the simplest, fully connected communication topology, to focus on the effects of the individual opinion dynamics, and to look for stable final distributions of agents with different emotions, information and opinions. Even for such a simplified system, the model shows complex behavior, including phase transitions due to symmetry breaking by external propaganda. PMID:22984516
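The normal-factor/splitting-factor mechanics described above can be caricatured in a few lines. The update rule below is an invented illustration of a cusp-like discrete dynamic (thresholds and the hysteresis condition are made up), not the paper's exact specification:

```python
def update_opinion(opinion, information, arousal, threshold=0.5):
    """Toy discrete cusp-like opinion update.  information: normal
    factor in [-1, 1]; arousal: splitting factor in [0, 1];
    opinion in {-1, 0, 1}.  Illustrative only."""
    if arousal < threshold:
        # Calm agents respond gradually: weak information -> neutral.
        if abs(information) < 0.3:
            return 0
        return 1 if information > 0 else -1
    # Agitated agents polarize: neutrality is unstable, and they keep
    # their current extreme unless information strongly opposes it.
    if opinion != 0 and information * opinion > -0.8:
        return opinion            # hysteresis near the cusp
    return 1 if information >= 0 else -1

print(update_opinion(0, 0.1, 0.2))   # calm + weak info -> neutral: 0
print(update_opinion(1, -0.5, 0.9))  # agitated: keeps extreme: 1
print(update_opinion(1, -0.9, 0.9))  # strong opposing info flips: -1
```

The splitting-factor behavior is visible in the last two calls: for the same moderately opposing information, a calm agent would switch while the agitated agent clings to its extreme until the opposition is overwhelming.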

19. Building Better Ecological Machines: Complexity Theory and Alternative Economic Models

Directory of Open Access Journals (Sweden)

Jess Bier

2016-12-01

Full Text Available Computer models of the economy are regularly used to predict economic phenomena and set financial policy. However, the conventional macroeconomic models are currently being reimagined after they failed to foresee the current economic crisis, the outlines of which began to be understood only in 2007-2008. In this article we analyze the most prominent product of this reimagining: Agent-Based models (ABMs). ABMs are an influential alternative to standard economic models, and they are one focus of complexity theory, a discipline that is a more open successor to the conventional chaos and fractal modeling of the 1990s. The modelers who create ABMs claim that their models depict markets as ecologies, and that they are more responsive than conventional models that depict markets as machines. We challenge this presentation, arguing instead that recent modeling efforts amount to the creation of models as ecological machines. Our paper aims to contribute to an understanding of the organizing metaphors of macroeconomic models, which we argue is relevant conceptually and politically, e.g., when models are used for regulatory purposes.

20. Modelling and simulation of gas explosions in complex geometries

Energy Technology Data Exchange (ETDEWEB)

Saeter, Olav

1998-12-31

This thesis presents a three-dimensional Computational Fluid Dynamics (CFD) code (EXSIM94) for modelling and simulation of gas explosions in complex geometries. It gives the theory and validates the following sub-models: (1) the flow resistance and turbulence generation model for densely packed regions, (2) the flow resistance and turbulence generation model for single objects, and (3) the quasi-laminar combustion model. It is found that a simple model for flow resistance and turbulence generation in densely packed beds is able to reproduce the medium and large scale MERGE explosion experiments of the Commission of European Communities (CEC) within a band of factor 2. The model for single objects is found to predict explosion pressures in better agreement with the experiments when a modified k-{epsilon} model is used. This modification also gives slightly improved grid independence for realistic gas explosion geometries. One laminar model is found unsuitable for gas explosion modelling because of strong grid dependence. Another laminar model is found to be relatively grid independent and to work well in harmony with the turbulent combustion model. The code is validated against 40 realistic gas explosion experiments. It is relatively grid independent in predicting explosion pressure in different offshore geometries. It can predict the influence of ignition point location, vent arrangements, different geometries, scaling effects and gas reactivity. The validation study concludes with statistical and uncertainty analyses of the code performance. 98 refs., 96 figs, 12 tabs.

1. Structure identification of an uncertain network coupled with complex-variable chaotic systems via adaptive impulsive control

International Nuclear Information System (INIS)

Liu Dan-Feng; Wu Zhao-Yan; Ye Qing-Ling

2014-01-01

In this paper, structure identification of an uncertain network coupled with complex-variable chaotic systems is investigated. Both the topological structure and the system parameters can be unknown and need to be identified. Based on impulsive stability theory and the Lyapunov function method, an impulsive control scheme combined with an adaptive strategy is adopted to design effective and universal network estimators. The restriction on the impulsive interval is relaxed by adopting an adaptive strategy. Further, the proposed method can monitor the online switching topology effectively. Several numerical simulations are provided to illustrate the effectiveness of the theoretical results. (general)

2. Surface complexation modeling of Cu(II) adsorption on mixtures of hydrous ferric oxide and kaolinite

Directory of Open Access Journals (Sweden)

Schaller Melinda S

2008-09-01

Full Text Available Abstract Background The application of surface complexation models (SCMs) to natural sediments and soils is hindered by a lack of consistent models and data for large suites of metals and minerals of interest. Furthermore, the surface complexation approach has mostly been developed and tested for single solid systems. Few studies have extended the SCM approach to systems containing multiple solids. Results Cu adsorption was measured on pure hydrous ferric oxide (HFO), pure kaolinite (from two sources) and in systems containing mixtures of HFO and kaolinite over a wide range of pH, ionic strength, sorbate/sorbent ratios and, for the mixed solid systems, using a range of kaolinite/HFO ratios. Cu adsorption data measured for the HFO and kaolinite systems were used to derive diffuse layer surface complexation models (DLMs) describing Cu adsorption. Cu adsorption on HFO is reasonably well described using a 1-site or 2-site DLM. Adsorption of Cu on kaolinite could be described using a simple 1-site DLM with formation of a monodentate Cu complex on a variable charge surface site. However, for consistency with models derived for weaker sorbing cations, a 2-site DLM with a variable charge and a permanent charge site was also developed. Conclusion Component additivity predictions of speciation in mixed mineral systems based on DLM parameters derived for the pure mineral systems were in good agreement with measured data. Discrepancies between the model predictions and measured data were similar to those observed for the calibrated pure mineral systems. The results suggest that quantifying specific interactions between HFO and kaolinite in speciation models may not be necessary. However, before the component additivity approach can be applied to natural sediments and soils, the effects of aging must be further studied and methods must be developed to estimate reactive surface areas of solid constituents in natural samples.
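The mass-action core of a monodentate surface complexation reaction, >SOH + Cu²⁺ ⇌ >SOCu⁺ + H⁺, can be sketched as a one-unknown equilibrium solved by bisection. The electrostatic correction terms of a full DLM are omitted for brevity, and the equilibrium constant and concentrations below are hypothetical, not the fitted parameters from the study:

```python
def cu_bound(K, sites_total, cu_total, pH):
    """Solve K*[>SOH]*[Cu2+] = [>SOCu+]*[H+] for the bound-Cu
    concentration [>SOCu+] by bisection (mol/L units; illustrative
    constants, no electrostatic term)."""
    h = 10.0 ** (-pH)
    def residual(x):                    # x = [>SOCu+]
        free_sites = sites_total - x
        free_cu = cu_total - x
        return K * free_sites * free_cu - x * h
    lo, hi = 0.0, min(sites_total, cu_total)
    for _ in range(100):                # residual is monotone in x
        mid = 0.5 * (lo + hi)
        if residual(mid) > 0:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

# Adsorption edge: bound Cu increases with pH, as adsorption data
# for cations on oxide surfaces generally show.
low = cu_bound(K=1e-6, sites_total=1e-4, cu_total=1e-5, pH=4.0)
high = cu_bound(K=1e-6, sites_total=1e-4, cu_total=1e-5, pH=7.0)
print(low < high)  # True
```

Because the reaction releases a proton, raising the pH pulls the equilibrium toward the surface complex, which is the qualitative behavior a DLM fit has to reproduce.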

3. Bridging Mechanistic and Phenomenological Models of Complex Biological Systems.

Science.gov (United States)

Transtrum, Mark K; Qiu, Peng

2016-05-01

The inherent complexity of biological systems gives rise to complicated mechanistic models with a large number of parameters. On the other hand, the collective behavior of these systems can often be characterized by a relatively small number of phenomenological parameters. We use the Manifold Boundary Approximation Method (MBAM) as a tool for deriving simple phenomenological models from complicated mechanistic models. The resulting models are not black boxes, but remain expressed in terms of the microscopic parameters. In this way, we explicitly connect the macroscopic and microscopic descriptions, characterize the equivalence class of distinct systems exhibiting the same range of collective behavior, and identify the combinations of components that function as tunable control knobs for the behavior. We demonstrate the procedure for adaptation behavior exhibited by the EGFR pathway. From a 48 parameter mechanistic model, the system can be effectively described by a single adaptation parameter τ characterizing the ratio of time scales for the initial response and recovery time of the system which can in turn be expressed as a combination of microscopic reaction rates, Michaelis-Menten constants, and biochemical concentrations. The situation is not unlike modeling in physics in which microscopically complex processes can often be renormalized into simple phenomenological models with only a few effective parameters. The proposed method additionally provides a mechanistic explanation for non-universal features of the behavior.

4. Low-Complexity Variable Frame Rate Analysis for Speech Recognition and Voice Activity Detection

DEFF Research Database (Denmark)

Tan, Zheng-Hua; Lindberg, Børge

2010-01-01

Frame based speech processing inherently assumes a stationary behavior of speech signals in a short period of time. Over a long time, the characteristics of the signals can change significantly and frames are not equally important, underscoring the need for frame selection. In this paper, we present a low-complexity and effective frame selection approach based on a posteriori signal-to-noise ratio (SNR) weighted energy distance: The use of an energy distance, instead of e.g. a standard cepstral distance, makes the approach computationally efficient and enables fine granularity search, and the use of a posteriori SNR weighting emphasizes the reliable regions in noisy speech signals. It is experimentally found that the approach is able to assign a higher frame rate to fast changing events such as consonants, a lower frame rate to steady regions like vowels and no frames to silence, even...
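A minimal sketch of SNR-weighted energy-distance frame selection, assuming per-frame energies and a noise-energy estimate are available. The accumulation rule and threshold below are invented for illustration; the published method differs in details such as normalization:

```python
import math

def select_frames(energies, noise_energy, threshold=2.0):
    """Variable frame rate selection sketch: accumulate an a posteriori
    SNR-weighted log-energy distance between consecutive frames and
    keep a frame once the accumulated distance crosses a threshold."""
    selected, acc, prev = [], 0.0, None
    for idx, e in enumerate(energies):
        log_e = math.log(e + 1e-12)
        if prev is not None:
            snr = max(e / noise_energy, 1.0)          # a posteriori SNR
            acc += math.log(snr) * abs(log_e - prev)  # weighted distance
        prev = log_e
        if idx == 0 or acc >= threshold:
            selected.append(idx)
            acc = 0.0
    return selected

# Fast-changing, high-energy frames get selected; steady or
# near-silent stretches contribute (almost) nothing and are skipped.
energies = [1.0, 1.1, 50.0, 0.01, 0.01, 0.01, 40.0]
print(select_frames(energies, noise_energy=0.5))  # [0, 2, 6]
```

The SNR weighting is what suppresses the silent frames here: their energy distance from the preceding frame may be large, but their a posteriori SNR weight is clipped to log(1) = 0, so they never accumulate toward the threshold.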

5. Mathematical modelling of complex contagion on clustered networks

Science.gov (United States)

O'Sullivan, David J.; O'Keeffe, Gary; Fennell, Peter; Gleeson, James

2015-09-01

The spreading of behavior, such as the adoption of a new innovation, is influenced by the structure of social networks that interconnect the population. In the experiments of Centola (Science, 2010), adoption of new behavior was shown to spread further and faster across clustered-lattice networks than across corresponding random networks. This implies that the “complex contagion” effects of social reinforcement are important in such diffusion, in contrast to “simple” contagion models of disease-spread which predict that epidemics would grow more efficiently on random networks than on clustered networks. To accurately model complex contagion on clustered networks remains a challenge because the usual assumptions (e.g. of mean-field theory) regarding tree-like networks are invalidated by the presence of triangles in the network; the triangles are, however, crucial to the social reinforcement mechanism, which posits an increased probability of a person adopting behavior that has been adopted by two or more neighbors. In this paper we modify the analytical approach that was introduced by Hebert-Dufresne et al. (Phys. Rev. E, 2010), to study disease-spread on clustered networks. We show how the approximation method can be adapted to a complex contagion model, and confirm the accuracy of the method with numerical simulations. The analytical results of the model enable us to quantify the level of social reinforcement that is required to observe—as in Centola’s experiments—faster diffusion on clustered topologies than on random networks.

6. Mathematical modelling of complex contagion on clustered networks

Directory of Open Access Journals (Sweden)

David J. P. O'Sullivan

2015-09-01

Full Text Available The spreading of behavior, such as the adoption of a new innovation, is influenced by the structure of social networks that interconnect the population. In the experiments of Centola (Science, 2010), adoption of new behavior was shown to spread further and faster across clustered-lattice networks than across corresponding random networks. This implies that the complex contagion effects of social reinforcement are important in such diffusion, in contrast to simple contagion models of disease-spread which predict that epidemics would grow more efficiently on random networks than on clustered networks. To accurately model complex contagion on clustered networks remains a challenge because the usual assumptions (e.g. of mean-field theory) regarding tree-like networks are invalidated by the presence of triangles in the network; the triangles are, however, crucial to the social reinforcement mechanism, which posits an increased probability of a person adopting behavior that has been adopted by two or more neighbors. In this paper we modify the analytical approach that was introduced by Hebert-Dufresne et al. (Phys. Rev. E, 2010), to study disease-spread on clustered networks. We show how the approximation method can be adapted to a complex contagion model, and confirm the accuracy of the method with numerical simulations. The analytical results of the model enable us to quantify the level of social reinforcement that is required to observe—as in Centola’s experiments—faster diffusion on clustered topologies than on random networks.

7. Randomization-Based Inference about Latent Variables from Complex Samples: The Case of Two-Stage Sampling

Science.gov (United States)

Li, Tiandong

2012-01-01

In large-scale assessments, such as the National Assessment of Educational Progress (NAEP), plausible values based on Multiple Imputations (MI) have been used to estimate population characteristics for latent constructs under complex sample designs. Mislevy (1991) derived a closed-form analytic solution for a fixed-effect model in creating…

8. A Sensitivity Analysis Method to Study the Behavior of Complex Process-based Models

Science.gov (United States)

Brugnach, M.; Neilson, R.; Bolte, J.

2001-12-01

The use of process-based models as a tool for scientific inquiry is becoming increasingly relevant in ecosystem studies. Process-based models are artificial constructs that simulate the system by mechanistically mimicking the functioning of its component processes. Structurally, a process-based model can be characterized, in terms of its processes and the relationships established among them. Each process comprises a set of functional relationships among several model components (e.g., state variables, parameters and input data). While not encoded explicitly, the dynamics of the model emerge from this set of components and interactions organized in terms of processes. It is the task of the modeler to guarantee that the dynamics generated are appropriate and semantically equivalent to the phenomena being modeled. Despite the availability of techniques to characterize and understand model behavior, they do not suffice to completely and easily understand how a complex process-based model operates. For example, sensitivity analysis studies model behavior by determining the rate of change in model output as parameters or input data are varied. One of the problems with this approach is that it considers the model as a "black box", and it focuses on explaining model behavior by analyzing the relationship input-output. Since, these models have a high degree of non-linearity, understanding how the input affects an output can be an extremely difficult task. Operationally, the application of this technique may constitute a challenging task because complex process-based models are generally characterized by a large parameter space. In order to overcome some of these difficulties, we propose a method of sensitivity analysis to be applicable to complex process-based models. This method focuses sensitivity analysis at the process level, and it aims to determine how sensitive the model output is to variations in the processes. Once the processes that exert the major influence in
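The idea of process-level (rather than parameter-level) sensitivity can be sketched by scaling an entire process term and measuring the relative change in model output. The two-process toy model below (logistic growth plus decay) and the gain values are invented for illustration:

```python
def run_model(process_gains):
    """Tiny stand-in for a process-based model: two interacting
    'processes' (growth and decay) iterated over time.  Purely
    illustrative -- a real ecosystem model is far richer."""
    g, d = process_gains["growth"], process_gains["decay"]
    biomass = 1.0
    for _ in range(20):
        biomass += g * biomass * (1 - biomass / 10.0) - d * biomass
    return biomass

def process_sensitivity(base, process, delta=0.01):
    """Relative change in output per relative change in one process
    gain: sensitivity measured at the process level, not per-parameter."""
    perturbed = dict(base)
    perturbed[process] = base[process] * (1 + delta)
    y0, y1 = run_model(base), run_model(perturbed)
    return ((y1 - y0) / y0) / delta

base = {"growth": 0.3, "decay": 0.1}
for p in base:
    print(p, round(process_sensitivity(base, p), 3))
```

Scaling a whole process term perturbs every parameter inside that process coherently, which is what distinguishes this from classical one-parameter-at-a-time sensitivity analysis; here growth has a positive and decay a negative process-level sensitivity.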

9. Controls on the spatial variability of key soil properties: comparing field data with a mechanistic soilscape evolution model

Science.gov (United States)

Vanwalleghem, T.; Román, A.; Giraldez, J. V.

2016-12-01

There is a need for better understanding the processes influencing soil formation and the resulting distribution of soil properties. Soil properties can exhibit strong spatial variation, even at the small catchment scale. Especially soil carbon pools in semi-arid, mountainous areas are highly uncertain because bulk density and stoniness are very heterogeneous and rarely measured explicitly. In this study, we explore the spatial variability in key soil properties (soil carbon stocks, stoniness, bulk density and soil depth) as a function of processes shaping the critical zone (weathering, erosion, soil water fluxes and vegetation patterns). We also compare the potential of a geostatistical versus a mechanistic soil formation model (MILESD) for predicting these key soil properties. Soil core samples were collected from 67 locations at 6 depths. Total soil organic carbon stocks were 4.38 kg m-2. Solar radiation proved to be the key variable controlling soil carbon distribution. Stone content was mostly controlled by slope, indicating the importance of erosion. Spatial distribution of bulk density was found to be highly random. Finally, total carbon stocks were predicted using a random forest model whose main covariates were solar radiation and NDVI. The model predicts carbon stocks that are twice as high on north-facing as on south-facing slopes. However, validation showed that these covariates only explained 25% of the variation in the dataset. Apparently, present-day landscape and vegetation properties are not sufficient to fully explain variability in the soil carbon stocks in this complex terrain under natural vegetation. This is attributed to a high spatial variability in bulk density and stoniness, key variables controlling carbon stocks. Similar results were obtained with the mechanistic soil formation model MILESD, suggesting that more complex models might be needed to further explore this high spatial variability.

10. The semiotics of control and modeling relations in complex systems.

Science.gov (United States)

Joslyn, C

2001-01-01

We provide a conceptual analysis of ideas and principles from the systems theory discourse which underlie Pattee's semantic or semiotic closure, which is itself foundational for a school of theoretical biology derived from systems theory and cybernetics, and is now being related to biological semiotics and explicated in the relational biological school of Rashevsky and Rosen. Atomic control systems and models are described as the canonical forms of semiotic organization, sharing measurement relations, but differing topologically in that control systems are circularly and models linearly related to their environments. Computation in control systems is introduced, motivating hierarchical decomposition, hybrid modeling and control systems, and anticipatory or model-based control. The semiotic relations in complex control systems are described in terms of relational constraints, and rules and laws are distinguished as contingent and necessary functional entailments, respectively. Finally, selection as a meta-level of constraint is introduced as the necessary condition for semantic relations in control systems and models.

11. Predicting the future completing models of observed complex systems

CERN Document Server

Abarbanel, Henry

2013-01-01

Predicting the Future: Completing Models of Observed Complex Systems provides a general framework for the discussion of model building and validation across a broad spectrum of disciplines. This is accomplished through the development of an exact path integral for use in transferring information from observations to a model of the observed system. Through many illustrative examples drawn from models in neuroscience, fluid dynamics, geosciences, and nonlinear electrical circuits, the concepts are exemplified in detail. Practical numerical methods for approximate evaluations of the path integral are explored, and their use in designing experiments and determining a model's consistency with observations is investigated. Using highly instructive examples, the problems of data assimilation and the means to treat them are clearly illustrated. This book will be useful for students and practitioners of physics, neuroscience, regulatory networks, meteorology and climate science, network dynamics, fluid dynamics, and o...

12. A Latent-Variable Causal Model of Faculty Reputational Ratings.

Science.gov (United States)

King, Suzanne; Wolfle, Lee M.

A reanalysis was conducted of Saunier's research (1985) on sources of variation in the National Research Council (NRC) reputational ratings of university faculty. Saunier conducted a stepwise regression analysis using 12 predictor variables. Due to problems with multicollinearity and because of the atheoretical nature of stepwise regression,…

13. Instrumental variables estimation under a structural Cox model

DEFF Research Database (Denmark)

Martinussen, Torben; Nørbo Sørensen, Ditte; Vansteelandt, Stijn

2017-01-01

Instrumental variable (IV) analysis is an increasingly popular tool for inferring the effect of an exposure on an outcome, as witnessed by the growing number of IV applications in epidemiology, for instance. The majority of IV analyses of time-to-event endpoints are, however, dominated by heurist...

14. Variability of four-dimensional computed tomography patient models

NARCIS (Netherlands)

Sonke, Jan-Jakob; Lebesque, Joos; van Herk, Marcel

2008-01-01

PURPOSE: To quantify the interfractional variability in lung tumor trajectory and mean position during the course of radiation therapy. METHODS AND MATERIALS: Repeat four-dimensional (4D) cone-beam computed tomography (CBCT) scans (median, nine scans/patient) routinely acquired during the course of

15. An Ontology for Modeling Complex Inter-relational Organizations

Science.gov (United States)

Wautelet, Yves; Neysen, Nicolas; Kolp, Manuel

This paper presents an ontology for organizational modeling through multiple complementary aspects. The primary goal of the ontology is to provide an adequate set of related concepts for studying complex organizations involved in many relationships at the same time. In this paper, we define complex organizations as networked organizations involved in a market eco-system that are playing several roles simultaneously. In such a context, traditional approaches focus on the macro analytic level of transactions; this is supplemented here with a micro analytic study of the actors' rationale. First, the paper surveys the enterprise ontology literature to position our proposal and to expose its contributions and limitations. The ontology is then brought to an advanced level of formalization: a meta-model in the form of a UML class diagram allows an overview of the ontology concepts and their relationships, which are formally defined. Finally, the paper presents the case study on which the ontology has been validated.

16. Application of a user-friendly comprehensive circulatory model for estimation of hemodynamic and ventricular variables

NARCIS (Netherlands)

Ferrari, G.; Kozarski, M.; Gu, Y. J.; De Lazzari, C.; Di Molfetta, A.; Palko, K. J.; Zielinski, K.; Gorczynska, K.; Darowski, M.; Rakhorst, G.

2008-01-01

Purpose: Application of a comprehensive, user-friendly, digital computer circulatory model to estimate hemodynamic and ventricular variables. Methods: The closed-loop lumped parameter circulatory model represents the circulation at the level of large vessels. A variable elastance model reproduces

17. Simulating Salt Movement using a Coupled Salinity Transport Model in a Variably Saturated Agricultural Groundwater System

Science.gov (United States)

Tavakoli Kivi, S.; Bailey, R. T.; Gates, T. K.

2017-12-01

Salinization is one of the major concerns in irrigated agricultural fields. Increasing salinity concentrations are due principally to a high water table that results from excessive irrigation, canal seepage, and a lack of efficient drainage systems, and they lead to decreasing crop yield. High groundwater salinity loading to nearby river systems also impacts downstream areas, with saline river water diverted for application on irrigated fields. To assess different strategies for salt remediation, we present a reactive transport model (UZF-RT3D) coupled with a salinity equilibrium chemistry module for simulating the fate and transport of salt ions in a variably-saturated agricultural groundwater system. The developed model accounts for advection, dispersion, nitrogen and sulfur cycling, oxidation-reduction, sorption, complexation, ion exchange, and precipitation/dissolution of salt minerals. The model is applied to a 500 km2 region within the Lower Arkansas River Valley (LARV) in southeastern Colorado, an area acutely affected by salinization in the past few decades. The model is tested against salt ion concentrations in the saturated zone, total dissolved solid concentrations in the unsaturated zone, and salt groundwater loading to the Arkansas River. The model can now be used to investigate salinity remediation strategies.

18. Variable Renewable Energy in Long-Term Planning Models: A Multi-Model Perspective

Energy Technology Data Exchange (ETDEWEB)

Cole, Wesley J. [National Renewable Energy Laboratory (NREL), Golden, CO (United States); Frew, Bethany A. [National Renewable Energy Laboratory (NREL), Golden, CO (United States); Mai, Trieu T. [National Renewable Energy Laboratory (NREL), Golden, CO (United States); Sun, Yinong [National Renewable Energy Laboratory (NREL), Golden, CO (United States); Bistline, John [Electric Power Research Inst., Palo Alto, CA (United States); Blanford, Geoffrey [Electric Power Research Inst., Palo Alto, CA (United States); Young, David [Electric Power Research Inst., Palo Alto, CA (United States); Marcy, Cara [Energy Information Administration, Washington, DC (United States); Namovicz, Chris [Energy Information Administration, Washington, DC (United States); Edelman, Risa [Environmental Protection Agency, Washington, DC (United States); Meroney, Bill [Environmental Protection Agency; Sims, Ryan [Environmental Protection Agency; Stenhouse, Jeb [Environmental Protection Agency; Donohoo-Vallett, Paul [U.S. Department of Energy

2017-11-03

Long-term capacity expansion models of the U.S. electricity sector have long been used to inform electric sector stakeholders and decision makers. With the recent surge in variable renewable energy (VRE) generators - primarily wind and solar photovoltaics - the need to appropriately represent VRE generators in these long-term models has increased. VRE generators are especially difficult to represent for a variety of reasons, including their variability, uncertainty, and spatial diversity. To assess current best practices, share methods and data, and identify future research needs for VRE representation in capacity expansion models, four capacity expansion modeling teams from the Electric Power Research Institute, the U.S. Energy Information Administration, the U.S. Environmental Protection Agency, and the National Renewable Energy Laboratory conducted two workshops on VRE modeling for national-scale capacity expansion models. The workshops covered a wide range of VRE topics, including transmission and VRE resource data, VRE capacity value, dispatch and operational modeling, distributed generation, and temporal and spatial resolution. The objectives of the workshops were both to better understand these topics and to improve the representation of VRE across the suite of models. Given these goals, each team incorporated model updates and performed additional analyses between the first and second workshops. This report summarizes the analyses and model 'experiments' that were conducted as part of these workshops as well as the various methods for treating VRE among the four modeling teams. The report also reviews the findings and learnings from the two workshops. We emphasize the areas where there is still need for additional research and development on analysis tools to incorporate VRE into long-term planning and decision-making.

19. A computational framework for modeling targets as complex adaptive systems

Science.gov (United States)

Santos, Eugene; Santos, Eunice E.; Korah, John; Murugappan, Vairavan; Subramanian, Suresh

2017-05-01

Modeling large military targets is a challenge as they can be complex systems encompassing myriad combinations of human, technological, and social elements that interact, leading to complex behaviors. Moreover, such targets have multiple components and structures, extending across multiple spatial and temporal scales, and are in a state of change, either in response to events in the environment or changes within the system. Complex adaptive system (CAS) theory can help in capturing the dynamism, interactions, and more importantly various emergent behaviors, displayed by the targets. However, a key stumbling block is incorporating information from various intelligence, surveillance and reconnaissance (ISR) sources, while dealing with the inherent uncertainty, incompleteness and time criticality of real world information. To overcome these challenges, we present a probabilistic reasoning network based framework called complex adaptive Bayesian Knowledge Base (caBKB). caBKB is a rigorous, overarching and axiomatic framework that models two key processes, namely information aggregation and information composition. While information aggregation deals with the union, merger and concatenation of information and takes into account issues such as source reliability and information inconsistencies, information composition focuses on combining information components where such components may have well defined operations. Since caBKBs can explicitly model the relationships between information pieces at various scales, they provide unique capabilities such as the ability to de-aggregate and de-compose information for detailed analysis. Using a scenario from the Network Centric Operations (NCO) domain, we will describe how our framework can be used for modeling targets with a focus on methodologies for quantifying NCO performance metrics.

20. Fundamentals of complex networks models, structures and dynamics

CERN Document Server

Chen, Guanrong; Li, Xiang

2014-01-01

Complex networks such as the Internet, WWW, transportation networks, power grids, biological neural networks, and scientific cooperation networks of all kinds provide challenges for future technological development. In particular, advanced societies have become dependent on large infrastructural networks to an extent beyond our capability to plan (modeling) and to operate (control). The recent spate of collapses in power grids and ongoing virus attacks on the Internet illustrate the need for knowledge about modeling, analysis of behaviors, optimized planning and performance control in such networks.

1. Model Complexities of Shallow Networks Representing Highly Varying Functions

Czech Academy of Sciences Publication Activity Database

Kůrková, Věra; Sanguineti, M.

2016-01-01

Roč. 171, 1 January (2016), s. 598-604 ISSN 0925-2312 R&D Projects: GA MŠk(CZ) LD13002 Grant - others:grant for Visiting Professors(IT) GNAMPA-INdAM Institutional support: RVO:67985807 Keywords : shallow networks * model complexity * highly varying functions * Chernoff bound * perceptrons * Gaussian kernel units Subject RIV: IN - Informatics, Computer Science Impact factor: 3.317, year: 2016

2. What model resolution is required in climatological downscaling over complex terrain?

Science.gov (United States)

El-Samra, Renalda; Bou-Zeid, Elie; El-Fadel, Mutasem

2018-05-01

This study presents results from the Weather Research and Forecasting (WRF) model applied to climatological downscaling simulations over highly complex terrain along the Eastern Mediterranean. We sequentially downscale general circulation model results, for a mild and wet year (2003) and a hot and dry year (2010), to three local horizontal resolutions of 9, 3 and 1 km. Simulated near-surface hydrometeorological variables are compared at different time scales against data from an observational network over the study area comprising rain gauges, anemometers, and thermometers. The overall performance of WRF at 1 and 3 km horizontal resolution was satisfactory, with significant improvement over the 9 km downscaling simulation. The total yearly precipitation from WRF's 1 km and 3 km domains provided a quantitative measure of the potential errors for various hydrometeorological variables.

3. Modelling and Multi-Variable Control of Refrigeration Systems

DEFF Research Database (Denmark)

Larsen, Lars Finn Slot; Holm, J. R.

2003-01-01

In this paper a dynamic model of a 1:1 refrigeration system is presented. The main modelling effort has been concentrated on a lumped parameter model of a shell and tube condenser. The model has shown good resemblance with experimental data from a test rig, regarding the static as well as the dynamic behavior. Based on this model the effects of the cross couplings have been examined. The influence of the cross couplings on the achievable control performance has been investigated. A MIMO controller is designed and the performance is compared with the control performance achieved by using ...

4. Complexity and agent-based modelling in urban research

DEFF Research Database (Denmark)

Fertner, Christian

Urbanisation processes are results of a broad variety of actors or actor groups and their behaviour and decisions, based on different experiences, knowledge, resources, values etc. The decisions made are often on a micro/individual level but result in macro/collective behaviour. In urban research ... influence on the bigger system. Traditional scientific methods or theories often tried to simplify, not accounting for the complex relations of actors and decision-making. The introduction of computers in simulation made new approaches in modelling, as for example agent-based modelling (ABM), possible, dealing ...

5. Methodology and Results of Mathematical Modelling of Complex Technological Processes

Science.gov (United States)

Mokrova, Nataliya V.

2018-03-01

The methodology of system analysis allows us to draw a mathematical model of a complex technological process. A mathematical description of the plasma-chemical process is proposed. The importance of the quenching rate and the initial temperature decrease time was confirmed for producing the maximum amount of the target product. The results of numerical integration of the system of differential equations can be used to describe reagent concentrations, plasma jet rate and temperature in order to achieve the optimal mode of hardening. Such models are applicable both for solving control problems and for predicting future states of sophisticated technological systems.

6. The complex sine-Gordon model on a half line

International Nuclear Information System (INIS)

Tzamtzis, Georgios

2003-01-01

In this thesis, we study the complex sine-Gordon model on a half line. The model in the bulk is an integrable (1+1) dimensional field theory which is U(1) gauge invariant and comprises a generalisation of the sine-Gordon theory. It accepts soliton and breather solutions. By introducing suitably selected boundary conditions we may consider the model on a half line. Through such conditions the model can be shown to remain integrable and various aspects of the boundary theory can be examined. The first chapter serves as a brief introduction to some basic concepts of integrability and soliton solutions. As an example of an integrable system with soliton solutions, the sine-Gordon model is presented both in the bulk and on a half line. These results will serve as a useful guide for the model at hand. The introduction finishes with a brief overview of the two methods that will be used on the fourth chapter in order to obtain the quantum spectrum of the boundary complex sine-Gordon model. In the second chapter the model is properly introduced along with a brief literature review. Different realisations of the model and their connexions are discussed. The vacuum of the theory is investigated. Soliton solutions are given and a discussion on the existence of breathers follows. Finally the collapse of breather solutions to single solitons is demonstrated and the chapter concludes with a different approach to the breather problem. In the third chapter, we construct the lowest conserved currents and through them we find suitable boundary conditions that allow for their conservation in the presence of a boundary. The boundary term is added to the Lagrangian and the vacuum is reexamined in the half line case. The reflection process of solitons from the boundary is studied and the time-delay is calculated. Finally we address the existence of boundary-bound states. In the fourth chapter we study the quantum complex sine-Gordon model. We begin with a brief overview of the theory in

7. Understanding variability of the Southern Ocean overturning circulation in CORE-II models

Science.gov (United States)

Downes, S. M.; Spence, P.; Hogg, A. M.

2018-03-01

The current generation of climate models exhibit a large spread in the steady-state and projected Southern Ocean upper and lower overturning circulation, with mechanisms for deep ocean variability remaining less well understood. Here, common Southern Ocean metrics in twelve models from the Coordinated Ocean-ice Reference Experiment Phase II (CORE-II) are assessed over a 60 year period. Specifically, stratification, surface buoyancy fluxes, and eddies are linked to the magnitude of the strengthening trend in the upper overturning circulation, and a decreasing trend in the lower overturning circulation across the CORE-II models. The models evolve similarly in the upper 1 km and the deep ocean, with an almost equivalent poleward intensification trend in the Southern Hemisphere westerly winds. However, the models differ substantially in their eddy parameterisation and surface buoyancy fluxes. In general, models with a larger heat-driven water mass transformation where deep waters upwell at the surface ( ∼ 55°S) transport warmer waters into intermediate depths, thus weakening the stratification in the upper 2 km. Models with a weak eddy induced overturning and a warm bias in the intermediate waters are more likely to exhibit larger increases in the upper overturning circulation, and more significant weakening of the lower overturning circulation. We find the opposite holds for a cool model bias in intermediate depths, combined with a more complex 3D eddy parameterisation that acts to reduce isopycnal slope. In summary, the Southern Ocean overturning circulation decadal trends in the coarse resolution CORE-II models are governed by biases in surface buoyancy fluxes and the ocean density field, and the configuration of the eddy parameterisation.

8. Extending a configuration model to find communities in complex networks

International Nuclear Information System (INIS)

Jin, Di; Hu, Qinghua; He, Dongxiao; Yang, Bo; Baquero, Carlos

2013-01-01

Discovery of communities in complex networks is a fundamental data analysis task in various domains. Generative models are a promising class of techniques for identifying modular properties from networks, which has been actively discussed recently. However, most of them cannot preserve the degree sequence of networks, which will distort the community detection results. Rather than using a blockmodel as most current works do, here we generalize a configuration model, namely, a null model of modularity, to solve this problem. Towards decomposing and combining sub-graphs according to the soft community memberships, our model incorporates the ability to describe community structures, something the original model does not have. Also, it has the property, as with the original model, that it fixes the expected degree sequence to be the same as that of the observed network. We combine both the community property and degree sequence preserving into a single unified model, which gives better community results compared with other models. Thereafter, we learn the model using a technique of nonnegative matrix factorization and determine the number of communities by applying consensus clustering. We test this approach both on synthetic benchmarks and on real-world networks, and compare it with two similar methods. The experimental results demonstrate the superior performance of our method over competing methods in detecting both disjoint and overlapping communities. (paper)
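The abstract above learns soft community memberships by nonnegative matrix factorization. As a hedged sketch of that underlying idea (a plain symmetric NMF of the adjacency matrix, which omits the paper's degree-preserving null model and consensus clustering; all names and parameters here are illustrative):

```python
import numpy as np

def nmf_communities(A, k, iters=800, seed=0):
    """Soft community memberships via symmetric NMF, A ~= W W^T with W >= 0.
    Row i of the returned matrix is node i's membership distribution over
    the k communities. Damped multiplicative updates are used."""
    rng = np.random.default_rng(seed)
    W = rng.random((A.shape[0], k)) + 0.1
    for _ in range(iters):
        num = A @ W
        den = W @ (W.T @ W) + 1e-9
        W *= 0.5 + 0.5 * num / den   # damped symmetric-NMF update
    return W / W.sum(axis=1, keepdims=True)
```

On a toy graph of two cliques joined by a single bridge edge, the arg-max of each row recovers the two groups; the paper's full model additionally fixes the expected degree sequence so that hubs do not distort the factorization.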

9. A Novel Approach to model EPIC variable background

Science.gov (United States)

Marelli, M.; De Luca, A.; Salvetti, D.; Belfiore, A.

2017-10-01

One of the main aims of the EXTraS (Exploring the X-ray Transient and variable Sky) project is to characterise the variability of serendipitous XMM-Newton sources within each single observation. Unfortunately, 164 Ms out of the 774 Ms of cumulative exposure considered (21%) are badly affected by soft proton flares, hampering any classical analysis of field sources. De facto, the latest releases of the 3XMM catalog, as well as most of the analyses in the literature, simply exclude these 'high background' periods from analysis. We implemented a novel SAS-independent approach to produce background-subtracted light curves, which allows us to treat the case of very faint sources and very bright proton flares. EXTraS light curves of 3XMM-DR5 sources will soon be released to the community, together with new tools we are developing.

10. QRS complex detection based on continuous density hidden Markov models using univariate observations

Science.gov (United States)

Sotelo, S.; Arenas, W.; Altuve, M.

2018-04-01

In the electrocardiogram (ECG), the detection of QRS complexes is a fundamental step in the ECG signal processing chain since it allows the determination of other characteristic waves of the ECG and provides information about heart rate variability. In this work, an automatic QRS complex detector based on continuous density hidden Markov models (HMMs) is proposed. HMMs were trained using univariate observation sequences taken either from QRS complexes or their derivatives. The detection approach is based on the comparison of the log-likelihood of the observation sequence with a fixed threshold. A sliding window was used to obtain the observation sequence to be evaluated by the model. The threshold was optimized by receiver operating characteristic curves. Sensitivity (Sen), specificity (Spc) and F1 score were used to evaluate the detection performance. The approach was validated using ECG recordings from the MIT-BIH Arrhythmia database. A 6-fold cross-validation shows that the best detection performance was achieved with 2-state HMMs trained with QRS complex sequences (Sen = 0.668, Spc = 0.360 and F1 = 0.309). We concluded that these univariate sequences provide enough information to characterize the QRS complex dynamics from HMMs. Future work is directed toward the use of multivariate observations to increase the detection performance.
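The detection scheme described above (sliding window, forward log-likelihood, fixed threshold) can be sketched compactly. This is a hedged illustration, not the authors' trained model: it assumes a 2-state Gaussian-emission HMM with made-up parameters and a synthetic "signal" in place of real ECG data:

```python
import numpy as np

def forward_loglik(obs, pi, A, means, stds):
    """Log-likelihood of a 1-D observation sequence under a Gaussian-emission
    HMM, computed with the scaled forward algorithm."""
    def b(x):  # per-state Gaussian emission densities at observation x
        return np.exp(-0.5 * ((x - means) / stds) ** 2) / (stds * np.sqrt(2 * np.pi))
    alpha = pi * b(obs[0])
    ll = np.log(alpha.sum())
    alpha = alpha / alpha.sum()
    for x in obs[1:]:
        alpha = (alpha @ A) * b(x)
        s = alpha.sum()
        ll += np.log(s)
        alpha = alpha / s
    return ll

def detect(signal, win, threshold, model):
    """Slide a window over the signal; flag start indices whose window
    log-likelihood under the trained QRS model exceeds the threshold."""
    pi, A, means, stds = model
    return [i for i in range(len(signal) - win + 1)
            if forward_loglik(signal[i:i + win], pi, A, means, stds) > threshold]
```

In the paper the threshold is tuned on ROC curves rather than fixed by hand, and the observations are QRS segments or their derivatives rather than raw samples.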

11. Research on the Complexity of Dual-Channel Supply Chain Model in Competitive Retailing Service Market

Science.gov (United States)

Ma, Junhai; Li, Ting; Ren, Wenbo

2017-06-01

This paper examines the optimal decisions of a dual-channel game model considering the inputs of retailing service. We analyze how the adjustment speed of service inputs affects the system complexity and market performance, and explore the stability of the equilibrium points by parameter basin diagrams. Chaos control is realized by the variable feedback method. The numerical simulation shows that complex behavior would trigger the system to become unstable, such as period-doubling bifurcation and chaos. We measure the performances of the model in different periods by analyzing the variation of the average profit index. The theoretical results show that the percentage share of the demand and cross-service coefficients have important influence on the stability of the system and its feasible basin of attraction.

12. Impacts of correcting the inter-variable correlation of climate model outputs on hydrological modeling

Science.gov (United States)

Chen, Jie; Li, Chao; Brissette, François P.; Chen, Hua; Wang, Mingna; Essou, Gilles R. C.

2018-05-01

Bias correction is usually implemented prior to using climate model outputs for impact studies. However, bias correction methods that are commonly used treat climate variables independently and often ignore inter-variable dependencies. The effects of ignoring such dependencies on impact studies need to be investigated. This study aims to assess the impacts of correcting the inter-variable correlation of climate model outputs on hydrological modeling. To this end, a joint bias correction (JBC) method which corrects the joint distribution of two variables as a whole is compared with an independent bias correction (IBC) method; this is considered in terms of correcting simulations of precipitation and temperature from 26 climate models for hydrological modeling over 12 watersheds located in various climate regimes. The results show that the simulated precipitation and temperature are considerably biased not only in the individual distributions, but also in their correlations, which in turn result in biased hydrological simulations. In addition to reducing the biases of the individual characteristics of precipitation and temperature, the JBC method can also reduce the bias in precipitation-temperature (P-T) correlations. In terms of hydrological modeling, the JBC method performs significantly better than the IBC method for 11 out of the 12 watersheds over the calibration period. For the validation period, the advantages of the JBC method are greatly reduced as the performance becomes dependent on the watershed, GCM and hydrological metric considered. For arid/tropical and snowfall-rainfall-mixed watersheds, JBC performs better than IBC. For snowfall- or rainfall-dominated watersheds, however, the two methods behave similarly, with IBC performing somewhat better than JBC. Overall, the results emphasize the advantages of correcting the P-T correlation when using climate model-simulated precipitation and temperature to assess the impact of climate change on watershed
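The "independent" baseline discussed above is typically an empirical quantile mapping applied to each variable on its own. A minimal sketch of that univariate step (illustrative arrays, not the study's data; a joint method such as JBC would additionally correct the precipitation-temperature rank correlation, which this function by construction leaves untouched):

```python
import numpy as np

def quantile_map(model_hist, obs, model_vals):
    """Empirical quantile mapping for one climate variable: find each model
    value's quantile within the historical model distribution, then return
    the observed value at that same quantile."""
    ranks = np.searchsorted(np.sort(model_hist), model_vals, side="right") / len(model_hist)
    ranks = np.clip(ranks, 0.0, 1.0)
    return np.quantile(obs, ranks)
```

Mapping the historical model series through itself reproduces the observed distribution, which is exactly why distribution-wise bias vanishes while inter-variable dependence can remain biased.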

13. Using model complexes to augment and advance metalloproteinase inhibitor design.

Science.gov (United States)

Jacobsen, Faith E; Cohen, Seth M

2004-05-17

The tetrahedral zinc complex [(Tp(Ph,Me))ZnOH] (Tp(Ph,Me) = hydrotris(3,5-phenylmethylpyrazolyl)borate) was combined with 2-thenylmercaptan, ethyl 4,4,4-trifluoroacetoacetate, salicylic acid, salicylamide, thiosalicylic acid, thiosalicylamide, methyl salicylate, methyl thiosalicylate, and 2-hydroxyacetophenone to form the corresponding [(Tp(Ph,Me))Zn(ZBG)] complexes (ZBG = zinc-binding group). X-ray crystal structures of these complexes were obtained to determine the mode of binding for each ZBG, several of which had been previously studied with SAR by NMR (structure-activity relationship by nuclear magnetic resonance) as potential ligands for use in matrix metalloproteinase inhibitors. The [(Tp(Ph,Me))Zn(ZBG)] complexes show that hydrogen bonding and donor atom acidity have a pronounced effect on the mode of binding for this series of ligands. The results of these studies give valuable insight into how ligand protonation state and intramolecular hydrogen bonds can influence the coordination mode of metal-binding proteinase inhibitors. The findings here suggest that model-based approaches can be used to augment drug discovery methods applied to metalloproteins and can aid second-generation drug design.

14. User manual of the multicomponent variably-saturated flow and transport model HP1

International Nuclear Information System (INIS)

Jacques, D.; Simunek, J.

2005-06-01

This report describes a new comprehensive simulation tool HP1 (HYDRUS1D-PHREEQC) that was obtained by coupling the HYDRUS-1D one-dimensional variably-saturated water flow and solute transport model with the PHREEQC geochemical code. The HP1 code incorporates modules simulating (1) transient water flow in variably-saturated media, (2) transport of multiple components, and (3) mixed equilibrium/kinetic geochemical reactions. The program numerically solves the Richards equation for variably-saturated water flow and advection-dispersion type equations for heat and solute transport. The flow equation incorporates a sink term to account for water uptake by plant roots. The heat transport equation considers transport due to conduction and convection with flowing water. The solute transport equations consider advective-dispersive transport in the liquid phase. The program can simulate a broad range of low-temperature biogeochemical reactions in water, soil and ground water systems including interactions with minerals, gases, exchangers, and sorption surfaces, based on thermodynamic equilibrium, kinetics, or mixed equilibrium-kinetic reactions. The program may be used to analyze water and solute movement in unsaturated, partially saturated, or fully saturated porous media. The flow region may be composed of nonuniform soils or sediments. Flow and transport can occur in the vertical, horizontal, or a generally inclined direction. The water flow part of the model can deal with prescribed head and flux boundaries, boundaries controlled by atmospheric conditions, as well as free drainage boundary conditions. The governing flow and transport equations were solved numerically using Galerkin-type linear finite element schemes. To test the accuracy of the coupling procedures implemented in HP1, simulation results were compared with (i) HYDRUS-1D for transport problems of multiple components subject to sequential first-order decay, (ii) PHREEQC for steady-state flow conditions, and

15. Complex optical/UV and X-ray variability of the Seyfert 1 galaxy 1H 0419-577

Science.gov (United States)

Pal, Main; Dewangan, Gulab C.; Kembhavi, Ajit K.; Misra, Ranjeev; Naik, Sachindra

2018-01-01

We present detailed broad-band UV/optical to X-ray spectral variability of the Seyfert 1 galaxy 1H 0419-577 using six XMM-Newton observations performed during 2002-2003. These observations covered a large amplitude variability event in which the soft X-ray (0.3-2 keV) count rate increased by a factor of ∼4 in six months. The X-ray spectra during the variability are well described by a model consisting of a primary power law, blurred and distant reflection. The 2-10 keV power-law flux varied by a factor of ∼7 while the 0.3-2 keV soft X-ray excess flux derived from the blurred reflection component varied only by a factor of ∼2. The variability event was also observed in the optical and UV bands but the variability amplitudes were only at the 6-10 per cent level. The variations in the optical and UV bands appear to follow the variations in the X-ray band. During the rising phase, the optical bands appear to lag behind the UV band but during the declining phase, the optical bands appear to lead the UV band. Such behaviour is not expected in the reprocessing models where the optical/UV emission is the result of reprocessing of X-ray emission in the accretion disc. The delayed contribution of the broad emission lines in the UV band or the changes in the accretion disc/corona geometry combined with X-ray reprocessing may give rise to the observed behaviour of the variations.

16. Capturing complexity in work disability research: application of system dynamics modeling methodology.

Science.gov (United States)

Jetha, Arif; Pransky, Glenn; Hettinger, Lawrence J

2016-01-01

Work disability (WD) is characterized by variable and occasionally undesirable outcomes. The underlying determinants of WD outcomes include patterns of dynamic relationships among health, personal, organizational and regulatory factors that have been challenging to characterize, and inadequately represented by contemporary WD models. System dynamics modeling (SDM) methodology applies a sociotechnical systems thinking lens to view WD systems as comprising a range of influential factors linked by feedback relationships. SDM can potentially overcome limitations in contemporary WD models by uncovering causal feedback relationships, and conceptualizing dynamic system behaviors. It employs a collaborative and stakeholder-based model building methodology to create a visual depiction of the system as a whole. SDM can also enable researchers to run dynamic simulations to provide evidence of anticipated or unanticipated outcomes that could result from policy and programmatic intervention. SDM may advance rehabilitation research by providing greater insights into the structure and dynamics of WD systems while helping to understand inherent complexity. Challenges related to data availability, determining validity, and the extensive time and technical skill requirements for model building may limit SDM's use in the field and should be considered. Contemporary work disability (WD) models provide limited insight into complexity associated with WD processes. System dynamics modeling (SDM) has the potential to capture complexity through a stakeholder-based approach that generates a simulation model consisting of multiple feedback loops. SDM may enable WD researchers and practitioners to understand the structure and behavior of the WD system as a whole, and inform development of improved strategies to manage straightforward and complex WD cases.
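The stock-and-flow-with-feedback structure that SDM formalizes can be illustrated with a deliberately tiny, hypothetical model: one stock (open work-disability cases) whose outflow slows as the caseload grows, a feedback loop of the kind the abstract argues contemporary WD models miss. All parameter names and values below are invented for illustration:

```python
def wd_caseload(inflow, base_recovery_weeks, strain, steps, dt=0.1):
    """Euler-integrate a one-stock system-dynamics model of open
    work-disability cases. recovery_time rises with the caseload
    (strain > 0), so return-to-work slows as the system saturates."""
    cases, history = 0.0, []
    for _ in range(steps):
        recovery_time = base_recovery_weeks * (1.0 + strain * cases)
        outflow = cases / recovery_time          # return-to-work flow
        cases += dt * (inflow - outflow)         # stock accumulation
        history.append(cases)
    return history
```

With the feedback switched off (strain = 0) the caseload settles at inflow times recovery time; switching it on raises the steady state, the sort of non-obvious system behavior an SDM simulation is meant to surface for stakeholders.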

17. Semiotic aspects of control and modeling relations in complex systems

Energy Technology Data Exchange (ETDEWEB)

Joslyn, C.

1996-08-01

A conceptual analysis of the semiotic nature of control is provided with the goal of elucidating its nature in complex systems. Control is identified as a canonical form of semiotic relation of a system to its environment. As a form of constraint between a system and its environment, its necessary and sufficient conditions are established, and the stabilities resulting from control are distinguished from other forms of stability. These result from the presence of semantic coding relations, and thus the class of control systems is hypothesized to be equivalent to that of semiotic systems. Control systems are contrasted with models, which, while they have the same measurement functions as control systems, do not necessarily require semantic relations because of the lack of the requirement of an interpreter. A hybrid construction of models in control systems is detailed. Towards the goal of considering the nature of control in complex systems, the possible relations among collections of control systems are considered. Powers' arguments on conflict among control systems and the possible nature of control in social systems are reviewed, and reconsidered based on our observations about hierarchical control. Finally, we discuss the necessary semantic functions which must be present in complex systems for control in this sense to be present at all.

18. Stability of rotor systems: A complex modelling approach

DEFF Research Database (Denmark)

Kliem, Wolfhard; Pommer, Christian; Stoustrup, Jakob

1998-01-01

The dynamics of a large class of rotor systems can be modelled by a linearized complex matrix differential equation of second order, Mz'' + (D + iG)z' + (K + iN)z = 0, where the system matrices M, D, G, K and N are real symmetric. Moreover M and K are assumed to be positive definite and D ... approach applying bounds of appropriate Rayleigh quotients. The rotor systems tested are: a simple Laval rotor, a Laval rotor with additional elasticity and damping in the bearings, and a number of rotor systems with complex symmetric 4 x 4 randomly generated matrices.
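A direct numerical counterpart to the stability question above is to linearize the second-order complex equation to first order and inspect eigenvalue real parts. This sketch is a generic companion-form check, not the paper's Rayleigh-quotient bounds; the test matrices are illustrative scalar cases:

```python
import numpy as np

def is_stable(M, D, G, K, N, tol=1e-9):
    """Asymptotic stability of M z'' + (D + iG) z' + (K + iN) z = 0,
    checked via companion linearization: every eigenvalue of the
    first-order system matrix must have strictly negative real part."""
    n = M.shape[0]
    Minv = np.linalg.inv(M)
    top = np.hstack([np.zeros((n, n)), np.eye(n)])
    bot = np.hstack([-Minv @ (K + 1j * N), -Minv @ (D + 1j * G)])
    eigs = np.linalg.eigvals(np.vstack([top, bot]))
    return bool(np.all(eigs.real < -tol))
```

The scalar cases already show the qualitative behavior the paper studies: positive damping D stabilizes, while a circulatory term N can destabilize an otherwise conservative system.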

19. Surface complexation modeling of zinc sorption onto ferrihydrite.

Science.gov (United States)

Dyer, James A; Trivedi, Paras; Scrivner, Noel C; Sparks, Donald L

2004-02-01

A previous study involving lead(II) [Pb(II)] sorption onto ferrihydrite over a wide range of conditions highlighted the advantages of combining molecular- and macroscopic-scale investigations with surface complexation modeling to predict Pb(II) speciation and partitioning in aqueous systems. In this work, an extensive collection of new macroscopic and spectroscopic data was used to assess the ability of the modified triple-layer model (TLM) to predict single-solute zinc(II) [Zn(II)] sorption onto 2-line ferrihydrite in NaNO3 solutions as a function of pH, ionic strength, and concentration. Regression of constant-pH isotherm data, together with potentiometric titration and pH edge data, was a much more rigorous test of the modified TLM than fitting pH edge data alone. When coupled with valuable input from spectroscopic analyses, good fits of the isotherm data were obtained with a one-species, one-Zn-sorption-site model using the bidentate-mononuclear surface complex (≡FeO)2Zn; however, surprisingly, both the density of Zn(II) sorption sites and the value of the best-fit equilibrium "constant" for the bidentate-mononuclear complex had to be adjusted with pH to adequately fit the isotherm data. Although spectroscopy provided some evidence for multinuclear surface complex formation at surface loadings approaching site saturation at pH >= 6.5, the assumption of a bidentate-mononuclear surface complex provided acceptable fits of the sorption data over the entire range of conditions studied. Regressing edge data in the absence of isotherm and spectroscopic data resulted in a fair number of surface-species/site-type combinations that provided acceptable fits of the edge data, but unacceptable fits of the isotherm data. A linear relationship between log K for (≡FeO)2Zn and pH was found, given by log K((≡FeO)2Zn, at 1 g/L) = 2.058(pH) - 6.131. In addition, a surface activity coefficient term was introduced to the model to reduce the ionic strength

20. Latent variable modeling

Institute of Scientific and Technical Information of China (English)

蔡力

2012-01-01

A latent variable model, as the name suggests, is a statistical model that contains latent, that is, unobserved, variables. Their roots go back to Spearman's 1904 seminal work[1] on factor analysis, which is arguably the first well-articulated latent variable model to be widely used in psychology, mental health research, and allied disciplines. Because of the association of factor analysis with early studies of human intelligence, the fact that key variables in a statistical model are, on occasion, unobserved has been a point of lingering contention and controversy. The reader is assured, however, that a latent variable, defined in the broadest manner, is no more mysterious than an error term in a normal theory linear regression model or a random effect in a mixed model.
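The one-factor model behind Spearman's factor analysis, x = λf + ε with unobserved f, can be illustrated numerically. This is a deliberately crude principal-axis sketch on simulated data (true loadings and noise level invented for the example); production latent variable software estimates loadings by maximum likelihood / EM instead:

```python
import numpy as np

def first_factor_loadings(X):
    """Crude one-factor loading estimate: the leading eigenvector of the
    sample covariance, scaled by the square root of its eigenvalue, with
    the arbitrary sign fixed to make the loadings sum positive."""
    C = np.cov(X, rowvar=False)
    w, V = np.linalg.eigh(C)           # eigenvalues in ascending order
    v = V[:, -1] * np.sqrt(w[-1])
    return v if v.sum() >= 0 else -v
```

Generating data from known loadings and recovering them shows the sense in which the latent factor is "no more mysterious" than a regression error term: it is simply an unobserved common cause inferred from the covariance structure.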