Fisher, Stephen D
1999-01-01
The most important topics in the theory and application of complex variables receive a thorough, coherent treatment in this introductory text. Intended for undergraduates or graduate students in science, mathematics, and engineering, this volume features hundreds of solved examples, exercises, and applications designed to foster a complete understanding of complex variables as well as an appreciation of their mathematical beauty and elegance. Prerequisites are minimal; a three-semester course in calculus will suffice to prepare students for discussions of these topics: the complex plane, basic
Taylor, Joseph L
2011-01-01
The text covers a broad spectrum, from basic to advanced complex variables on the one hand and from theoretical to applied or computational material on the other. With careful selection of the emphasis put on the various sections, examples, and exercises, the book can be used in a one- or two-semester course for undergraduate mathematics majors, a one-semester course for engineering or physics majors, or a one-semester course for first-year mathematics graduate students. It has been tested in all three settings at the University of Utah. The exposition is clear, concise, and lively
Flanigan, Francis J
2010-01-01
A caution to mathematics professors: Complex Variables does not follow conventional outlines of course material. One reviewer, noting its originality, wrote: "A standard text is often preferred [to a superior text like this] because the professor knows the order of topics and the problems, and doesn't really have to pay attention to the text. He can go to class without preparation." Not so here: Dr. Flanigan treats this most important field of contemporary mathematics in a most unusual way. While all the material for an advanced undergraduate or first-year graduate course is covered, discussion
Complex variables I essentials
Solomon, Alan D
2013-01-01
REA's Essentials provide quick and easy access to critical information in a variety of different fields, ranging from the most basic to the most advanced. As its name implies, these concise, comprehensive study guides summarize the essentials of the field covered. Essentials are helpful when preparing for exams or doing homework, and they remain a lasting reference source for students, teachers, and professionals. Complex Variables I includes functions of a complex variable, elementary complex functions, integrals of complex functions in the complex plane, sequences and series, and poles and r
Anwer Khurshid
2012-07-01
Full Text Available In this paper, it is shown that a complex multivariate random variable is a complex multivariate normal random variable of dimensionality if and only if all nondegenerate complex linear combinations of have a complex univariate normal distribution. The characteristic function of has been derived, and simpler forms of some theorems have been given using this characterization theorem without assuming that the variance-covariance matrix of the vector is Hermitian positive definite. Marginal distributions of have been given. In addition, a complex multivariate t-distribution has been defined and the density derived. A characterization of the complex multivariate t-distribution is given. A few possible uses of this distribution have been suggested.
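The characterization above lends itself to a quick numerical check. The sketch below is illustrative only (identity covariance, an arbitrary nondegenerate combination vector, and first-two-moment checks rather than a full distributional test):

```python
import numpy as np

# Empirical check that a linear combination of a circularly-symmetric complex
# multivariate normal vector behaves like a univariate complex normal, here
# verified through its first two moments (dimensionality and weights invented).
rng = np.random.default_rng(0)

p = 3                                     # dimensionality (assumed for the demo)
n = 200_000                               # Monte Carlo sample size

# Standard circularly-symmetric complex normal with identity covariance:
Z = (rng.standard_normal((n, p)) + 1j * rng.standard_normal((n, p))) / np.sqrt(2)

a = np.array([1.0 + 0.5j, -0.3j, 2.0])   # arbitrary nondegenerate combination
W = Z @ a                                 # should again be complex normal

var_theory = np.sum(np.abs(a) ** 2)       # E|W|^2 under identity covariance
print(abs(W.mean()), W.var(), var_theory)
```

With 200,000 samples, the empirical mean of W is near zero and its variance is near the theoretical value, consistent with the characterization.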
Comparing rainfall variability, model complexity and hydrological response at the intra-event scale
Cristiano, Elena; ten Veldhuis, Marie-claire; Ochoa-Rodriguez, Susana; van de Giesen, Nick
2017-04-01
The high variability in space and time of rainfall is one of the main aspects that influence hydrological response and the generation of pluvial flooding. This phenomenon has a bigger impact in urban areas, where response is usually faster and flow peaks are typically higher, due to the high degree of imperviousness. Previous researchers have investigated the sensitivity of urban hydrodynamic models to rainfall space-time resolution as well as interactions with model structure and resolution. They showed that finding a proper match between rainfall resolution and model complexity is important and that sensitivity increases for smaller urban catchment scales. Results also showed high variability in hydrological response sensitivity, the origins of which remain poorly understood. In this work, we investigate the interaction between rainfall input variability and model structure and scale at high resolution, i.e. 1-15 minutes in time and 100 m to 3 km in space. Apart from studying summary statistics such as relative peak flow errors and the coefficient of determination, we look into characteristics of response hydrographs to find explanations for response variability in relation to catchment properties as well as storm event characteristics (e.g. storm scale and movement, single-peak versus multi-peak events). The aim is to identify general relations between storm temporal and spatial scale and catchment scale in explaining variability of hydrological response. Analyses are conducted for the Cranbrook catchment (London, UK), using 3 hydrodynamic models set up in InfoWorks ICM: a low-resolution semi-distributed (SD1) model, a high-resolution semi-distributed (SD2) model and a fully distributed (FD) model. These models represent the spatial variability of the land in different ways: semi-distributed models divide the surface into subcatchments, each of them modelled in a lumped way (51 subcatchments for the SD1 model and 4409 subcatchments for the SD2 model), while the fully distributed
Dettman, John W
1965-01-01
Analytic function theory is a traditional subject going back to Cauchy and Riemann in the 19th century. Once the exclusive province of advanced mathematics students, its applications have proven vital to today's physicists and engineers. In this highly regarded work, Professor John W. Dettman offers a clear, well-organized overview of the subject and various applications - making the often-perplexing study of analytic functions of complex variables more accessible to a wider audience. The first half of Applied Complex Variables, designed for sequential study, is a step-by-step treatment of fun
Killingbeck, John P [Mathematics Department, University of Hull, Hull HU6 7RX (United Kingdom)]; Grosjean, Alain [Laboratoire d'Astrophysique de l'Observatoire de Besancon (CNRS, UPRES-A 6091), 41 bis Avenue de l'Observatoire, BP 1615, 25010 Besancon Cedex (France)]; Jolicard, Georges [Laboratoire d'Astrophysique de l'Observatoire de Besancon (CNRS, UPRES-A 6091), 41 bis Avenue de l'Observatoire, BP 1615, 25010 Besancon Cedex (France)]
2004-08-13
Complex variable hypervirial perturbation theory is applied to the case of oscillator and Coulomb potentials perturbed by a single-term potential of the form Vx^n or Vr^n, respectively. The trial calculations reported show that this approach can produce accurate complex energies for resonant states via a simple and speedy calculation and can also be useful in studies of PT symmetry and tunnelling resonance effects. (addendum)
Porta, Alberto; Bari, Vlasta; Ranuzzi, Giovanni; De Maria, Beatrice; Baselli, Giuseppe
2017-09-01
We propose a multiscale complexity (MSC) method that assesses irregularity in assigned frequency bands and is appropriate for analyzing short time series. It is grounded in the identification of the coefficients of an autoregressive model, the computation of the mean position of the poles generating the components of the power spectral density in an assigned frequency band, and the assessment of its distance from the unit circle in the complex plane. The MSC method was tested on simulations and applied to the short heart period (HP) variability series recorded during graded head-up tilt in 17 subjects (age from 21 to 54 years, median = 28 years, 7 females) and during paced breathing protocols in 19 subjects (age from 27 to 35 years, median = 31 years, 11 females) to assess the contribution of time scales typical of the cardiac autonomic control, namely in the low frequency (LF, from 0.04 to 0.15 Hz) and high frequency (HF, from 0.15 to 0.5 Hz) bands, to the complexity of the cardiac regulation. The proposed MSC technique was compared to a traditional model-free multiscale method grounded in information theory, i.e., multiscale entropy (MSE). The approach suggests that the reduction of HP variability complexity observed during graded head-up tilt is due to a regularization of the HP fluctuations in the LF band via a possible intervention of sympathetic control, and that the decrement of HP variability complexity observed during slow breathing is the result of the regularization of the HP variations in both LF and HF bands, thus implying the action of physiological mechanisms working at time scales even different from that of respiration. MSE did not distinguish experimental conditions at time scales larger than 1. Over short time series, MSC allows a more insightful association between cardiac control complexity and the physiological mechanisms modulating cardiac rhythm than a more traditional tool such as MSE.
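A minimal numpy sketch of the pole-based idea (Yule-Walker AR fit, mean pole position in a band, distance from the unit circle) might look as follows. The model order, sampling rate and test signal are assumptions for illustration, not the authors' exact MSC estimator:

```python
import numpy as np

def band_pole_distance(x, order, fs, f_lo, f_hi):
    """Distance of the mean in-band AR pole from the unit circle (sketch)."""
    x = np.asarray(x, float) - np.mean(x)
    # Biased autocorrelation estimate, then Yule-Walker equations:
    r = np.correlate(x, x, mode="full")[len(x) - 1:] / len(x)
    idx = np.arange(order)
    R = r[np.abs(idx[:, None] - idx[None, :])]      # Toeplitz autocorrelation
    a = np.linalg.solve(R, r[1:order + 1])          # AR coefficients
    poles = np.roots(np.concatenate(([1.0], -a)))   # roots of A(z)
    f = np.angle(poles) * fs / (2 * np.pi)          # pole frequencies
    sel = (f >= f_lo) & (f <= f_hi)
    if not sel.any():
        return np.nan
    return 1.0 - np.abs(poles[sel].mean())          # distance from unit circle

rng = np.random.default_rng(0)
t = np.arange(1000)
x = np.sin(2 * np.pi * 0.1 * t) + 0.05 * rng.standard_normal(1000)
d = band_pole_distance(x, order=4, fs=1.0, f_lo=0.04, f_hi=0.15)
print(d)   # small: a regular in-band oscillation keeps the poles near the circle
```

A regular oscillation in the band drives the in-band pole toward the unit circle, so a small distance corresponds to a more regular (less complex) rhythm, matching the interpretation in the abstract.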
Johnson, Anthony N; Hromadka, T V
2015-01-01
The Laplace equation that results from specifying either the normal or tangential force equilibrium equation in terms of the warping function or its conjugate can be modeled as a complex variable boundary element method (CVBEM) mixed boundary problem. The CVBEM is a well-known numerical technique that can provide solutions to potential value problems in two or more dimensions by the use of an approximation function derived from the Cauchy integral in complex analysis. This paper highlights three customizations to the technique.
• A least-squares approach to modeling the complex-valued approximation function is compared and analyzed to determine if modeling error on the boundary can be reduced without the need to find and evaluate additional linearly independent complex functions.
• The nodal point locations are moved outside the problem domain.
• Contour and streamline plots representing the warping function and its complementary conjugate are generated simultaneously from the complex-valued approximating function.
Complex variables II essentials
Solomon, Alan D
2013-01-01
REA's Essentials provide quick and easy access to critical information in a variety of different fields, ranging from the most basic to the most advanced. As its name implies, these concise, comprehensive study guides summarize the essentials of the field covered. Essentials are helpful when preparing for exams or doing homework, and they remain a lasting reference source for students, teachers, and professionals. Complex Variables II includes elementary mappings and Mobius transformation, mappings by general functions, conformal mappings and harmonic functions, applying complex functions to a
T. Friedrich
2009-07-01
The effect of orbital variations on simulated millennial-scale variability of the Atlantic Meridional Overturning Circulation (AMOC) is studied using the earth system model of intermediate complexity LOVECLIM. It is found that for present-day topographic boundary conditions low obliquity values (~22.1°) favor the triggering of internally generated millennial-scale variability in the North Atlantic region. Reducing the obliquity leads to changes of the pause-pulse ratio of the corresponding AMOC oscillations. Stochastic excitations of the density-driven overturning circulation in the Nordic Seas can create regional sea-ice anomalies and a subsequent reorganization of the atmospheric circulation. The resulting remote atmospheric anomalies over the Hudson Bay can release freshwater pulses into the Labrador Sea leading to a subsequent reduction of convective activity. The millennial-scale AMOC oscillations disappear if LGM bathymetry (with closed Hudson Bay) is prescribed. Furthermore, our study documents the marine and terrestrial carbon cycle response to millennial-scale AMOC variability. Our model results support the notion that stadial regimes in the North Atlantic are accompanied by relatively high levels of oxygen in thermocline and intermediate waters off California – in agreement with paleo-proxy data.
Holocene glacier variability: three case studies using an intermediate-complexity climate model
Weber, S.L.; Oerlemans, J.
2003-01-01
Synthetic glacier length records are generated for the Holocene epoch using a process-based glacier model coupled to the intermediate-complexity climate model ECBilt. The glacier model consists of a mass-balance component and an ice-flow component. The climate model is forced by the insolation change
Complex Variables in Secondary Schools
Dwyer, Jerry; Moskal, Barbara; Duke, Billy; Wilhelm, Jennifer
2007-01-01
This article describes the work of outreach mathematicians introducing the topic of complex variables to eighth and ninth grade students (13- to 15-year-olds) in the US. Complex variables is an area of mathematics that is not typically studied at secondary level. The authors developed seven lessons designed to stimulate students' interest in…
Juckem, Paul F.; Clark, Brian R.; Feinstein, Daniel T.
2017-05-04
The U.S. Geological Survey, National Water-Quality Assessment seeks to map estimated intrinsic susceptibility of the glacial aquifer system of the conterminous United States. Improved understanding of the hydrogeologic characteristics that explain spatial patterns of intrinsic susceptibility, commonly inferred from estimates of groundwater age distributions, is sought so that methods used for the estimation process are properly equipped. An important step beyond identifying relevant hydrogeologic datasets, such as glacial geology maps, is to evaluate how incorporation of these resources into process-based models using differing levels of detail could affect resulting simulations of groundwater age distributions and, thus, estimates of intrinsic susceptibility. This report describes the construction and calibration of three groundwater-flow models of northeastern Wisconsin that were developed with differing levels of complexity to provide a framework for subsequent evaluations of the effects of process-based model complexity on estimations of groundwater age distributions for withdrawal wells and streams. Preliminary assessments, which focused on the effects of model complexity on simulated water levels and base flows in the glacial aquifer system, illustrate that simulation of vertical gradients using multiple model layers improves simulated heads more in low-permeability units than in high-permeability units. Moreover, simulation of heterogeneous hydraulic conductivity fields in coarse-grained and some fine-grained glacial materials produced a larger improvement in simulated water levels in the glacial aquifer system compared with simulation of uniform hydraulic conductivity within zones. The relation between base flows and model complexity was less clear; however, it generally seemed to follow a pattern similar to that of the water levels. Although increased model complexity resulted in improved calibrations, future application of the models using simulated particle
Ancestry inference in complex admixtures via variable-length Markov chain linkage models.
Rodriguez, Jesse M; Bercovici, Sivan; Elmore, Megan; Batzoglou, Serafim
2013-03-01
Inferring the ancestral origin of chromosomal segments in admixed individuals is key for genetic applications, ranging from analyzing population demographics and history, to mapping disease genes. Previous methods addressed ancestry inference by using either weak models of linkage disequilibrium, or large models that make explicit use of ancestral haplotypes. In this paper we introduce ALLOY, an efficient method that incorporates generalized, but highly expressive, linkage disequilibrium models. ALLOY applies a factorial hidden Markov model to capture the parallel process producing the maternal and paternal admixed haplotypes, and models the background linkage disequilibrium in the ancestral populations via an inhomogeneous variable-length Markov chain. We test ALLOY in a broad range of scenarios ranging from recent to ancient admixtures with up to four ancestral populations. We show that ALLOY outperforms the previous state of the art, and is robust to uncertainties in model parameters.
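A toy version of the underlying idea is a plain forward algorithm over two ancestry states along one haplotype. This is a drastic simplification of ALLOY's factorial HMM with variable-length Markov chain emissions, and every number below is invented:

```python
import numpy as np

# Toy forward algorithm for ancestry inference along one haplotype: two
# ancestral populations, per-site allele frequencies as emissions, and a
# small per-site probability of switching ancestry (all values illustrative).
def forward(obs, freqs, switch=0.01):
    """obs: 0/1 alleles; freqs[k] = allele-1 frequency in each population."""
    T = np.array([[1 - switch, switch],
                  [switch, 1 - switch]])          # ancestry transition matrix
    alpha = np.array([0.5, 0.5])                  # uniform prior over states
    for k, o in enumerate(obs):
        e = np.array([f if o == 1 else 1 - f for f in freqs[k]])
        alpha = e * (T.T @ alpha)
        alpha /= alpha.sum()                      # keep a normalized posterior
    return alpha                                  # P(ancestry at last site)

freqs = [(0.9, 0.1)] * 8          # allele 1 common in pop 0, rare in pop 1
post = forward([1, 1, 1, 1, 1, 1, 1, 1], freqs)
print(post)                        # posterior strongly favors population 0
```

Each site multiplies in a likelihood ratio of about 9 in favor of population 0, so after eight informative sites the posterior is heavily concentrated, which is the same evidence-accumulation mechanism the full model exploits at scale.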
Surface Complexation Modeling in Variable Charge Soils: Prediction of Cadmium Adsorption
Giuliano Marchi; Cesar Crispim Vilar; George O’Connor; Letuzia Maria de Oliveira; Adriana Reatto; Thomaz Adolph Rein
2015-01-01
ABSTRACT Intrinsic equilibrium constants for 22 representative Brazilian Oxisols were estimated from a cadmium adsorption experiment. Equilibrium constants were fitted to two surface complexation models: diffuse layer and constant capacitance. Intrinsic equilibrium constants were optimized by FITEQL and by hand calculation using Visual MINTEQ in sweep mode, and Excel spreadsheets. Data from both models were incorporated into Visual MINTEQ. Constants estimated by FITEQL and incorporated in Visual MINTEQ software failed to predict observed data accurately. However, FITEQL raw output data rendered good results when predicted values were directly compared with observed values, instead of incorporating the estimated constants into Visual MINTEQ. Intrinsic equilibrium constants optimized by hand calculation and incorporated in Visual MINTEQ reliably predicted Cd adsorption reactions on soil surfaces under changing environmental conditions.
Giuliano Marchi
2015-10-01
ABSTRACT Intrinsic equilibrium constants of 17 representative Brazilian Oxisols were estimated from potentiometric titration measuring the adsorption of H+ and OH− on amphoteric surfaces in suspensions of varying ionic strength. Equilibrium constants were fitted to two surface complexation models: diffuse layer and constant capacitance. The former was fitted by calculating total site concentration from curve-fitting estimates and pH-extrapolation of the intrinsic equilibrium constants to the PZNPC (hand calculation), considering one and two reactive sites, and by the FITEQL software. The latter was fitted only by FITEQL, with one reactive site. Soil chemical and physical properties were correlated to the intrinsic equilibrium constants. Both surface complexation models satisfactorily fit our experimental data, but for results at low ionic strength, optimization did not converge in FITEQL. Data were incorporated in Visual MINTEQ and they provide a modeling system that can predict protonation-dissociation reactions in the soil surface under changing environmental conditions.
Aerodynamic optimization of an HSCT configuration using variable-complexity modeling
Hutchison, M. G.; Mason, W. H.; Grossman, B.; Haftka, R. T.
1993-01-01
An approach to aerodynamic configuration optimization is presented for the high-speed civil transport (HSCT). A method to parameterize the wing shape, fuselage shape and nacelle placement is described. Variable-complexity design strategies are used to combine conceptual and preliminary-level design approaches, both to preserve interdisciplinary design influences and to reduce computational expense. Conceptual-design-level (approximate) methods are used to estimate aircraft weight, supersonic wave drag and drag due to lift, and landing angle of attack. The drag due to lift, wave drag and landing angle of attack are also evaluated using more detailed, preliminary-design-level techniques. New, approximate methods for estimating supersonic wave drag and drag due to lift are described. The methodology is applied to the minimization of the gross weight of an HSCT that flies at Mach 2.4 with a range of 5500 n.mi. Results are presented for wing planform shape optimization and for combined wing and fuselage optimization with nacelle placement. Case studies include both all-metal wings and advanced composite wings.
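The variable-complexity strategy of correcting a cheap, conceptual-level analysis against an occasional expensive, preliminary-level one can be illustrated with a one-dimensional stand-in. Both "models" below, the scaling scheme and the trust-region-style search are invented for the sketch, not the HSCT codes:

```python
import numpy as np

# Variable-complexity sketch: scale the cheap model to match the expensive
# model at the current design point, then optimize the scaled cheap model
# within a move limit. Both functions are invented 1-D stand-ins.
def expensive(x):          # stand-in for a preliminary-design-level analysis
    return (x - 2.0) ** 2 + 0.3 * np.sin(3 * x) + 2.0

def cheap(x):              # stand-in for a conceptual-design-level estimate
    return (x - 1.7) ** 2 + 1.0

x = 0.0
for _ in range(5):
    beta = expensive(x) / cheap(x)                  # scaling factor at x
    grid = np.linspace(x - 1.0, x + 1.0, 2001)      # move-limited search
    x = grid[np.argmin(beta * cheap(grid))]         # optimize scaled model
print(x, expensive(x))
```

Each cycle needs only one expensive evaluation, which is the economic point of the approach; the design walks from x = 0 toward the cheap model's optimum while the scaling keeps the two analyses consistent at each iterate.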
Complex variable methods in elasticity
England, A H
2003-01-01
The plane strain and generalized plane stress boundary value problems of linear elasticity are the focus of this graduate-level text, which formulates and solves these problems by employing complex variable theory. The text presents detailed descriptions of the three basic methods that rely on series representation, Cauchy integral representation, and the solution via continuation. Its five-part treatment covers functions of a complex variable, the basic equations of two-dimensional elasticity, plane and half-plane problems, regions with circular boundaries, and regions with curvilinear bounda
Several topics in complex variables
Smit, I.M.
2015-01-01
This thesis is based on three articles in the field of Several Complex Variables. The first article, which is joint work with M. El Kadiri, defines and studies the concept of maximality for plurifinely plurisubharmonic functions. Its main result is that a finite plurifinely plurisubharmonic function
Three-Dimensional Complex Variables
Martin, E. Dale
1988-01-01
Report presents new theory of analytic functions of three-dimensional complex variables. While the three-dimensional system is subject to more limitations and is more difficult to use than the two-dimensional system, it is useful in the analysis of three-dimensional fluid flows, electrostatic potentials, and other phenomena involving harmonic functions.
Shi, Yuhan; Duan, Qingyun
2017-04-01
Earth System Models (ESMs) are an important tool for understanding past climate evolution and for predicting future climate change. However, ESM outputs contain significant uncertainties. A major source of uncertainty is the specification of model parameters, which is complicated because most ESMs contain a large number of parameters; further, ESMs simulate many different climatic variables and are computationally expensive to run. In this study, we use a design-of-experiments approach to evaluate the parametric sensitivities of different climatic variables simulated by LOVECLIM, an Earth System Model of Intermediate Complexity (EMIC). Three sensitivity analysis methods are used to explore the sensitivities of different outputs of LOVECLIM, such as global mean temperature and global land/ocean precipitation and evaporation, to different model parameters. A newly developed software package, Uncertainty Quantification Python Laboratory (UQ-PyL), is employed to execute the sensitivity analysis. A total of 23 adjustable parameters of the model were considered. This presentation gives the preliminary results of the parameter sensitivity analysis, which, in turn, should form the basis for further optimization of the model parameters to better simulate the climate system.
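As an illustration of the kind of global sensitivity screening such packages automate, here is a hedged sketch of the Morris elementary-effects method on an invented three-parameter toy model. It is not LOVECLIM and not necessarily one of the three methods used in the study:

```python
import numpy as np

# Morris elementary-effects screening on a toy "model": perturb one parameter
# at a time from random base points and average the absolute local slopes.
rng = np.random.default_rng(1)

def model(p):                        # invented response, stands in for an EMIC output
    return 4.0 * p[0] + p[1] ** 2 + 0.1 * p[2]

k, r, delta = 3, 50, 0.1             # parameters, trajectories, step size
effects = np.zeros((r, k))
for i in range(r):
    x = rng.uniform(0, 1 - delta, k)            # random base point in [0, 1)^k
    base = model(x)
    for j in range(k):
        xp = x.copy()
        xp[j] += delta                           # one-at-a-time perturbation
        effects[i, j] = (model(xp) - base) / delta
mu_star = np.abs(effects).mean(axis=0)           # mean |elementary effect|
print(mu_star)                                   # ranks p0 > p1 > p2
```

The screen correctly ranks the linear 4.0-coefficient parameter as most influential and the 0.1-coefficient parameter as least, which is exactly the triage one wants before spending expensive model runs on a finer analysis.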
Modelling effects of candidate genes on complex traits as variables over time.
Szyda, J; Komisarek, J; Antkowiak, I
2014-06-01
In this study, changes in gene effects for milk production traits were analysed over time. Such changes can be expected by investigating daily milk production yields, which increase during the early phase of lactation and then decrease. Moreover, additive polygenic effects on milk production traits estimated in other studies differed throughout the 305 days of lactation, clearly indicating changes in the genetic determination of milk production throughout this period. Our study focused on particular candidate genes known to affect milk production traits and on the estimation of potential changes in the magnitude of their effects over time. With two independent data sets from Holstein-Friesian and Jersey breeds, we show that the effects of the DGAT1:p.Lys232Ala polymorphism on fat and protein content in milk change during lactation. The other candidate genes considered in this study (leptin receptor, leptin and butyrophilin, subfamily 1, member A1) exhibited effects that vary across time, but these could be observed in only one of the breeds. Longitudinal modelling of SNP effects enables more precise description of the genetic background underlying the variation of complex traits. A gene that changes the magnitude or even the sign of its effect cannot be detected by a time-averaged model. This was particularly evident when analysing the effect of butyrophilin, missed by many previous studies, which considered butyrophilin's effect as constant over time.
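The idea of a gene effect that varies, and can even change sign, over lactation can be sketched with a small simulated regression: the genotype effect is expanded in low-order Legendre polynomials of days in milk. The basis, effect sizes and data below are invented, not the study's model or the Holstein-Friesian/Jersey sets:

```python
import numpy as np

# Longitudinal SNP-effect sketch: the genotype effect is 0.4 - 0.8*t on a
# standardized time axis, so it flips sign mid-lactation; a time-averaged
# model would see a much smaller constant effect.
rng = np.random.default_rng(2)
n = 4000
dim = rng.uniform(5, 305, n)                 # days in milk
t = 2 * (dim - 5) / 300 - 1                  # rescale to [-1, 1]
g = rng.integers(0, 3, n).astype(float)      # SNP genotype coded 0/1/2

true_effect = 0.4 - 0.8 * t                  # time-varying allele effect
y = g * true_effect + rng.normal(0, 1, n)    # simulated trait records

# Design: genotype times Legendre P0(t)=1 and P1(t)=t
X = np.column_stack([g, g * t])
beta, *_ = np.linalg.lstsq(X, y, rcond=None)
print(beta)                                  # estimates close to (0.4, -0.8)
```

Recovering both coefficients shows why the longitudinal model can detect a gene whose averaged effect over the lactation is near zero.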
The Effects of Modeling and Feedback Variables on the Acquisition of a Complex Teaching Strategy.
Orme, Michael E. J.; and others
The relative effectiveness of six modes of training teachers to use probing questions was investigated. The modes involved symbolic modeling, perceptual modeling, or both, coupled with feedback. After ratings of pertinent behavior in a 5-minute lesson were collected as pretraining measures, Stanford teacher interns were randomly distributed among…
Tritthart, Michael; Welti, Nina; Bondar-Kunze, Elisabeth; Pinay, Gilles; Hein, Thomas; Habersack, Helmut
2011-01-01
The hydrological exchange conditions strongly determine the biogeochemical dynamics in river systems. More specifically, the connectivity of surface waters between main channels and floodplains is directly controlling the delivery of organic matter and nutrients into the floodplains, where biogeochemical processes recycle them with high rates of activity. Hence, an in-depth understanding of the connectivity patterns between main channel and floodplains is important for the modelling of potential gas emissions in floodplain landscapes. A modelling framework that combines steady-state hydrodynamic simulations with long-term discharge hydrographs was developed to calculate water depths as well as statistical probabilities and event durations for every node of a computation mesh being connected to the main river. The modelling framework was applied to two study sites in the floodplains of the Austrian Danube River, East of Vienna. Validation of modelled flood events showed good agreement with gauge readings. Together with measured sediment properties, results of the validated connectivity model were used as basis for a predictive model yielding patterns of potential microbial respiration based on the best fit between characteristics of a number of sampling sites and the corresponding modelled parameters. Hot spots of potential microbial respiration were found in areas of lower connectivity if connected during higher discharges and areas of high water depths. PMID:27667961
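The per-node connectivity statistics described above (probability of being connected and duration of connected events) can be sketched directly from a hydrograph. The function name, toy hydrograph and thresholds below are invented for illustration, not the study's model chain:

```python
import numpy as np

# Minimal connectivity statistics for one node: given a discharge hydrograph
# and the threshold discharge at which the node connects to the main river,
# compute the connection probability and the mean connected-event duration
# (in time steps). All values are illustrative.
def connectivity_stats(q, threshold):
    wet = q >= threshold
    prob = wet.mean()
    # Find contiguous connected events via 0->1 and 1->0 transitions:
    edges = np.diff(np.concatenate(([0], wet.astype(int), [0])))
    starts = np.where(edges == 1)[0]
    ends = np.where(edges == -1)[0]
    mean_dur = (ends - starts).mean() if len(starts) else 0.0
    return float(prob), float(mean_dur)

q = np.array([1, 1, 5, 6, 2, 7, 8, 9, 1, 1], float)   # toy hydrograph
print(connectivity_stats(q, threshold=4.0))            # -> (0.5, 2.5)
```

Applied per mesh node with node-specific thresholds, these two statistics are exactly the kind of connectivity descriptors that can then be related to sediment properties and microbial respiration.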
T. Friedrich
2010-08-01
The mechanism triggering centennial-to-millennial-scale variability of the Atlantic Meridional Overturning Circulation (AMOC) in the earth system model of intermediate complexity LOVECLIM is investigated. It is found that for several climate boundary conditions such as low obliquity values (~22.1°) or LGM albedo, internally generated centennial-to-millennial-scale variability occurs in the North Atlantic region. Stochastic excitations of the density-driven overturning circulation in the Nordic Seas can create regional sea-ice anomalies and a subsequent reorganization of the atmospheric circulation. The resulting remote atmospheric anomalies over the Hudson Bay can release freshwater pulses into the Labrador Sea and significantly increase snow fall in this region leading to a subsequent reduction of convective activity. The millennial-scale AMOC oscillations disappear if LGM bathymetry (with closed Hudson Bay) is prescribed or if freshwater pulses are suppressed artificially. Furthermore, our study documents the process of the AMOC recovery as well as the global marine and terrestrial carbon cycle response to centennial-to-millennial-scale AMOC variability.
Brown, T.W.
2010-11-15
The same complex matrix model calculates both tachyon scattering for the c=1 non-critical string at the self-dual radius and certain correlation functions of half-BPS operators in N=4 super-Yang-Mills. It is dual to another complex matrix model where the couplings of the first model are encoded in the Kontsevich-like variables of the second. The duality between the theories is mirrored by the duality of their Feynman diagrams. Analogously to the Hermitian Kontsevich-Penner model, the correlation functions of the second model can be written as sums over discrete points in subspaces of the moduli space of punctured Riemann surfaces. (orig.)
Mavris, Dimitri; Osburg, Jan
2005-01-01
An important enabler of the new national Vision for Space Exploration is the ability to rapidly and efficiently develop optimized concepts for the manifold future space missions that this effort calls for. The design of such complex systems requires a tight integration of all the engineering disciplines involved, in an environment that fosters interaction and collaboration. The research performed under this grant explored areas where the space systems design process can be enhanced: by integrating risk models into the early stages of the design process, and by including rapid-turnaround variable-fidelity tools for key disciplines. Enabling early assessment of mission risk will allow designers to perform trades between risk and design performance during the initial design space exploration. Entry into planetary atmospheres will require an increased emphasis of the critical disciplines of aero- and thermodynamics. This necessitates the pulling forward of EDL disciplinary expertise into the early stage of the design process. Radiation can have a large potential impact on overall mission designs, in particular for the planned nuclear-powered robotic missions under Project Prometheus and for long-duration manned missions to the Moon, Mars and beyond under Project Constellation. This requires that radiation and associated risk and hazards be assessed and mitigated at the earliest stages of the design process. Hence, RPS is another discipline needed to enhance the engineering competencies of conceptual design teams. Researchers collaborated closely with NASA experts in those disciplines, and in overall space systems design, at Langley Research Center and at the Jet Propulsion Laboratory. This report documents the results of this initial effort.
Elements Of Theory Of Multidimensional Complex Variables
Martin, E. Dale
1993-01-01
Two reports describe elements of the theory of multidimensional complex variables, with emphasis on three dimensions. The first report introduces the general theory. The second presents further developments in the theory of analytic functions of a single three-dimensional variable and applies the theory to the representation of ideal flows. Results of preliminary studies suggest that analytic functions of the new three-dimensional complex variables are useful in numerous applications, including the representation of three-dimensional flows and potentials.
Analytic functions of several complex variables
Gunning, Robert C
2009-01-01
The theory of analytic functions of several complex variables enjoyed a period of remarkable development in the middle part of the twentieth century. After initial successes by Poincaré and others in the late 19th and early 20th centuries, the theory encountered obstacles that prevented it from growing quickly into an analogue of the theory for functions of one complex variable. Beginning in the 1930s, initially through the work of Oka, then H. Cartan, and continuing with the work of Grauert, Remmert, and others, new tools were introduced into the theory of several complex variables that resol
Harmonic and complex analysis in several variables
Krantz, Steven G
2017-01-01
Authored by a ranking authority in harmonic analysis of several complex variables, this book embodies a state-of-the-art entrée at the intersection of two important fields of research: complex analysis and harmonic analysis. Written with the graduate student in mind, it is assumed that the reader has familiarity with the basics of complex analysis of one and several complex variables as well as with real and functional analysis. The monograph is largely self-contained and develops the harmonic analysis of several complex variables from first principles. The text includes copious examples, explanations, an exhaustive bibliography for further reading, and figures that illustrate the geometric nature of the subject. Each chapter ends with an exercise set. Additionally, each chapter begins with a prologue, introducing the reader to the subject matter that follows; capsules presented in each section give perspective and a spirited launch to the segment; preludes help put ideas into context. Mathematicians and...
Smith, C. B.
1982-01-01
The Fymat analytic inversion method for retrieving a particle-area distribution function from anomalous diffraction multispectral extinction data and total area is generalized to the case of a variable complex refractive index m(λ) near unity depending on spectral wavelength λ. Inversion tests are presented for a water-haze aerosol model. An upper phase-shift limit of 5π/2 retrieved an accurate peak area-distribution profile. Analytical corrections using both the total number and area improved the inversion.
Boucher, J; Ayoub, K I; Cousineau, S M; Ahmadpanah, M; Rakotondrazafy, C; Harchani, N; Andreu, D; Montagner, L; Marin, M
1999-01-01
Analog behavioral modeling must constitute a privileged axis of research for the global simulation of systems and micro-systems. This paper presents a research/education (R&E) methodology which has been developed by the authors as a result of many years of experience in the domains of electronic components, circuits and systems, in different university and industrial research laboratories. It concerns all the constitutive analog functions used in the processing of energy and information, at different abstraction levels, extending from a simple component to complex macro-functions used in system electronics. (10 refs).
Analytic complexity of functions of two variables
Beloshapka, V. K.
2007-09-01
The definition of analytic complexity of an analytic function of two variables is given. It is proved that the class of functions of a chosen complexity is a differential-algebraic set. A differential polynomial defining the functions of first class is constructed. An algorithm for obtaining relations defining an arbitrary class is described. Examples of functions are given whose order of complexity is equal to zero, one, two, and infinity. It is shown that the formal order of complexity of the Cardano and Ferrari formulas is significantly higher than their analytic complexity. The complexity classes turn out to be invariant with respect to a certain infinite-dimensional transformation pseudogroup. In this connection, we describe the orbits of the action of this pseudogroup in the jets of orders one, two, and three. The notion of complexity order is extended to plane (or “planar”) 3-webs. It is discovered that webs of complexity order one are the hexagonal webs. Some problems are posed.
Korean Conference on Several Complex Variables
Byun, Jisoo; Gaussier, Hervé; Hirachi, Kengo; Kim, Kang-Tae; Shcherbina, Nikolay
2015-01-01
This volume includes 28 chapters by authors who are leading researchers of the world describing many of the up-to-date aspects in the field of several complex variables (SCV). These contributions are based upon their presentations at the 10th Korean Conference on Several Complex Variables (KSCV10), held as a satellite conference to the International Congress of Mathematicians (ICM) 2014 in Seoul, Korea. SCV has been the term for multidimensional complex analysis, one of the central research areas in mathematics. Studies over time have revealed a variety of rich, intriguing, new knowledge in complex analysis and geometry of analytic spaces and holomorphic functions which were "hidden" in the case of complex dimension one. These new theories have significant intersections with algebraic geometry, differential geometry, partial differential equations, dynamics, functional analysis and operator theory, and sheaves and cohomology, as well as the traditional analysis of holomorphic functions in all dimensions. This...
Function theory of several complex variables
Krantz, Steven G
2001-01-01
The theory of several complex variables can be studied from several different perspectives. In this book, Steven Krantz approaches the subject from the point of view of a classical analyst, emphasizing its function-theoretic aspects. He has taken particular care to write the book with the student in mind, with uniformly extensive and helpful explanations, numerous examples, and plentiful exercises of varying difficulty. In the spirit of a student-oriented text, Krantz begins with an introduction to the subject, including an insightful comparison of analysis of several complex variables with th
Lectures on counterexamples in several complex variables
Fornæss, John Erik
2007-01-01
Counterexamples are remarkably effective for understanding the meaning, and the limitations, of mathematical results. Fornæss and Stensønes look at some of the major ideas of several complex variables by considering counterexamples to what might seem like reasonable variations or generalizations. The first part of the book reviews some of the basics of the theory, in a self-contained introduction to several complex variables. The counterexamples cover a variety of important topics: the Levi problem, plurisubharmonic functions, Monge-Ampère equations, CR geometry, function theory, and the \bar\partial...
Applied complex variables for scientists and engineers
Kwok, Yue Kuen
2010-01-01
This introduction to complex variable methods begins by carefully defining complex numbers and analytic functions, and proceeds to give accounts of complex integration, Taylor series, singularities, residues and mappings. Both algebraic and geometric tools are employed to provide the greatest understanding, with many diagrams illustrating the concepts introduced. The emphasis is laid on understanding the use of methods, rather than on rigorous proofs. Throughout the text, many of the important theoretical results in complex function theory are followed by relevant and vivid examples in physical sciences. This second edition now contains 350 stimulating exercises of high quality, with solutions given to many of them. Material has been updated and additional proofs on some of the important theorems in complex function theory are now included, e.g. the Weierstrass–Casorati theorem. The book is highly suitable for students wishing to learn the elements of complex analysis in an applied context.
Modeling Pacific Decadal Variability
Schneider, N.
2002-05-01
Hypotheses for decadal variability rely on the large thermal inertia of the ocean to sequester heat and provide the long memory of the climate system. Understanding decadal variability requires the study of the generation of ocean anomalies at decadal frequencies, the evolution of oceanic signals, and the response of the atmosphere to oceanic perturbations. A sample of studies relevant for Pacific decadal variability will be reviewed in this presentation. The ocean integrates air-sea flux anomalies that result from internal atmospheric variability or broad-band coupled processes such as ENSO, or are an intrinsic part of the decadal feedback loop. Anomalies of Ekman pumping lead to deflections of the ocean thermocline and accompanying changes of the ocean circulation; perturbations of surface layer heat and fresh water budgets cause anomalies of T/S characteristics of water masses. The former process leads to decadal variability due to the dynamical adjustment of the mid latitude gyres or thermocline circulation; the latter accounts for the low frequency climate variations by the slow propagation of anomalies in the thermocline from the mid-latitude outcrops to the equatorial upwelling regions. Coupled modeling studies and ocean model hindcasts suggest that the adjustment of the North Pacific gyres to variation of Ekman pumping causes low frequency variations of surface temperature in the Kuroshio-Oyashio extension region. These changes appear predictable a few years in advance, and affect the local upper ocean heat budget and precipitation. The majority of low frequency variance is explained by the ocean's response to stochastic atmospheric forcing, the additional variance explained by mid-latitude ocean to atmosphere feedbacks appears to be small. The coupling of subtropical and tropical regions by the equator-ward motion in the thermocline can support decadal anomalies by changes of its speed and path, or by transporting water mass anomalies to the equatorial
Simple driven chaotic oscillators with complex variables.
Marshall, Delmar; Sprott, J C
2009-03-01
Despite a search, no chaotic driven complex-variable oscillators of the form z̈ + f(z) = e^{iΩt} or z̈ + f(z̄) = e^{iΩt} are found, where f is a polynomial with real coefficients. It is shown that, for analytic functions f(z), driven complex-variable oscillators of the form z̈ + f(z) = e^{iΩt} cannot have chaotic solutions. Seven simple driven chaotic oscillators of the form z̈ + f(z, z̄) = e^{iΩt} with polynomial f(z, z̄) are given. Their chaotic attractors are displayed, and Lyapunov spectra are calculated. Attractors for two of the cases have symmetry across the x = -y line. The systems' behavior with Ω as a control parameter in the range Ω = 0.1-2.0 is examined, revealing cases of period doubling, intermittency, chaotic transients, and period adding as routes to chaos. Numerous cases of coexisting attractors are also observed.
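The driven systems described here have the general form z̈ + f(z, z̄) = e^{iΩt}. As a sketch (the nonlinearity below is an illustrative polynomial chosen for this example, not necessarily one of the paper's seven chaotic cases), such an oscillator can be integrated with a standard RK4 scheme acting directly on complex state variables:

```python
import numpy as np

def simulate(f, omega=1.0, z0=0j, v0=0j, dt=0.01, steps=5000):
    """Integrate  z'' + f(z, conj(z)) = exp(i*omega*t)  with RK4.

    The state is the pair (z, v) with v = dz/dt; both are complex, so the
    second-order complex ODE is treated as a first-order system of two
    complex equations.
    """
    def deriv(t, z, v):
        return v, np.exp(1j * omega * t) - f(z, z.conjugate())

    zs = np.empty(steps + 1, dtype=complex)
    zs[0] = z = z0
    v = v0
    for n in range(steps):
        t = n * dt
        k1z, k1v = deriv(t, z, v)
        k2z, k2v = deriv(t + dt / 2, z + dt / 2 * k1z, v + dt / 2 * k1v)
        k3z, k3v = deriv(t + dt / 2, z + dt / 2 * k2z, v + dt / 2 * k2v)
        k4z, k4v = deriv(t + dt, z + dt * k3z, v + dt * k3v)
        z += dt / 6 * (k1z + 2 * k2z + 2 * k3z + k4z)
        v += dt / 6 * (k1v + 2 * k2v + 2 * k3v + k4v)
        zs[n + 1] = z
    return zs

# Illustrative (hypothetical) non-analytic nonlinearity f(z, zbar) = z*zbar*z:
# per the paper's argument, only f depending on conj(z) can support chaos.
traj = simulate(lambda z, zb: z * zb * z, omega=1.0)
```

Working directly in complex arithmetic keeps the integrator identical to the real-variable case; splitting z = x + iy into two real second-order equations would give the same trajectory.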
Troch, P.A.A.; Paniconi, C.; Loon, van E.E.
2003-01-01
Hillslope response to rainfall remains one of the central problems of catchment hydrology. Flow processes in a one-dimensional sloping aquifer can be described by Boussinesq's hydraulic groundwater theory. Most hillslopes, however, have complex three-dimensional shapes that are characterized by their...
Modeling Shared Variables in VHDL
Madsen, Jan; Brage, Jens P.
1994-01-01
A set of concurrent processes communicating through shared variables is an often used model for hardware systems. This paper presents three modeling techniques for representing such shared variables in VHDL, depending on the acceptable constraints on accesses to the variables. Also a set of guidelines...
Partial differential equations in several complex variables
Chen, So-Chin
2001-01-01
This book is intended both as an introductory text and as a reference book for those interested in studying several complex variables in the context of partial differential equations. In the last few decades, significant progress has been made in the fields of Cauchy-Riemann and tangential Cauchy-Riemann operators. This book gives an up-to-date account of the theories for these equations and their applications. The background material in several complex variables is developed in the first three chapters, leading to the Levi problem. The next three chapters are devoted to the solvability and regularity of the Cauchy-Riemann equations using Hilbert space techniques. The authors provide a systematic study of the Cauchy-Riemann equations and the \bar\partial-Neumann problem, including L^2 existence theorems on pseudoconvex domains, \frac{1}{2}-subelliptic estimates for the \bar\partial-Neumann problems on strongly pseudoconvex domains, global regularity of \bar\partial on more general pseudoconvex domains, boundary ...
Untangling complex dynamical systems via derivative-variable correlations
Levnaji, Zoran; Pikovsky, Arkady
2014-05-01
Inferring the internal interaction patterns of a complex dynamical system is a challenging problem. Traditional methods often rely on examining the correlations among the dynamical units. However, in systems such as transcription networks, one unit's variable is also correlated with the rate of change of another unit's variable. Inspired by this, we introduce the concept of derivative-variable correlation, and use it to design a new method of reconstructing complex systems (networks) from dynamical time series. Using a tunable observable as a parameter, the reconstruction of any system with known interaction functions is formulated via a simple matrix equation. We suggest a procedure aimed at optimizing the reconstruction from the time series of length comparable to the characteristic dynamical time scale. Our method also provides a reliable precision estimate. We illustrate the method's implementation via elementary dynamical models, and demonstrate its robustness to both model error and observation error.
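For the special case of a linear system ẋ = Ax, the derivative-variable correlation idea reduces to a simple matrix equation: averaging ẋxᵀ gives A⟨xxᵀ⟩, so the coupling matrix can be recovered as A = ⟨ẋxᵀ⟩⟨xxᵀ⟩⁻¹. The sketch below illustrates only this linear special case with an assumed toy coupling matrix (the paper's formulation covers general known interaction functions):

```python
import numpy as np

# Toy linear network  dx/dt = A x.  The coupling matrix A is recovered from
# the derivative-variable correlation <x' x^T> and the ordinary correlation
# <x x^T> via  A = <x' x^T> <x x^T>^{-1}.
rng = np.random.default_rng(0)
A_true = np.array([[-1.0, 0.5],
                   [0.3, -0.8]])          # illustrative coupling matrix

dt, steps = 0.001, 20000
x = rng.standard_normal(2)
xs = np.empty((steps, 2))
for n in range(steps):
    xs[n] = x
    x = x + dt * (A_true @ x)             # Euler step of dx/dt = A x

dxs = np.gradient(xs, dt, axis=0)         # finite-difference derivatives
C1 = dxs.T @ xs / steps                   # derivative-variable correlation
C0 = xs.T @ xs / steps                    # variable-variable correlation
A_est = C1 @ np.linalg.inv(C0)            # should approximate A_true
```

The recovery is exact up to the finite-difference error in the derivatives, which is why a small time step is used here.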
Kevrekidis, Ioannis G. [Princeton Univ., NJ (United States)
2017-02-01
The work explored the linking of modern developing machine learning techniques (manifold learning and in particular diffusion maps) with traditional PDE modeling/discretization/scientific computation techniques via the equation-free methodology developed by the PI. The result (in addition to several PhD degrees, two of them by CSGF Fellows) was a sequence of strong developments - in part on the algorithmic side, linking data mining with scientific computing, and in part on applications, ranging from PDE discretizations to molecular dynamics and complex network dynamics.
MODELING SUPPLY CHAIN PERFORMANCE VARIABLES
Ashish Agarwal
2005-01-01
In order to understand the dynamic behavior of the variables that can play a major role in performance improvement in a supply chain, a System Dynamics-based model is proposed. The model provides an effective framework for analyzing different variables affecting supply chain performance. Causal relationships among the different variables have been identified. Variables emanating from performance measures such as gaps in customer satisfaction, cost minimization, lead-time reduction, service level improvement and quality improvement have been identified as goal-seeking loops. The proposed System Dynamics-based model analyzes the effect of the dynamic behavior of these variables over a period of 10 years on the performance of a case supply chain in the auto business.
Classification of complex polynomial vector fields in one complex variable
Branner, Bodil; Dias, Kealey
2010-01-01
This paper classifies the global structure of monic and centred one-variable complex polynomial vector fields. The classification is achieved by means of combinatorial and analytic data. More specifically, given a polynomial vector field, we construct a combinatorial invariant, describing the topology, and a set of analytic invariants, describing the geometry. Conversely, given admissible combinatorial and analytic data sets, we show using surgery the existence of a unique monic and centred polynomial vector field realizing the given invariants. This is the content of the Structure Theorem, the main result of the paper. This result is an extension and refinement of the Douady et al. (Champs de vecteurs polynomiaux sur C. Unpublished manuscript) classification of the structurally stable polynomial vector fields. We further review some general concepts for completeness and show that vector fields...
Oleg Svatos
2013-01-01
In this paper we analyze the complexity of the time limits found especially in regulated processes of public administration. First we review the most popular process modeling languages. An example scenario, based on current Czech legislation, is defined and then captured in the discussed process modeling languages. The analysis shows that contemporary process modeling languages support capturing of time limits only partially. This causes trouble for analysts and unnecessary complexity in the models. Given these unsatisfactory results, we analyze the complexity of time limits in greater detail and outline the lifecycle of a time limit using the multiple dynamic generalizations pattern. As an alternative to the popular process modeling languages, we present the PSD process modeling language, which supports the defined lifecycles of a time limit natively and therefore allows keeping the models simple and easy to understand.
Solving the Inverse-Square Problem with Complex Variables
Gauthier, N.
2005-01-01
The equation of motion for a mass that moves under the influence of a central, inverse-square force is formulated and solved as a problem in complex variables. To find the solution, the constancy of angular momentum is first established using complex variables. Next, the complex position coordinate and complex velocity of the particle are assumed…
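The complex-variable formulation can be checked numerically: writing the position as a single complex coordinate z, the inverse-square law reads z̈ = -kz/|z|³ and the angular momentum (per unit mass) is L = Im(z̄ż). A minimal velocity-Verlet sketch (unit mass and k = 1 are illustrative choices; the initial conditions give a circular orbit of radius 1) confirms that L stays constant along the orbit:

```python
# Central inverse-square motion in one complex coordinate:
#   z'' = -k z / |z|**3 ,   L = Im( conj(z) * z' )  is conserved.
def accel(z, k=1.0):
    return -k * z / abs(z) ** 3

z, v, dt = 1.0 + 0.0j, 0.0 + 1.0j, 1e-3   # circular orbit: |z| = 1, speed 1
L0 = (z.conjugate() * v).imag              # initial angular momentum = 1
a = accel(z)
for _ in range(20000):                     # integrate to t = 20
    z += dt * v + 0.5 * dt * dt * a        # position update (drift)
    a_new = accel(z)
    v += 0.5 * dt * (a + a_new)            # velocity update (two half-kicks)
    a = a_new
L1 = (z.conjugate() * v).imag              # final angular momentum
```

Because the force is always parallel to z, each kick changes Im(z̄v) by a purely real multiple of |z|², so the scheme conserves L to machine precision, mirroring the analytic constancy of angular momentum used in the paper.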
A system of three-dimensional complex variables
Martin, E. Dale
1986-01-01
Some results of a new theory of multidimensional complex variables are reported, including analytic functions of a three-dimensional (3-D) complex variable. Three-dimensional complex numbers are defined, including vector properties and rules of multiplication. The necessary conditions for a function of a 3-D variable to be analytic are given and shown to be analogous to the 2-D Cauchy-Riemann equations. A simple example also demonstrates the analogy between the newly defined 3-D complex velocity and 3-D complex potential and the corresponding ordinary complex velocity and complex potential in two dimensions.
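For comparison, the ordinary two-dimensional objects that the 3-D theory generalizes are easy to state concretely: a complex potential w(z) determines the complex velocity dw/dz, with u - iv = dw/dz. A small sketch using the textbook potential for ideal flow past a cylinder (this particular w is an illustrative standard example, not taken from the report):

```python
# 2-D complex potential  w(z) = U (z + a**2 / z)  for ideal flow past a
# cylinder of radius a; the complex velocity is  dw/dz = U (1 - a**2 / z**2),
# and the velocity components satisfy  u - i v = dw/dz.
U, a = 1.0, 1.0

def complex_velocity(z):
    return U * (1.0 - a**2 / z**2)

# Stagnation points sit on the cylinder at z = +/- a, and far upstream the
# complex velocity tends to the free-stream value U.
front = complex_velocity(-a + 0j)
far = complex_velocity(-100.0 + 0j)
```

This is the two-dimensional analogy the abstract refers to; the report's contribution is the corresponding 3-D complex velocity and potential.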
Hybrid Unifying Variable Supernetwork Model
Liu, Qiang; Fang, Jin-qing; Li, Yong
2015-01-01
In order to compare new phenomena of topology change, evolution, hybrid ratio, and network characteristics of the unified hybrid network theoretical model with the unified hybrid supernetwork model, this paper constructs a unified hybrid variable supernetwork model (HUVSM). The first layer introduces a hybrid ratio dr, the...
Debating complexity in modeling
Hunt, Randall J.; Zheng, Chunmiao
1999-01-01
Complexity in modeling would seem to be an issue of universal importance throughout the geosciences, perhaps throughout all science, if the debate last year among groundwater modelers is any indication. During the discussion the following questions and observations made up the heart of the debate.
Foundations of the complex variable boundary element method
Hromadka, Theodore
2014-01-01
This book explains and examines the theoretical underpinnings of the Complex Variable Boundary Element Method (CVBEM) as applied to higher dimensions, providing the reader with the tools for extending and using the CVBEM in various applications. Relevant mathematics and principles are assembled and the reader is guided through the key topics necessary for an understanding of the development of the CVBEM in both the usual two- as well as three- or higher dimensions. In addition to this, problems are provided that build upon the material presented. The Complex Variable Boundary Element Method (CVBEM) is an approximation method useful for solving problems involving the Laplace equation in two dimensions. It has been shown to be a useful modelling technique for solving two-dimensional problems involving the Laplace or Poisson equations on arbitrary domains. The CVBEM has recently been extended to 3 or higher spatial dimensions, which enables the precision of the CVBEM in solving the Laplace equation to be now ava...
A complex variable meshless method for fracture problems
Anonymous
2006-01-01
Based on the moving least-squares (MLS) approximation, the complex variable moving least-squares (CVMLS) approximation is discussed in this paper. The complex variable moving least-squares approximation does not form ill-conditioned equations, and has greater precision and computational efficiency. Using the analytical solution near the tip of a crack, the trial functions in the complex variable moving least-squares approximation are extended, and the corresponding approximation function is obtained. From the minimum potential energy principle, a complex variable meshless method for fracture problems is presented, and the formulae of the complex variable meshless method are obtained. The complex variable meshless method in this paper has greater precision and computational efficiency than the conventional meshless method. Some examples are given.
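A real-variable, one-dimensional moving least-squares fit conveys the core mechanism that CVMLS builds on: at each evaluation point, the data are fit with a low-order basis under a weight function centred there. The sketch below is a toy version with a linear basis and Gaussian weight (the CVMLS of the paper uses a complex basis and crack-tip trial functions; all parameter choices here are illustrative):

```python
import numpy as np

# Sample a known function at scattered nodes, then reconstruct it with a
# moving least-squares (MLS) approximation using a linear basis p = [1, x].
nodes = np.linspace(0.0, np.pi, 21)
fvals = np.sin(nodes)

def mls(x, h=0.4):
    """MLS value at x: weighted least-squares fit, weights centred at x."""
    w = np.exp(-((nodes - x) / h) ** 2)            # Gaussian weights
    P = np.column_stack([np.ones_like(nodes), nodes])
    A = P.T @ (w[:, None] * P)                     # moment matrix P^T W P
    b = P.T @ (w * fvals)
    c = np.linalg.solve(A, b)                      # local fit coefficients
    return c[0] + c[1] * x                         # evaluate basis at x

err = max(abs(mls(x) - np.sin(x)) for x in np.linspace(0.2, 3.0, 50))
```

Because the weights move with the evaluation point, the fit is local and smooth; the approximation error scales with the square of the support radius for a linear basis.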
Boccara, Nino
2010-01-01
Modeling Complex Systems, 2nd Edition, explores the process of modeling complex systems, providing examples from such diverse fields as ecology, epidemiology, sociology, seismology, and economics. It illustrates how models of complex systems are built and provides indispensable mathematical tools for studying their dynamics. This vital introductory text is useful for advanced undergraduate students in various scientific disciplines, and serves as an important reference book for graduate students and young researchers. This enhanced second edition includes: . -recent research results and bibliographic references -extra footnotes which provide biographical information on cited scientists who have made significant contributions to the field -new and improved worked-out examples to aid a student’s comprehension of the content -exercises to challenge the reader and complement the material Nino Boccara is also the author of Essentials of Mathematica: With Applications to Mathematics and Physics (Springer, 2007).
Variability of Contact Process in Complex Networks
Gong, Kai; Yang, Hui; Shang, Mingsheng (DOI: 10.1063/1.3664403)
2012-01-01
We study numerically how the structures of distinct networks influence the epidemic dynamics of the contact process. We first find that the variability difference between homogeneous and heterogeneous networks is very narrow, although the heterogeneous structures can induce a lighter prevalence. Contrary to non-community networks, strong community structures can cause a secondary outbreak of prevalence, with two peaks of variability appearing. Especially in the local community, the extraordinarily large variability in the early stage of the outbreak makes the prediction of epidemic spreading hard. Importantly, the bridgeness plays a significant role in the predictability: the farther the initial seed is from the bridgeness, the less accurate the prediction is. Also, we investigate the effect of different disease reaction mechanisms on variability, and find that different reaction mechanisms result in distinct variabilities at the end of epidemic spreading.
Modeling complexes of modeled proteins.
Anishchenko, Ivan; Kundrotas, Petras J; Vakser, Ilya A
2017-03-01
Structural characterization of proteins is essential for understanding life processes at the molecular level. However, only a fraction of known proteins have experimentally determined structures. This fraction is even smaller for protein-protein complexes. Thus, structural modeling of protein-protein interactions (docking) primarily has to rely on modeled structures of the individual proteins, which typically are less accurate than the experimentally determined ones. Such "double" modeling is the Grand Challenge of structural reconstruction of the interactome. Yet it remains so far largely untested in a systematic way. We present a comprehensive validation of template-based and free docking on a set of 165 complexes, where each protein model has six levels of structural accuracy, from 1 to 6 Å C(α) RMSD. Many template-based docking predictions fall into acceptable quality category, according to the CAPRI criteria, even for highly inaccurate proteins (5-6 Å RMSD), although the number of such models (and, consequently, the docking success rate) drops significantly for models with RMSD > 4 Å. The results show that the existing docking methodologies can be successfully applied to protein models with a broad range of structural accuracy, and the template-based docking is much less sensitive to inaccuracies of protein models than the free docking. Proteins 2017; 85:470-478. © 2016 Wiley Periodicals, Inc.
Wharton, S. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Bulaevskaya, V. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Irons, Z. [Enel Green Power North America, Andover, MA (United States); Qualley, G. [Infigen Energy, Dallas, TX (United States); Newman, J. F. [Univ. of Oklahoma, Norman, OK (United States); Miller, W. O. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States)
2015-09-28
The goal of our FY15 project was to explore the use of statistical models and high-resolution atmospheric input data to develop more accurate prediction models for turbine power generation. We modeled power for two operational wind farms in two regions of the country. The first site is a 235 MW wind farm in Northern Oklahoma with 140 GE 1.68 turbines. Our second site is a 38 MW wind farm in the Altamont Pass Region of Northern California with 38 Mitsubishi 1 MW turbines. The farms are very different in topography, climatology, and turbine technology; however, both occupy high wind resource areas in the U.S. and are representative of typical wind farms found in their respective areas.
Sporadic meteoroid complex: Modeling
Andreev, V.
2014-07-01
The distribution of the sporadic meteoroids flux density over the celestial sphere is the common form of representation of the meteoroids distribution in the vicinity of the Earth's orbit. The determination of the flux density of sporadic meteor bodies is Q(V,e,f) = Q_0 P_e(V) P(e,f) where V is the meteoroid velocity, e,f are the radiant coordinates, Q_0 is the meteoroid flux over whole celestial sphere, P_e(V) is the conditional velocity distributions and P(e,f) is the radiant distribution over the celestial sphere. The sporadic meteoroid complex model is analytical and based on heliocentric velocities and radiant distributions. The multi-mode character of the heliocentric velocity and radiant distributions follows from the analysis of meteor observational data. This fact points to a complicated structure of the sporadic meteoroid complex. It is the consequence of the plurality of the parent bodies and the origin mechanisms of the meteoroids. The meteoroid complex was divided into four groups for that reason and with a goal of more accurate modelling of velocities and radiant distributions. As the classifying parameter to determine the meteoroid membership in any group, we adopt the Tisserand invariant relative to Jupiter T_J = 1/a + 2 A_J^{-3/2} √{a (1 - e^2)} cos i and the meteoroid orbit inclination i. Two meteoroid groups relate to long-period and short-period comets. One meteoroid group is related to asteroids. The relationship to the last, fourth group is a problematic one. Then, we construct models of radiant and velocity distributions for each group. The analytical model for the whole sporadic meteoroid complex is the sum of the ones for each group.
Predictive Surface Complexation Modeling
Sverjensky, Dimitri A. [Johns Hopkins Univ., Baltimore, MD (United States). Dept. of Earth and Planetary Sciences
2016-11-29
Surface complexation plays an important role in the equilibria and kinetics of processes controlling the compositions of soilwaters and groundwaters, the fate of contaminants in groundwaters, and the subsurface storage of CO_{2} and nuclear waste. Over the last several decades, many dozens of individual experimental studies have addressed aspects of surface complexation that have contributed to an increased understanding of its role in natural systems. However, there has been no previous attempt to develop a model of surface complexation that can be used to link all the experimental studies in order to place them on a predictive basis. Overall, my research has successfully integrated the results of the work of many experimentalists published over several decades. For the first time in studies of the geochemistry of the mineral-water interface, a practical predictive capability for modeling has become available. The predictive correlations developed in my research now enable extrapolations of experimental studies to provide estimates of surface chemistry for systems not yet studied experimentally and for natural and anthropogenically perturbed systems.
Gait variability: methods, modeling and meaning
Hausdorff Jeffrey M
2005-07-01
The study of gait variability, the stride-to-stride fluctuations in walking, offers a complementary way of quantifying locomotion and its changes with aging and disease, as well as a means of monitoring the effects of therapeutic interventions and rehabilitation. Previous work has suggested that measures of gait variability may be more closely related to falls, a serious consequence of many gait disorders, than are measures based on the mean values of other walking parameters. The current JNER series presents nine reports on the results of recent investigations into gait variability. One novel method for collecting unconstrained, ambulatory data is reviewed, and a primer on analysis methods is presented along with a heuristic approach to summarizing variability measures. In addition, the first studies of gait variability in animal models of neurodegenerative disease are described, as is a mathematical model of human walking that characterizes certain complex (multifractal) features of the motor control's pattern generator. Another investigation demonstrates that, whereas both healthy older controls and patients with a higher-level gait disorder walk more slowly in reduced lighting, only the latter's stride variability increases. Studies of the effects of dual tasks suggest that the regulation of the stride-to-stride fluctuations in stride width and stride time may be influenced by attention loading and may require cognitive input. Finally, a report of gait variability in over 500 subjects, probably the largest study of this kind, suggests how step width variability may relate to fall risk. Together, these studies provide new insights into the factors that regulate the stride-to-stride fluctuations in walking and pave the way for expanded research into the control of gait and the practical application of measures of gait variability in the clinical setting.
Polystochastic Models for Complexity
Iordache, Octavian
2010-01-01
This book is devoted to complexity understanding and management, considered as the main source of efficiency and prosperity for the next decades. Divided into six chapters, the book begins with a presentation of basic concepts as complexity, emergence and closure. The second chapter looks to methods and introduces polystochastic models, the wave equation, possibilities and entropy. The third chapter focusing on physical and chemical systems analyzes flow-sheet synthesis, cyclic operations of separation, drug delivery systems and entropy production. Biomimetic systems represent the main objective of the fourth chapter. Case studies refer to bio-inspired calculation methods, to the role of artificial genetic codes, neural networks and neural codes for evolutionary calculus and for evolvable circuits as biomimetic devices. The fifth chapter, taking its inspiration from systems sciences and cognitive sciences looks to engineering design, case base reasoning methods, failure analysis, and multi-agent manufacturing...
Concomitant variables in finite mixture models
Wedel, M
The standard mixture model, the concomitant variable mixture model, the mixture regression model and the concomitant variable mixture regression model all enable simultaneous identification and description of groups of observations. This study reviews the different ways in which dependencies among
Hybrid models for complex fluids
Tronci, Cesare
2010-01-01
This paper formulates a new approach to complex fluid dynamics, which accounts for microscopic statistical effects in the micromotion. While the ordinary fluid variables (mass density and momentum) undergo usual dynamics, the order parameter field is replaced by a statistical distribution on the order parameter space. This distribution depends also on the point in physical space and its dynamics retains the usual fluid transport features while containing the statistical information on the order parameter space. This approach is based on a hybrid moment closure for Yang-Mills Vlasov plasmas, which replaces the usual cold-plasma assumption. After presenting the basic properties of the hybrid closure, such as momentum map features, singular solutions and Casimir invariants, the effect of Yang-Mills fields is considered and a direct application to ferromagnetic fluids is presented. Hybrid models are also formulated for complex fluids with symmetry breaking. For the special case of liquid crystals, a hybrid formul...
Functions of a complex variable and some of their applications
Fuchs, B A; Sneddon, I N; Ulam, S
1961-01-01
Functions of a Complex Variable and Some of Their Applications, Volume 1, discusses the fundamental ideas of the theory of functions of a complex variable. The book is the result of a complete rewriting and revision of a translation of the second (1957) Russian edition. Numerous changes and additions have been made, both in the text and in the solutions of the Exercises. The book begins with a review of arithmetical operations with complex numbers. Separate chapters discuss the fundamentals of complex analysis; the concept of conformal transformations; the most important of the elementary fun
Zhang, Songchuan; Xia, Youshen
2016-12-28
Much research has been devoted to complex-variable optimization problems because of their engineering applications, yet complex-valued methods for solving them remain an active research area. This paper proposes two efficient complex-valued optimization methods for solving constrained nonlinear optimization problems of real functions in complex variables. One solves the complex-valued nonlinear programming problem with linear equality constraints; the other solves the complex-valued nonlinear programming problem with both linear equality constraints and an ℓ₁-norm constraint. Theoretically, we prove the global convergence of the two proposed complex-valued optimization algorithms under mild conditions. The two algorithms solve the complex-valued optimization problem entirely in the complex domain and significantly extend existing complex-valued optimization algorithms. Numerical results further show that the two proposed algorithms converge faster than several conventional real-valued optimization algorithms.
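As a concrete (hypothetical) illustration of the problem class, the following sketch runs projected gradient descent on a real-valued cost of complex variables, ||Az − b||², under a linear equality constraint Cz = d. It is not the authors' algorithm, and all data here are made up:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 4
A = rng.normal(size=(6, n)) + 1j * rng.normal(size=(6, n))
b = rng.normal(size=6) + 1j * rng.normal(size=6)
C = rng.normal(size=(1, n)) + 1j * rng.normal(size=(1, n))
d = np.array([1.0 + 0.5j])

def project(z):
    # Orthogonal projection onto the affine set {z : C z = d}
    return z - C.conj().T @ np.linalg.solve(C @ C.conj().T, C @ z - d)

z = project(np.zeros(n, dtype=complex))
obj0 = np.linalg.norm(A @ z - b)
for _ in range(2000):
    grad = A.conj().T @ (A @ z - b)  # conjugate (Wirtinger) gradient of ||Az - b||^2
    z = project(z - 0.01 * grad)     # small fixed step, then re-project
obj1 = np.linalg.norm(A @ z - b)
```

The iterate stays feasible after every step, and the objective decreases monotonically for a sufficiently small step size.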
Modelling Complexity in Musical Rhythm
Liou, Cheng-Yuan; Wu, Tai-Hei; Lee, Chia-Ying
2007-01-01
This paper constructs a tree structure for musical rhythm using the L-system. It models the structure as an automaton and derives its complexity, and it also solves the complexity of the L-system itself. This complexity can resolve the similarity between trees and serves as a measure of psychological complexity for rhythms. It resolves the musical complexity of various compositions, including the Mozart effect piece K488. Keywords: music perception, psychological complexity, rhythm, L-system, autom...
Inferring topologies of complex networks with hidden variables.
Wu, Xiaoqun; Wang, Weihan; Zheng, Wei Xing
2012-10-01
Network topology plays a crucial role in determining a network's intrinsic dynamics and function, thus understanding and modeling the topology of a complex network will lead to greater knowledge of its evolutionary mechanisms and to a better understanding of its behaviors. In the past few years, topology identification of complex networks has received increasing interest and wide attention. Many approaches have been developed for this purpose, including synchronization-based identification, information-theoretic methods, and intelligent optimization algorithms. However, inferring interaction patterns from observed dynamical time series is still challenging, especially in the absence of knowledge of nodal dynamics and in the presence of system noise. The purpose of this work is to present a simple and efficient approach to inferring the topologies of such complex networks. The proposed approach is called "piecewise partial Granger causality." It measures the cause-effect connections of nonlinear time series influenced by hidden variables. One commonly used testing network, two regular networks with a few additional links, and small-world networks are used to evaluate the performance and illustrate the influence of network parameters on the proposed approach. Application to experimental data further demonstrates the validity and robustness of our method.
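For readers unfamiliar with the underlying idea, a minimal sketch of plain pairwise Granger causality on synthetic data might look like the following; this is not the paper's piecewise partial variant, which additionally conditions out hidden variables:

```python
import numpy as np

rng = np.random.default_rng(1)
T = 500
x, y = np.zeros(T), np.zeros(T)
for t in range(1, T):
    x[t] = 0.5 * x[t - 1] + rng.normal()
    y[t] = 0.5 * y[t - 1] + 0.8 * x[t - 1] + rng.normal()  # x drives y

def resid_var(target, predictors):
    # Residual variance of a least-squares regression
    beta, *_ = np.linalg.lstsq(predictors, target, rcond=None)
    return (target - predictors @ beta).var()

ones = np.ones(T - 1)
# Does adding lagged x improve the prediction of y beyond y's own past?
gc_x_to_y = np.log(resid_var(y[1:], np.column_stack([y[:-1], ones]))
                   / resid_var(y[1:], np.column_stack([y[:-1], x[:-1], ones])))
# And the reverse direction:
gc_y_to_x = np.log(resid_var(x[1:], np.column_stack([x[:-1], ones]))
                   / resid_var(x[1:], np.column_stack([x[:-1], y[:-1], ones])))
```

A clearly positive gc_x_to_y together with gc_y_to_x near zero recovers the simulated direction of influence.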
Potential Flows From Three-Dimensional Complex Variables
Martin, E. Dale; Kelly, Patrick H.; Panton, Ronald L.
1992-01-01
Report presents investigation of several functions of three-dimensional complex variable, with emphasis on potential-flow fields computed from these functions. Part of continuing research on generalization of well-established two-dimensional complex analysis to three and more dimensions.
Approximation and polynomial convexity in several complex variables
Ölçücüoğlu, Büke
2009-01-01
This thesis is a survey on selected topics in approximation theory. The topics use either the techniques from the theory of several complex variables or those that arise in the study of the subject. We also go through elementary theory of polynomially convex sets in complex analysis.
Numerical implementation of a state variable model for friction
Korzekwa, D.A. [Los Alamos National Lab., NM (United States); Boyce, D.E. [Cornell Univ., Ithaca, NY (United States)
1995-03-01
A general state variable model for friction has been incorporated into a finite element code for viscoplasticity. A contact area evolution model is used in a finite element model of a sheet forming friction test. The results show that a state variable model can be used to capture complex friction behavior in metal forming simulations. It is proposed that simulations can play an important role in the analysis of friction experiments and the development of friction models.
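The abstract does not spell out the friction law; a classical example of a state variable friction model is the rate-and-state (Dieterich "aging") law, sketched here with illustrative parameter values rather than those of the report:

```python
import numpy as np

mu0, a, b = 0.6, 0.010, 0.015  # illustrative rate-and-state parameters
V0, Dc = 1e-6, 1e-5            # reference velocity (m/s), critical slip distance (m)

theta = Dc / V0                # state variable, at steady state for velocity V0
V = 10 * V0                    # impose a tenfold velocity step
dt = 1e-3
for _ in range(20000):         # integrate the aging law d(theta)/dt = 1 - V*theta/Dc
    theta += dt * (1 - V * theta / Dc)

mu = mu0 + a * np.log(V / V0) + b * np.log(V0 * theta / Dc)
mu_ss = mu0 + (a - b) * np.log(V / V0)  # expected steady-state friction at V
```

After the velocity step, the state variable relaxes over the slip distance Dc and the friction coefficient settles at its new steady-state value, reproducing the transient-then-steady behavior such models are used to capture.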
Complexity regularized hydrological model selection
Pande, S.; Arkesteijn, L.; Bastidas, L.A.
2014-01-01
This paper uses a recently proposed measure of hydrological model complexity in a model selection exercise. It demonstrates that a robust hydrological model is selected by penalizing model complexity while maximizing a model performance measure. This especially holds when limited data is available.
Complexity Variability Assessment of Nonlinear Time-Varying Cardiovascular Control
Valenza, Gaetano; Citi, Luca; Garcia, Ronald G.; Taylor, Jessica Noggle; Toschi, Nicola; Barbieri, Riccardo
2017-01-01
The application of complex systems theory to physiology and medicine has provided meaningful information about the nonlinear aspects underlying the dynamics of a wide range of biological processes and their disease-related aberrations. However, no studies have investigated whether meaningful information can be extracted by quantifying second-order moments of time-varying cardiovascular complexity. To this extent, we introduce a novel mathematical framework termed complexity variability, in which the variance of instantaneous Lyapunov spectra estimated over time serves as a reference quantifier. We apply the proposed methodology to four exemplary studies involving disorders which stem from cardiology, neurology and psychiatry: Congestive Heart Failure (CHF), Major Depression Disorder (MDD), Parkinson’s Disease (PD), and Post-Traumatic Stress Disorder (PTSD) patients with insomnia under a yoga training regime. We show that complexity assessments derived from simple time-averaging are not able to discern pathology-related changes in autonomic control, and we demonstrate that between-group differences in measures of complexity variability are consistent across pathologies. Pathological states such as CHF, MDD, and PD are associated with an increased complexity variability when compared to healthy controls, whereas wellbeing derived from yoga in PTSD is associated with lower time-variance of complexity. PMID:28218249
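The core idea, tracking the variance of a complexity index over time rather than only its average, can be illustrated on a toy system. This sketch uses the logistic map and a windowed largest-Lyapunov-exponent estimate, not the instantaneous Lyapunov spectra of the paper:

```python
import numpy as np

def windowed_lyapunov(r_series, x0=0.3, win=200):
    # Largest Lyapunov exponent of the logistic map x -> r x (1 - x),
    # estimated as the mean log-derivative over consecutive windows.
    x, logs, lams = x0, [], []
    for r in r_series:
        x = r * x * (1 - x)
        logs.append(np.log(abs(r * (1 - 2 * x))))
        if len(logs) == win:
            lams.append(np.mean(logs))
            logs = []
    return np.array(lams)

N = 20000
steady = windowed_lyapunov(np.full(N, 3.9))          # stationary dynamics
drift = windowed_lyapunov(np.linspace(3.6, 4.0, N))  # time-varying dynamics

# "Complexity variability": second-order moment of complexity over time
cv_steady, cv_drift = steady.var(), drift.var()
```

The drifting regime shows a much larger variance of the windowed exponent than the stationary one, which is the kind of second-order information the paper argues is lost by simple time-averaging.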
Variable agreement among experts regarding Mycobacterium avium complex lung disease.
Marras, Theodore K; Prevots, D Rebecca; Jamieson, Frances B; Winthrop, Kevin L
2015-02-01
Data regarding many clinical aspects of pulmonary Mycobacterium avium complex (pMAC) are lacking. Guidelines rely substantially upon expert opinion, integrated through face-to-face meetings that variably weight individual opinions. We surveyed North American non-tuberculous mycobacteria experts regarding clinical aspects of pMAC using Delphi methods. Nineteen of 26 invited experts (73%) responded, with extensive variability. Convergence could not be reached for most questions, and respondents described extensive uncertainty around specific issues. The findings underscore an urgent need for more research.
Modeling Interconnect Variability Using Efficient Parametric Model Order Reduction
Li, Peng; Li, Xin; Pileggi, Lawrence T; Nassif, Sani R
2011-01-01
Assessing IC manufacturing process fluctuations and their impacts on IC interconnect performance has become unavoidable for modern DSM designs. However, the construction of parametric interconnect models is often hampered by the rapid increase in computational cost and model complexity. In this paper we present an efficient yet accurate parametric model order reduction algorithm for addressing the variability of IC interconnect performance. The efficiency of the approach lies in a novel combination of low-rank matrix approximation and multi-parameter moment matching. The complexity of the proposed parametric model order reduction is as low as that of a standard Krylov subspace method when applied to a nominal system. Under the projection-based framework, our algorithm also preserves the passivity of the resulting parametric models.
Rainfall variability modelling in Rwanda
Nduwayezu, E.; Kanevski, M.; Jaboyedoff, M.
2012-04-01
Support to climate change adaptation is a priority at many international organisations' meetings. But is the international approach to adaptation consistent with field realities in developing countries? In Rwanda, the main problems will be heavy rain and/or a long dry season. Four rainfall seasons have been identified, corresponding to the four thermal seasons of the southern hemisphere: the normal season (summer), the rainy season (autumn), the dry season (winter), and the normo-rainy season (spring). The spatial decrease in rainfall from west to east, especially in October (spring) and February (summer), suggests an "Atlantic monsoon" influence, while a homogeneous spatial rainfall distribution suggests an "inter-tropical front" mechanism. The torrential rainfall that occurs every year in Rwanda disrupts circulation for many days, damages houses and, more seriously, causes heavy loss of life. All districts are affected by bad weather (heavy rain), but the costs of such events are highest in mountain districts. The objective of the current research is to evaluate the potential rainfall risk by applying advanced geospatial modelling tools in Rwanda: geostatistical predictions and simulations, machine learning algorithms (different types of neural networks), and GIS. The research will include rainfall variability mapping and probabilistic analyses of extreme events.
Toeplitz Operators with BMO Symbols of Several Complex Variables
Zhong Hua HE; Guang Fu CAO
2012-01-01
In this note we prove that the boundedness and compactness of the Toeplitz operator on the Bergman space L2a (Bn) for several complex variables with a BMO1 symbol is completely determined by the boundary behavior of its Berezin transform.
Genetic variability for tuber yield, quality, and virus disease complex ...
Genetic variability for tuber yield, quality, and virus disease complex traits in Uganda ... Silk and Sowola, which showed high flowering ability, failed to fertilise and set ... Up to five genes may be involved in β-carotene synthesis and probably in ...
Akdim, Mohamed Reda
2003-09-01
Nowadays plasmas are used for various applications such as the fabrication of silicon solar cells, integrated circuits, coatings, and dental cleaning. In the case of a processing plasma, e.g. for the fabrication of amorphous silicon solar cells, a mixture of silane and hydrogen gas is injected into a reactor. These gases are decomposed by making a plasma. A plasma with a low degree of ionization (typically 10⁻⁵) is usually made in a reactor containing two electrodes driven by a radio-frequency (RF) power source in the megahertz range. Under the right circumstances the radicals, neutrals, and ions can react further to produce nanometer-sized dust particles. The particles can stick to the surface and thereby contribute to a higher deposition rate. Another possibility is that the nanometer-sized particles coagulate and form larger micron-sized particles. These particles obtain a high negative charge, due to their large radius, and are usually trapped in a radio-frequency plasma. The electric field present in the discharge sheaths causes the entrapment. Such plasmas are called dusty or complex plasmas. In this thesis numerical models are presented which describe dusty plasmas in reactive and nonreactive plasmas. We started with the development of a simple one-dimensional silane fluid model in which a dusty radio-frequency silane/hydrogen discharge is simulated. In the model, discharge quantities like the fluxes, densities, and electric field are calculated self-consistently. A radius and an initial density profile for the spherical dust particles are given, and the charge and the density of the dust are calculated with an iterative method. During the transport of the dust, its charge is kept constant in time. The dust influences the electric field distribution through its charge and the density of the plasma through recombination of positive ions and electrons at its surface. In the model this process gives an extra production of silane radicals, since the growth of dust is
Linear latent variable models: the lava-package
Holst, Klaus Kähler; Budtz-Jørgensen, Esben
2013-01-01
An R package for specifying and estimating linear latent variable models is presented. The philosophy of the implementation is to separate the model specification from the actual data, which leads to a dynamic and easy way of modeling complex hierarchical structures. Several advanced features are implemented, including robust standard errors for clustered correlated data, multigroup analyses, non-linear parameter constraints, inference with incomplete data, maximum likelihood estimation with censored and binary observations, and instrumental variable estimators. In addition an extensive simulation...
Variable-Complexity Multidisciplinary Optimization on Parallel Computers
Grossman, Bernard; Mason, William H.; Watson, Layne T.; Haftka, Raphael T.
1998-01-01
This report covers work conducted under grant NAG1-1562 for the NASA High Performance Computing and Communications Program (HPCCP) from December 7, 1993, to December 31, 1997. The objective of the research was to develop new multidisciplinary design optimization (MDO) techniques which exploit parallel computing to reduce the computational burden of aircraft MDO. The design of the High-Speed Civil Transport (HSCT) aircraft was selected as a test case to demonstrate the utility of our MDO methods. The three major tasks of this research grant were: (1) development of parallel multipoint approximation methods for the aerodynamic design of the HSCT; (2) use of parallel multipoint approximation methods for structural optimization of the HSCT; (3) mathematical and algorithmic development, including support in the integration of parallel computation, for items (1) and (2). These tasks have been accomplished with the development of a response surface methodology that incorporates multi-fidelity models. For the aerodynamic design we were able to optimize with up to 20 design variables, using hundreds of expensive Euler analyses together with thousands of inexpensive linear theory simulations. We have thereby demonstrated the application of CFD to a large aerodynamic design problem. For predicting structural weight we were able to combine hundreds of structural optimizations of refined finite element models with thousands of optimizations based on coarse models. Computations have been carried out on the Intel Paragon with up to 128 nodes. The parallel computation allowed us to perform combined aerodynamic-structural optimization using state-of-the-art models of complex aircraft configurations.
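The multi-fidelity response surface idea can be sketched in a few lines: fit a cheap correction surface to the discrepancy between a low-fidelity and a high-fidelity model at a handful of expensive samples. Both model functions below are made up for illustration and stand in for the report's linear-theory and Euler analyses:

```python
import numpy as np

def f_hi(x):
    # Stand-in for an expensive high-fidelity analysis (hypothetical)
    return np.sin(3 * x) + 0.5 * x

def f_lo(x):
    # Stand-in for a cheap low-fidelity approximation (hypothetical)
    return np.sin(3 * x)

# Fit a quadratic response surface to the hi-lo discrepancy at 5 samples
xs = np.linspace(0.0, 1.0, 5)
coef = np.polyfit(xs, f_hi(xs) - f_lo(xs), 2)

def f_vf(x):
    # Variable-fidelity surrogate: cheap model plus correction surface
    return f_lo(x) + np.polyval(coef, x)

xt = np.linspace(0.0, 1.0, 50)
err_lo = np.max(np.abs(f_hi(xt) - f_lo(xt)))
err_vf = np.max(np.abs(f_hi(xt) - f_vf(xt)))
```

The corrected surrogate can then be queried thousands of times inside an optimizer at low-fidelity cost while tracking the high-fidelity model far more closely than the cheap model alone.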
Limited dependent variable models for panel data
Charlier, E.
1997-01-01
Many economic phenomena require limited dependent variable models for an appropriate treatment. In addition, panel data models allow the inclusion of unobserved individual-specific effects. These models are combined in this thesis. Distributional assumptions in the limited dependent variable models are
BEYOND SEM: GENERAL LATENT VARIABLE MODELING
Muthén, Bengt O
2002-01-01
This article gives an overview of statistical analysis with latent variables. Using traditional structural equation modeling as a starting point, it shows how the idea of latent variables captures a wide variety of statistical concepts...
Zhang, Songchuan; Xia, Youshen; Wang, Jun
2015-12-01
In this paper, we present a complex-valued projection neural network for solving constrained convex optimization problems of real functions with complex variables, as an extension of real-valued projection neural networks. Theoretically, by developing results on complex-valued optimization techniques, we prove that the complex-valued projection neural network is globally stable and convergent to the optimal solution. Obtained results are completely established in the complex domain and thus significantly generalize existing results of the real-valued projection neural networks. Numerical simulations are presented to confirm the obtained results and effectiveness of the proposed complex-valued projection neural network.
Coevolution of variability models and related software artifacts
Passos, Leonardo; Teixeira, Leopoldo; Dinztner, Nicolas;
2015-01-01
Variant-rich software systems offer a large degree of customization, allowing users to configure the target system according to their preferences and needs. Facing high degrees of variability, these systems often employ variability models to explicitly capture user-configurable features (e... to the evolution of different kinds of software artifacts, it is not surprising that industry reports existing tools and solutions ineffective, as they do not handle the complexity found in practice. Attempting to mitigate this overall lack of knowledge and to support tool builders with insights on how variability models coevolve with other artifact types, we study a large and complex real-world variant-rich software system: the Linux kernel. Specifically, we extract variability-coevolution patterns capturing changes in the variability model of the Linux kernel with subsequent changes in Makefiles and C source...
Cardinality-dependent Variability in Orthogonal Variability Models
Mærsk-Møller, Hans Martin; Jørgensen, Bo Nørregaard
2012-01-01
During our work on developing and running a software product line for eco-sustainable greenhouse-production software tools, which currently has three product members, we have identified a need for extending the notation of the Orthogonal Variability Model (OVM) to support what we refer to as cardinality-dependent variability...
Variable Fidelity Aeroelastic Toolkit - Structural Model Project
National Aeronautics and Space Administration — The proposed innovation is a methodology to incorporate variable fidelity structural models into steady and unsteady aeroelastic and aeroservoelastic analyses in...
Appropriate complexity landscape modeling
Larsen, Laurel G.; Eppinga, Maarten B.; Passalacqua, Paola; Getz, Wayne M.; Rose, Kenneth A.; Liang, Man
2016-01-01
Advances in computing technology, new and ongoing restoration initiatives, concerns about climate change's effects, and the increasing interdisciplinarity of research have encouraged the development of landscape-scale mechanistic models of coupled ecological-geophysical systems. However, communicati
Mitochondrial genome variability within the Candida parapsilosis species complex.
Valach, Matus; Pryszcz, Leszek P; Tomaska, Lubomir; Gacser, Attila; Gabaldón, Toni; Nosek, Jozef
2012-09-01
The Candida parapsilosis species complex includes three closely related species, namely C. parapsilosis (sensu stricto), C. orthopsilosis, and C. metapsilosis. Unlike most other yeast lineages, members of this species complex possess a linear mitochondrial genome, yet a circularized mutant form has been identified in strains of C. orthopsilosis and C. metapsilosis. To investigate the underlying variability, we performed comparative analyses of the complete mitochondrial DNA sequences in a collection of strains. Our results demonstrate that, in contrast to C. parapsilosis and C. metapsilosis, C. orthopsilosis exhibits remarkably high nucleotide diversity whose pattern is consistent with intraspecific genetic exchange.
New synchronization analysis for complex networks with variable delays
Zhang Hua-Guang; Gong Da-Wei; Wang Zhan-Shan
2011-01-01
This paper deals with the issue of synchronization of delayed complex networks. Differing from previous results, the delay interval [0, d(t)] is divided into variable subintervals by employing a new method of weighting delays. Thus, new synchronization criteria for complex networks with time-varying delays are derived by applying this weighting-delay method and introducing some free weighting matrices. The obtained results prove to be less conservative than previous ones. The sufficient conditions for asymptotical synchronization are derived in the form of linear matrix inequalities, which are easy to verify. Finally, several simulation examples are provided to show the effectiveness of the proposed results.
New complex variable meshless method for advection-diffusion problems
Wang Jian-Fei; Cheng Yu-Min
2013-01-01
In this paper, an improved complex variable meshless method (ICVMM) for two-dimensional advection-diffusion problems is developed based on the improved complex variable moving least-squares (ICVMLS) approximation. The equivalent functional of two-dimensional advection-diffusion problems is formed, the variational method is used to obtain the equation system, and the penalty method is employed to impose the essential boundary conditions. The difference method for two-point boundary value problems is used to obtain the discrete equations. Then the corresponding formulas of the ICVMM for advection-diffusion problems are presented. Two numerical examples with different node distributions are used to validate and investigate the accuracy and efficiency of the new method. It is shown that the ICVMM is very effective for advection-diffusion problems and has good convergence, accuracy, and computational efficiency.
Variable structure control of complex systems analysis and design
Yan, Xing-Gang; Edwards, Christopher
2017-01-01
This book systematizes recent research work on variable-structure control. It is self-contained, presenting necessary mathematical preliminaries so that the theoretical developments can be easily understood by a broad readership. The text begins with an introduction to the fundamental ideas of variable-structure control pertinent to their application in complex nonlinear systems. In the core of the book, the authors lay out an approach, suitable for a large class of systems, that deals with system uncertainties with nonlinear bounds. Its treatment of complex systems in which limited measurement information is available makes the results developed convenient to implement. Various case-study applications are described, from aerospace, through power systems to river pollution control with supporting simulations to aid the transition from mathematical theory to engineering practicalities. The book addresses systems with nonlinearities, time delays and interconnections and considers issues such as stabilization, o...
Complex variable element-free Galerkin method for viscoelasticity problems
Cheng Yu-Min; Li Rong-Xin; Peng Miao-Juan
2012-01-01
Based on the complex variable moving least-squares (CVMLS) approximation, the complex variable element-free Galerkin (CVEFG) method for two-dimensional viscoelasticity problems under the creep condition is presented in this paper. The Galerkin weak form is employed to obtain the equation system, and the penalty method is used to apply the essential boundary conditions; the corresponding formulae of the CVEFG method for two-dimensional viscoelasticity problems under the creep condition are then obtained. Compared with the element-free Galerkin (EFG) method with the same node distribution, the CVEFG method has higher precision, and to obtain similar precision the CVEFG method has greater computational efficiency. Some numerical examples are given to demonstrate the validity and the efficiency of the method.
Handbook of latent variable and related models
Lee, Sik-Yum
2011-01-01
This Handbook covers latent variable models, a flexible class of models for analyzing multivariate data and exploring relationships among observed and latent variables. It covers a wide class of important models; the models and statistical methods described provide tools for analyzing a wide spectrum of complicated data; illustrative examples with real data sets from business, education, medicine, public health, and sociology are included; and the use of a wide variety of statistical, computational, and mathematical techniques is demonstrated.
Linear latent variable models: the lava-package
Holst, Klaus Kähler; Budtz-Jørgensen, Esben
2013-01-01
An R package for specifying and estimating linear latent variable models is presented. The philosophy of the implementation is to separate the model specification from the actual data, which leads to a dynamic and easy way of modeling complex hierarchical structures. Several advanced features are...... interface covering a broad range of non-linear generalized structural equation models is described. The model and software are demonstrated in data of measurements of the serotonin transporter in the human brain....
A Core Language for Separate Variability Modeling
Iosif-Lazăr, Alexandru Florin; Wasowski, Andrzej; Schaefer, Ina
2014-01-01
Separate variability modeling adds variability to a modeling language without requiring modifications of the language or the supporting tools. We define a core language for separate variability modeling using a single kind of variation point to define transformations of software artifacts in object...... hierarchical dependencies between variation points via copying and flattening. Thus, we reduce a model with intricate dependencies to a flat executable model transformation consisting of simple unconditional local variation points. The core semantics is extremely concise: it boils down to two operational rules...
Complexation of Plutonium (IV) With Sulfate At Variable Temperatures
Y. Xia; J.I. Friese; D.A. Moore; P.P. Bachelor; L. Rao
2006-10-05
The complexation of plutonium(IV) with sulfate at variable temperatures has been investigated by the solvent extraction method. A NaBrO₃ solution was used as a holding oxidant to maintain the plutonium(IV) oxidation state throughout the experiments. The distribution ratio of Pu(IV) between the organic and aqueous phases was found to decrease as the concentration of sulfate was increased. Stability constants of the 1:1 and 1:2 Pu(IV)-HSO₄⁻ complexes, dominant in the aqueous phase, were calculated from the effect of [HSO₄⁻] on the distribution ratio. The enthalpy and entropy of complexation were calculated from the stability constants at different temperatures using the van 't Hoff equation.
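The thermodynamic step in the last sentence follows the standard relations, written here for a generic stability constant β measured at temperature T; this is textbook material, not specific to this report:

```latex
% Gibbs energy of complexation from the stability constant
\Delta G(T) = -RT\,\ln\beta(T)

% van 't Hoff equation: the slope of ln(beta) versus 1/T gives the enthalpy
\frac{\mathrm{d}\ln\beta}{\mathrm{d}(1/T)} = -\frac{\Delta H}{R}

% Entropy of complexation from the other two quantities
\Delta S = \frac{\Delta H - \Delta G(T)}{T}
```

Fitting ln β against 1/T over the measured temperatures thus yields ΔH from the slope, and ΔS follows at any temperature in the range.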
The Possibility Using the Power Production Function of Complex Variable for Economic Forecasting
Sergey Gennadyevich Svetunkov
2016-09-01
The possibility of dynamic analysis and forecasting of production results using power production functions of complex variables with real coefficients is considered. This model expands the arsenal of instrumental methods and allows multivariate production forecasts that are unattainable by methods based on real variables, since functions of complex variables model production differently than models of real variables do. The values of the coefficients of the power production function of complex variables can be calculated for each statistical observation. This makes it possible to track the change of the coefficients over time, to analyze this trend, and to predict the values of the coefficients for a given term, thereby predicting the form of the production function and hence the operating results. Thus, a model of the production function with variable coefficients is introduced into scientific circulation. With this model, the inverse forecasting problem might also be solved, such as determining the quantities of labor and capital needed to achieve the desired operational results. The study is based on the principles of the modern methodology of complex-valued economics, one section of which concerns complex-valued models of production functions. The possibility of economic forecasting is tested on the example of the UK economy. The results of this prediction are compared with forecasts obtained by other methods, leading to the conclusion that the proposed approach and forecasting method are effective at the macro level of production systems. A complex-valued power model of the production function is recommended for the multivariate prediction of sustainable production systems: the global economy, the economies of individual countries, major industries, and regions.
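One way to see why the coefficients can be computed per observation: if a single complex resource z = K + iL is mapped to a complex result w by a pure power law w = z^p, then p follows from complex logarithms one observation at a time. The model form and the numbers below are a hypothetical illustration, not the paper's estimated specification:

```python
import cmath

# Hypothetical data: complex resource z = capital + i*labour and complex
# result w, generated here from a known exponent that drifts over time.
true_p = [1.10, 1.12, 1.15]
z_obs = [complex(5.0, 3.0), complex(5.5, 3.1), complex(6.0, 3.3)]
w_obs = [z ** p for z, p in zip(z_obs, true_p)]

# w = z**p  =>  p = ln(w) / ln(z)  (principal branch), per observation
recovered = [cmath.log(w) / cmath.log(z) for z, w in zip(z_obs, w_obs)]
```

Tracking the recovered exponents over time then turns forecasting the production function's form into an ordinary trend-forecasting problem for the coefficients themselves.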
Experimental falsification of Leggett's nonlocal variable model.
Branciard, Cyril; Ling, Alexander; Gisin, Nicolas; Kurtsiefer, Christian; Lamas-Linares, Antia; Scarani, Valerio
2007-11-23
Bell's theorem guarantees that no model based on local variables can reproduce quantum correlations. Also, some models based on nonlocal variables, if subject to apparently "reasonable" constraints, may fail to reproduce quantum physics. In this Letter, we introduce a family of inequalities, which use a finite number of measurement settings, and which therefore allow testing Leggett's nonlocal model versus quantum physics. Our experimental data falsify Leggett's model and are in agreement with quantum predictions.
Decision variables analysis for structured modeling
潘启树; 赫东波; 张洁; 胡运权
2002-01-01
Structured modeling is the most commonly used modeling method, but it is not well adapted to significant changes in environmental conditions. Therefore, Decision Variables Analysis (DVA), a new modeling method, is proposed to deal with linear programming modeling in changing environments. In variant linear programming, the most complicated relationships are those among decision variables. DVA classifies the decision variables into different levels using different index sets, and divides a model into different elements so that any change affects only part of the whole model. DVA takes into consideration the complicated relationships among decision variables at different levels, and can therefore successfully solve modeling problems in dramatically changing environments.
Generalized latent variable modeling: multilevel, longitudinal, and structural equation models
Skrondal, Anders
2004-01-01
METHODOLOGY. The Omnipresence of Latent Variables: Introduction; 'True' variable measured with error; Hypothetical constructs; Unobserved heterogeneity; Missing values and counterfactuals; Latent responses; Generating flexible distributions; Combining information; Summary. Modeling Different Response Processes: Introduction; Generalized linear models; Extensions of generalized linear models; Latent response formulation; Modeling durations or survival; Summary and further reading. Classical Latent Variable Models: Introduction; Multilevel regression models; Factor models and item respons…
Modelling Canopy Flows over Complex Terrain
Grant, Eleanor R.; Ross, Andrew N.; Gardiner, Barry A.
2016-06-01
Recent studies of flow over forested hills have been motivated by a number of important applications including understanding CO_2 and other gaseous fluxes over forests in complex terrain, predicting wind damage to trees, and modelling wind energy potential at forested sites. Current modelling studies have focussed almost exclusively on highly idealized, and usually fully forested, hills. Here, we present model results for a site on the Isle of Arran, Scotland with complex terrain and heterogeneous forest canopy. The model uses an explicit representation of the canopy and a 1.5-order turbulence closure for flow within and above the canopy. The validity of the closure scheme is assessed using turbulence data from a field experiment before comparing predictions of the full model with field observations. For near-neutral stability, the results compare well with the observations, showing that such a relatively simple canopy model can accurately reproduce the flow patterns observed over complex terrain and realistic, variable forest cover, while at the same time remaining computationally feasible for real case studies. The model allows closer examination of the flow separation observed over complex forested terrain. Comparisons with model simulations using a roughness length parametrization show significant differences, particularly with respect to flow separation, highlighting the need to explicitly model the forest canopy if detailed predictions of near-surface flow around forests are required.
Complex system modelling for veterinary epidemiology.
Lanzas, Cristina; Chen, Shi
2015-02-01
The use of mathematical models has a long tradition in infectious disease epidemiology. The nonlinear dynamics and complexity of pathogen transmission pose challenges in understanding its key determinants, in identifying critical points, and designing effective mitigation strategies. Mathematical modelling provides tools to explicitly represent the variability, interconnectedness, and complexity of systems, and has contributed to numerous insights and theoretical advances in disease transmission, as well as to changes in public policy, health practice, and management. In recent years, our modelling toolbox has considerably expanded due to the advancements in computing power and the need to model novel data generated by technologies such as proximity loggers and global positioning systems. In this review, we discuss the principles, advantages, and challenges associated with the most recent modelling approaches used in systems science, the interdisciplinary study of complex systems, including agent-based, network and compartmental modelling. Agent-based modelling is a powerful simulation technique that considers the individual behaviours of system components by defining a set of rules that govern how individuals ("agents") within given populations interact with one another and the environment. Agent-based models have become a recent popular choice in epidemiology to model hierarchical systems and address complex spatio-temporal dynamics because of their ability to integrate multiple scales and datasets.
Emergence of dynamical complexity related to human heart rate variability
Chang, Mei-Chu; Peng, C.-K.; Stanley, H. Eugene
2014-12-01
We apply the refined composite multiscale entropy (MSE) method to a one-dimensional directed small-world network composed of nodes whose states are binary and whose dynamics obey the majority rule. We find that the resulting fluctuating signal becomes dynamically complex. This dynamical complexity is caused (i) by the presence of both short-range connections and long-range shortcuts and (ii) by how well the system can adapt to the noisy environment. By tuning the adaptability of the environment and the long-range shortcuts we can increase or decrease the dynamical complexity, thereby modeling trends found in the MSE of a healthy human heart rate in different physiological states. When the shortcut and adaptability values increase, the complexity in the system dynamics becomes uncorrelated.
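The multiscale entropy procedure referenced above can be sketched as coarse-graining followed by sample entropy, with the tolerance r fixed from the original series as in Costa's MSE; this is a simplified stand-in for the refined composite variant used in the paper, not the authors' code.

```python
import numpy as np

def sample_entropy(x, m=2, r=None):
    # Plain sample entropy: -log of the conditional probability that
    # sequences matching for m points also match for m + 1 points.
    x = np.asarray(x, float)
    if r is None:
        r = 0.2 * x.std()
    def count(mm):
        templ = np.array([x[i:i + mm] for i in range(len(x) - mm)])
        c = 0
        for i in range(len(templ)):
            d = np.max(np.abs(templ - templ[i]), axis=1)
            c += np.sum(d <= r) - 1          # exclude the self-match
        return c
    B, A = count(m), count(m + 1)
    return -np.log(A / B) if A > 0 and B > 0 else np.inf

def coarse_grain(x, scale):
    n = len(x) // scale
    return np.asarray(x[:n * scale], float).reshape(n, scale).mean(axis=1)

rng = np.random.default_rng(0)
noise = rng.standard_normal(2000)
r0 = 0.2 * noise.std()                        # r fixed at scale 1
mse = [sample_entropy(coarse_grain(noise, s), r=r0) for s in (1, 2, 4)]
print(mse)
```

For white noise, coarse-graining shrinks the variance while r stays fixed, so entropy falls with scale; correlated signals such as healthy heart rate behave differently, which is what the MSE curve diagnoses.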
Action-angle variables for complex projective space and semiclassical exactness
Oh, Phillial; Kim, Myung-Ho
1994-01-01
We construct the action-angle variables of a classical integrable model defined on complex projective phase space and calculate the quantum mechanical propagator in the coherent state path integral representation using the stationary phase approximation. We show that the resulting expression for the propagator coincides with the exact propagator which was obtained by solving the time-dependent Schrödinger equation.
Pluralistic Modeling of Complex Systems
Helbing, Dirk
2010-01-01
The modeling of complex systems such as ecological or socio-economic systems can be very challenging. Although various modeling approaches exist, they are generally neither compatible nor mutually consistent, and empirical data often do not allow one to decide which model is the right one, the best one, or the most appropriate one. Moreover, as the recent financial and economic crisis shows, relying on a single, idealized model can be very costly. This contribution tries to shed new light on problems that arise when complex systems are modeled. While the arguments can be transferred to many different systems, the related scientific challenges are illustrated for social, economic, and traffic systems. The contribution discusses issues that are sometimes overlooked and tries to overcome some frequent misunderstandings and controversies of the past. At the same time, it is highlighted how some long-standing scientific puzzles may be solved by considering non-linear models of heterogeneous agents with spatio-temporal inte...
Complex state variable- and disturbance observer-based current controllers for AC drives
Dal, Mehmet; Teodorescu, Remus; Blaabjerg, Frede
2013-01-01
… of the stator current. In order to improve the current control performance, an alternative current control strategy was proposed previously, aiming to avoid the undesired cross-coupling and non-linearities between the state variables. These effects are assumed to be disturbances arising in the closed-loop path, extracted by a disturbance observer and then injected into the current controller. In this study, a revised version of a disturbance observer-based controller and a well-known complex variable model-based design with a single set of complex poles are compared in terms of design aspects and performance … of the parameter and the cross-coupling effect. Moreover, it provides better performance and smooth, low-noise operation with respect to the complex variable controller.
Fractal and complexity measures of heart rate variability.
Perkiömäki, Juha S; Mäkikallio, Timo H; Huikuri, Heikki V
2005-01-01
Heart rate variability has been analyzed conventionally with time and frequency domain methods, which measure the overall magnitude of RR interval fluctuations around its mean value or the magnitude of fluctuations in some predetermined frequencies. Analysis of heart rate dynamics by methods based on chaos theory and nonlinear system theory has gained recent interest. This interest is based on observations suggesting that the mechanisms involved in cardiovascular regulation likely interact with each other in a nonlinear way. Furthermore, recent observational studies suggest that some indexes describing nonlinear heart rate dynamics, such as fractal scaling exponents, may provide more powerful prognostic information than the traditional heart rate variability indexes. In particular, the short-term fractal scaling exponent measured by the detrended fluctuation analysis method has predicted fatal cardiovascular events in various populations. Approximate entropy, a nonlinear index of heart rate dynamics that describes the complexity of RR interval behavior, has provided information on the vulnerability to atrial fibrillation. Many other nonlinear indexes, e.g., the Lyapunov exponent and correlation dimensions, also give information on the characteristics of heart rate dynamics, but their clinical utility is not well established. Although concepts of chaos theory, fractal mathematics, and complexity measures of heart rate behavior in relation to cardiovascular physiology or various cardiovascular events are still far from clinical medicine, they are a fruitful area for future research to expand our knowledge concerning the behavior of cardiovascular oscillations in normal healthy conditions as well as in disease states.
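The short-term fractal scaling exponent mentioned above comes from detrended fluctuation analysis: integrate the series, detrend it in windows of size n, and take the slope of log F(n) against log n. A minimal sketch, with illustrative window sizes rather than the clinical alpha-1 convention:

```python
import numpy as np

def dfa_alpha(rr, scales=(4, 6, 8, 12, 16)):
    """Detrended fluctuation analysis: returns the scaling exponent,
    i.e. the slope of log F(n) versus log n (a simplified sketch)."""
    y = np.cumsum(rr - np.mean(rr))            # integrated series
    F = []
    for n in scales:
        m = len(y) // n
        f2 = 0.0
        for i in range(m):                     # linearly detrend each window
            seg = y[i * n:(i + 1) * n]
            t = np.arange(n)
            coef = np.polyfit(t, seg, 1)
            f2 += np.mean((seg - np.polyval(coef, t)) ** 2)
        F.append(np.sqrt(f2 / m))
    return np.polyfit(np.log(scales), np.log(F), 1)[0]

rng = np.random.default_rng(1)
white = rng.standard_normal(1000)
print(round(dfa_alpha(white), 2))   # near 0.5 for uncorrelated noise
```

An exponent near 0.5 indicates uncorrelated fluctuations, while healthy heart rate typically shows values closer to 1, which is why departures from that range carry prognostic information.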
Learning dynamics of a single polar variable complex-valued neuron.
Nitta, Tohru
2015-05-01
This letter investigates the characteristics of the complex-valued neuron model with parameters represented by polar coordinates (called polar variable complex-valued neuron). The parameters of the polar variable complex-valued neuron are unidentifiable. The plateau phenomenon can occur during learning of the polar variable complex-valued neuron. Furthermore, computer simulations suggest that a single polar variable complex-valued neuron has the following characteristics in the case of using the steepest gradient-descent method with square error: (1) unidentifiable parameters (singular points) degrade the learning speed and (2) a plateau can occur during learning. When the weight is attracted to the singular point, the learning tends to become stuck. However, computer simulations also show that the steepest gradient-descent method with amplitude-phase error and the complex-valued natural gradient method could reduce the effects of the singular points. The learning dynamics near singular points depends on the error functions and the training algorithms used.
Random Effect and Latent Variable Model Selection
Dunson, David B
2008-01-01
Presents various methods for accommodating model uncertainty in random effects and latent variable models. This book focuses on frequentist likelihood ratio and score tests for zero variance components. It also focuses on Bayesian methods for random effects selection in linear mixed effects and generalized linear mixed models
Sampling Weights in Latent Variable Modeling
Asparouhov, Tihomir
2005-01-01
This article reviews several basic statistical tools needed for modeling data with sampling weights that are implemented in Mplus Version 3. These tools are illustrated in simulation studies for several latent variable models including factor analysis with continuous and categorical indicators, latent class analysis, and growth models. The…
A Model for Positively Correlated Count Variables
Møller, Jesper; Rubak, Ege Holger
2010-01-01
An α-permanental random field is, briefly speaking, a model for a collection of non-negative integer-valued random variables with positive associations. Though such models possess many appealing probabilistic properties, many statisticians seem unaware of α-permanental random fields and their poten…
Model and Variable Selection Procedures for Semiparametric Time Series Regression
Risa Kato
2009-01-01
Semiparametric regression models are very useful for time series analysis. They facilitate the detection of features resulting from external interventions. The complexity of semiparametric models poses new challenges for issues of nonparametric and parametric inference and model selection that frequently arise from time series data analysis. In this paper, we propose penalized least squares estimators which can simultaneously select significant variables and estimate unknown parameters. An innovative class of variable selection procedures is proposed to select significant variables and basis functions in a semiparametric model. The asymptotic normality of the resulting estimators is established. Information criteria for model selection are also proposed. We illustrate the effectiveness of the proposed procedures with numerical simulations.
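Penalized least squares with an L1 penalty is one concrete instance of the simultaneous variable selection and estimation described above (the paper's own penalty and basis-function selection are more elaborate). A minimal coordinate-descent sketch on fabricated data:

```python
import numpy as np

def lasso_cd(X, y, lam, n_iter=200):
    # Penalized least squares via cyclic coordinate descent with
    # soft-thresholding; coefficients driven exactly to zero are
    # the "deselected" variables.
    n, p = X.shape
    beta = np.zeros(p)
    for _ in range(n_iter):
        for j in range(p):
            r = y - X @ beta + X[:, j] * beta[j]   # partial residual
            rho = X[:, j] @ r / n
            z = (X[:, j] @ X[:, j]) / n
            beta[j] = np.sign(rho) * max(abs(rho) - lam, 0.0) / z
    return beta

rng = np.random.default_rng(2)
X = rng.standard_normal((200, 6))
true_beta = np.array([2.0, 0.0, -1.5, 0.0, 0.0, 0.0])
y = X @ true_beta + 0.1 * rng.standard_normal(200)
beta = lasso_cd(X, y, lam=0.1)
print(np.round(beta, 2))    # nonzero mostly on variables 0 and 2
```

The penalty shrinks all coefficients slightly (a known bias of the L1 penalty) but zeroes out the irrelevant ones, which is the selection behavior the information criteria in the paper are used to tune.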
Effects of head-down bed rest on complex heart rate variability: Response to LBNP testing
Goldberger, Ary L.; Mietus, Joseph E.; Rigney, David R.; Wood, Margie L.; Fortney, Suzanne M.
1994-01-01
Head-down bed rest is used to model physiological changes during spaceflight. We postulated that bed rest would decrease the degree of complex physiological heart rate variability. We analyzed continuous heart rate data from digitized Holter recordings in eight healthy female volunteers (age 28-34 yr) who underwent a 13-day 6 deg head-down bed rest study with serial lower body negative pressure (LBNP) trials. Heart rate variability was measured on 4-min data sets using conventional time and frequency domain measures as well as with a new measure of signal 'complexity' (approximate entropy). Data were obtained pre-bed rest (control), during bed rest (day 4 and day 9 or 11), and 2 days post-bed rest (recovery). Tolerance to LBNP was significantly reduced on both bed rest days vs. pre-bed rest. Heart rate variability was assessed at peak LBNP. Heart rate approximate entropy was significantly decreased at day 4 and day 9 or 11, returning toward normal during recovery. Heart rate standard deviation and the ratio of high- to low-power frequency did not change significantly. We conclude that short-term bed rest is associated with a decrease in the complex variability of heart rate during LBNP testing in healthy young adult women. Measurement of heart rate complexity, using a method derived from nonlinear dynamics ('chaos theory'), may provide a sensitive marker of this loss of physiological variability, complementing conventional time and frequency domain statistical measures.
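The approximate entropy statistic used in this study can be sketched as Pincus's phi(m) − phi(m+1); the parameters m = 2 and r = 0.2·SD below are conventional choices, not necessarily those of the study:

```python
import numpy as np

def approx_entropy(x, m=2, r=None):
    # Approximate entropy: phi(m) - phi(m+1), with self-matches
    # included (the detail that distinguishes it from sample entropy).
    x = np.asarray(x, float)
    if r is None:
        r = 0.2 * x.std()
    def phi(mm):
        templ = np.array([x[i:i + mm] for i in range(len(x) - mm + 1)])
        c = []
        for t in templ:
            d = np.max(np.abs(templ - t), axis=1)
            c.append(np.mean(d <= r))        # fraction of matches
        return np.mean(np.log(c))
    return phi(m) - phi(m + 1)

rng = np.random.default_rng(3)
irregular = rng.standard_normal(500)          # complex, unpredictable
regular = np.sin(np.linspace(0, 20 * np.pi, 500))   # highly regular
print(approx_entropy(irregular) > approx_entropy(regular))
```

Lower values mean more regular, more predictable dynamics, which is the direction of change the bed-rest study reports.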
Integrating models that depend on variable data
Banks, A. T.; Hill, M. C.
2016-12-01
Models of human-Earth systems are often developed with the goal of predicting the behavior of one or more dependent variables from multiple independent variables, processes, and parameters. Often dependent variable values range over many orders of magnitude, which complicates evaluation of the fit of the dependent variable values to observations. Many metrics and optimization methods have been proposed to address dependent variable variability, with little consensus being achieved. In this work, we evaluate two such methods: log transformation (based on the dependent variable being log-normally distributed with a constant variance) and error-based weighting (based on a multi-normal distribution with variances that tend to increase as the dependent variable value increases). Error-based weighting has the advantage of encouraging model users to carefully consider data errors, such as measurement and epistemic errors, while log transformations can be a black box for typical users. Placing the log transformation into the statistical perspective of error-based weighting has not formerly been considered, to the best of our knowledge. To make the evaluation as clear and reproducible as possible, we use multiple linear regression (MLR). Simulations are conducted with MATLAB. The example represents stream transport of nitrogen with up to eight independent variables. The single dependent variable in our example has values that range over 4 orders of magnitude. Results are applicable to any problem for which individual or multiple data types produce a large range of dependent variable values. For this problem, the log transformation produced good model fit, while some formulations of error-based weighting worked poorly. Results support previous suggestions that error-based weighting derived from a constant coefficient of variation overemphasizes low values and degrades model fit to high values. Applying larger weights to the high values is inconsistent with the log
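The two treatments compared above can be illustrated with a toy regression whose dependent variable spans four decades; the data and error model are fabricated for illustration:

```python
import numpy as np

# Dependent variable spanning ~4 orders of magnitude with
# multiplicative (constant-CV) noise.
rng = np.random.default_rng(4)
x = rng.uniform(0, 4, 120)
y = 10 ** x * np.exp(0.1 * rng.standard_normal(120))

# (a) Log transformation: ordinary least squares on log10(y).
X = np.column_stack([np.ones_like(x), x])
beta_log = np.linalg.lstsq(X, np.log10(y), rcond=None)[0]

# (b) Error-based weighting: fit y = a * 10**x by weighted least
# squares with sd_i proportional to y_i (constant coefficient of
# variation), i.e. minimize sum(((y_i - a*g_i) / y_i)**2).
g = 10.0 ** x
a = np.sum(g / y) / np.sum((g / y) ** 2)

print(np.round(beta_log, 2))   # slope near 1, intercept near 0
print(round(a, 3))             # scale factor near 1
```

Here both approaches recover the underlying relationship because the noise really is multiplicative; the paper's point is that the choice matters, and CV-based weighting can degrade the fit to high values, when the assumed error structure is wrong.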
Magnetization patterns in ferromagnetic nanoelements as functions of complex variable.
Metlov, Konstantin L
2010-09-03
The assumption of a certain hierarchy of soft ferromagnet energy terms, realized in small enough flat nanoelements, allows us to obtain explicit expressions for their magnetization distributions. By minimizing the energy terms sequentially, from the most to the least important, magnetization distributions are expressed as solutions of the Riemann-Hilbert boundary value problem for a function of complex variable. A number of free parameters, corresponding to positions of vortices and antivortices, still remain in the expression. Thus, the presented approach is a factory of realistic Ritz functions for analytical (or numerical) micromagnetic calculations. Examples are given for multivortex magnetization distributions in a circular cylinder, and for two-dimensional domain walls in thin magnetic strips.
Computational models of complex systems
Dabbaghian, Vahid
2014-01-01
Computational and mathematical models provide us with the opportunities to investigate the complexities of real world problems. They allow us to apply our best analytical methods to define problems in a clearly mathematical manner and exhaustively test our solutions before committing expensive resources. This is made possible by assuming parameter(s) in a bounded environment, allowing for controllable experimentation, not always possible in live scenarios. For example, simulation of computational models allows the testing of theories in a manner that is both fundamentally deductive and experimental in nature. The main ingredients for such research ideas come from multiple disciplines and the importance of interdisciplinary research is well recognized by the scientific community. This book provides a window to the novel endeavours of the research communities to present their works by highlighting the value of computational modelling as a research tool when investigating complex systems. We hope that the reader...
Prediction models in complex terrain
Marti, I.; Nielsen, Torben Skov; Madsen, Henrik
2001-01-01
The objective of the work is to investigate the performance of HIRLAM in complex terrain when used as input to energy production forecasting models, and to develop a statistical model to adapt HIRLAM predictions to the wind farm. The features of the terrain, especially the topography, influence … are calculated using on-line measurements of power production as well as HIRLAM predictions as input, thus taking advantage of the auto-correlation which is present in the power production for shorter prediction horizons. Statistical models are used to describe the relationship between observed energy production … and HIRLAM predictions. The statistical models belong to the class of conditional parametric models. The models are estimated using local polynomial regression, but the estimation method is here extended to be adaptive in order to allow for slow changes in the system, e.g. caused by the annual variations…
Fourier Method for an Over-Determined Elliptic System with Several Complex Variables
Dao Qing DAI
2006-01-01
Two boundary value problems are investigated for an over-determined elliptic system with several complex variables in the polydisc. Necessary and sufficient conditions for the existence of finitely many linearly independent solutions and finitely many solvability conditions are derived. Moreover, the boundary value problem for any number of complex variables is treated in a unified way, and the essential difference between the case of one complex variable and that of several complex variables is revealed.
Complex Networks in Psychological Models
Wedemann, R. S.; Carvalho, L. S. A. V. D.; Donangelo, R.
We develop schematic, self-organizing, neural-network models to describe mechanisms associated with mental processes, by a neurocomputational substrate. These models are examples of real world complex networks with interesting general topological structures. Considering dopaminergic signal-to-noise neuronal modulation in the central nervous system, we propose neural network models to explain development of cortical map structure and dynamics of memory access, and unify different mental processes into a single neurocomputational substrate. Based on our neural network models, neurotic behavior may be understood as an associative memory process in the brain, and the linguistic, symbolic associative process involved in psychoanalytic working-through can be mapped onto a corresponding process of reconfiguration of the neural network. The models are illustrated through computer simulations, where we varied dopaminergic modulation and observed the self-organizing emergent patterns at the resulting semantic map, interpreting them as different manifestations of mental functioning, from psychotic through to normal and neurotic behavior, and creativity.
Nonparametric Bayesian Modeling of Complex Networks
Schmidt, Mikkel Nørgaard; Mørup, Morten
2013-01-01
Modeling structure in complex networks using Bayesian nonparametrics makes it possible to specify flexible model structures and infer the adequate model complexity from the observed data. This article provides a gentle introduction to nonparametric Bayesian modeling of complex networks: Using … for complex networks can be derived, and we point out relevant literature…
Gaussian mixture model of heart rate variability.
Tommaso Costa
Heart rate variability (HRV is an important measure of sympathetic and parasympathetic functions of the autonomic nervous system and a key indicator of cardiovascular condition. This paper proposes a novel method to investigate HRV, namely by modelling it as a linear combination of Gaussians. Results show that three Gaussians are enough to describe the stationary statistics of heart rate variability and to provide a straightforward interpretation of the HRV power spectrum. Comparisons have been made also with synthetic data generated from different physiologically based models, showing the plausibility of the Gaussian mixture parameters.
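A basic EM fit of a one-dimensional three-component Gaussian mixture, the kind of model the abstract describes for RR-interval statistics (a generic sketch on synthetic data, not the authors' code):

```python
import numpy as np

def em_gmm_1d(x, k=3, n_iter=100):
    # Expectation-maximization for a 1-D Gaussian mixture.
    # Means initialized at evenly spaced quantiles for stability.
    mu = np.quantile(x, (np.arange(k) + 0.5) / k)
    sd = np.full(k, x.std())
    pi = np.full(k, 1.0 / k)
    for _ in range(n_iter):
        # E-step: responsibilities of each component for each point.
        dens = pi * np.exp(-0.5 * ((x[:, None] - mu) / sd) ** 2) \
               / (sd * np.sqrt(2 * np.pi))
        resp = dens / dens.sum(axis=1, keepdims=True)
        # M-step: update mixture weights, means, and spreads.
        nk = resp.sum(axis=0)
        pi = nk / len(x)
        mu = (resp * x[:, None]).sum(axis=0) / nk
        sd = np.sqrt((resp * (x[:, None] - mu) ** 2).sum(axis=0) / nk)
    return pi, mu, sd

rng = np.random.default_rng(5)
x = np.concatenate([rng.normal(-3, 0.5, 300), rng.normal(0, 0.5, 300),
                    rng.normal(3, 0.5, 300)])
pi, mu, sd = em_gmm_1d(x, k=3)
print(np.round(np.sort(mu), 1))   # component means near -3, 0, 3
```

With HRV data the three recovered components would correspond to the interpretable pieces of the power spectrum the paper describes; here the three clusters are synthetic.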
Bayesian variable selection for latent class models.
Ghosh, Joyee; Herring, Amy H; Siega-Riz, Anna Maria
2011-09-01
In this article, we develop a latent class model with class probabilities that depend on subject-specific covariates. One of our major goals is to identify important predictors of latent classes. We consider methodology that allows estimation of latent classes while allowing for variable selection uncertainty. We propose a Bayesian variable selection approach and implement a stochastic search Gibbs sampler for posterior computation to obtain model-averaged estimates of quantities of interest such as marginal inclusion probabilities of predictors. Our methods are illustrated through simulation studies and application to data on weight gain during pregnancy, where it is of interest to identify important predictors of latent weight gain classes.
Complex fluids modeling and algorithms
Saramito, Pierre
2016-01-01
This book presents a comprehensive overview of the modeling of complex fluids, including many common substances, such as toothpaste, hair gel, mayonnaise, liquid foam, cement and blood, which cannot be described by Navier-Stokes equations. It also offers an up-to-date mathematical and numerical analysis of the corresponding equations, as well as several practical numerical algorithms and software solutions for the approximation of the solutions. It discusses industrial (molten plastics, forming process), geophysical (mud flows, volcanic lava, glaciers and snow avalanches), and biological (blood flows, tissues) modeling applications. This book is a valuable resource for undergraduate students and researchers in applied mathematics, mechanical engineering and physics.
The effect of muscle fatigue and low back pain on lumbar movement variability and complexity.
Bauer, C M; Rast, F M; Ernst, M J; Meichtry, A; Kool, J; Rissanen, S M; Suni, J H; Kankaanpää, M
2017-04-01
Changes in movement variability and complexity may reflect an adaptation strategy to fatigue. One unresolved question is whether this adaptation is hampered by the presence of low back pain (LBP). This study investigated whether changes in movement variability and complexity after fatigue are influenced by the presence of LBP. It is hypothesised that pain-free people and people suffering from LBP differ in their response to fatigue. The effect of an isometric endurance test on lumbar movement was tested in 27 pain-free participants and 59 participants suffering from LBP. Movement variability and complexity were quantified with %determinism and sample entropy of lumbar angular displacement and velocity. Generalized linear models were fitted for each outcome. Bayesian estimation of the group-fatigue effect with 95% highest posterior density intervals (95%HPDI) was performed. After fatiguing, %determinism decreased and sample entropy increased in the pain-free group compared to the LBP group. The corresponding group-fatigue effects were 3.7 (95%HPDI: 2.3-7.1) and -1.4 (95%HPDI: -2.7 to -0.1). These effects manifested in angular velocity, but not in angular displacement. The effects indicate that pain-free participants showed more complex and less predictable lumbar movement with a lower degree of structure in its variability following fatigue, while participants suffering from LBP did not. These may be physiological responses to avoid overload of fatigued tissue or to increase endurance, or a consequence of reduced movement control caused by fatigue.
Marginal Maximum Likelihood Estimation of a Latent Variable Model with Interaction
Cudeck, Robert; Harring, Jeffrey R.; du Toit, Stephen H. C.
2009-01-01
There has been considerable interest in nonlinear latent variable models specifying interaction between latent variables. Although it seems to be only slightly more complex than linear regression without the interaction, the model that includes a product of latent variables cannot be estimated by maximum likelihood assuming normality.…
Boukraa, S; Maillard, J-M
2012-01-01
Lattice statistical mechanics often provides a natural (holonomic) framework to perform singularity analysis with several complex variables that would, in a general mathematical framework, be too complex or could not be defined. Considering several Picard-Fuchs systems of two-variable "above" Calabi-Yau ODEs, associated with double hypergeometric series, we show that holonomic functions are a good framework for actually finding the singular manifolds. We then analyse the singular algebraic varieties of the n-fold integrals $\chi^{(n)}$, corresponding to the decomposition of the magnetic susceptibility of the anisotropic square Ising model. We revisit a set of Nickellian singularities that turns out to be a two-parameter family of elliptic curves. We then find a first set of non-Nickellian singularities for $\chi^{(3)}$ and $\chi^{(4)}$ that also turn out to be rational or elliptic curves. We underline the fact that these singular curves depend on the anisotropy of the Ising model. We address...
Modelling variability in hospital bed occupancy.
Harrison, Gary W; Shafer, Andrea; Mackay, Mark
2005-11-01
A stochastic version of the Harrison-Millard multistage model of the flow of patients through a hospital division is developed in order to model correctly not only the average but also the variability in occupancy levels, since it is the variability that makes planning difficult and high percent occupancy levels increase the risk of frequent overflows. The model is fit to one year of data from the medical division of an acute care hospital in Adelaide, Australia. Admissions can be modeled as a Poisson process with rates varying by day of the week and by season. Methods are developed to use the entire annual occupancy profile to estimate transition rate parameters when admission rates are not constant and to estimate rate parameters that vary by day of the week and by season, which are necessary for the model variability to be as large as in the data. The final model matches well the mean, standard deviation and autocorrelation function of the occupancy data and also six months of data not used to estimate the parameters. Repeated simulations are used to construct percentiles of the daily occupancy distributions and thus identify ranges of normal fluctuations and those that are substantive deviations from the past, and also to investigate the trade-offs between frequency of overflows and the percent occupancy for both fixed and flexible bed allocations. Larger divisions can achieve more efficient occupancy levels than smaller ones with the same frequency of overflows. Seasonal variations are more significant than day-of-the-week variations and variable discharge rates are more significant than variable admission rates in contributing to overflows.
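The ingredients described above (Poisson admissions with day-of-week rates, stochastic discharges, and repeated simulation to obtain occupancy percentiles and overflow frequency) can be sketched as follows; all rates and the bed count are invented for illustration, not the Adelaide estimates:

```python
import numpy as np

# Toy occupancy model: Poisson admissions varying by day of week,
# geometric length of stay (constant daily discharge probability).
rng = np.random.default_rng(6)
adm_rate = np.array([12, 11, 11, 10, 10, 7, 6], float)  # Mon..Sun
p_discharge = 1 / 5.0          # mean length of stay: 5 days
beds = 60                      # hypothetical bed allocation

def simulate(days=364):
    occ, occupied = [], 40
    for d in range(days):
        occupied -= rng.binomial(occupied, p_discharge)  # discharges
        occupied += rng.poisson(adm_rate[d % 7])         # admissions
        occ.append(occupied)
    return np.array(occ)

# Repeated runs give the daily occupancy distribution.
runs = np.stack([simulate() for _ in range(200)])
steady = runs[:, 30:]                        # discard warm-up period
print(int(np.percentile(steady, 95)))        # 95th-percentile occupancy
overflow = (steady > beds).mean()
print(round(overflow, 3))                    # fraction of days over capacity
```

Repeating the simulation and reading off percentiles is exactly how the paper constructs its "normal fluctuation" bands and its trade-off curves between percent occupancy and overflow frequency.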
Interpolation of climate variables and temperature modeling
Samanta, Sailesh; Pal, Dilip Kumar; Lohar, Debasish; Pal, Babita
2012-01-01
Geographic Information Systems (GIS) and modeling are becoming powerful tools in agricultural research and natural resource management. This study proposes an empirical methodology for modeling and mapping the monthly and annual air temperature using remote sensing and GIS techniques. The study area is Gangetic West Bengal and its neighborhood in eastern India, where a number of weather systems occur throughout the year. Gangetic West Bengal is a region of strong surface heterogeneity with several weather disturbances. This paper also examines statistical approaches for interpolating climatic data over large regions, providing different interpolation techniques for climate variables' use in agricultural research. Three interpolation approaches, namely inverse distance weighted averaging, thin-plate smoothing splines, and co-kriging, are evaluated for a 4° × 4° area covering the eastern part of India. Land use/land cover, soil texture, and a digital elevation model are used as the independent variables for temperature modeling. Multiple regression analysis with the standard method is used to add independent variables into the regression equation. Prediction of mean temperature for the monsoon season is better than for the winter season. Finally, standard deviation errors are evaluated by comparing the predicted temperature with the observed temperature of the area. For further improvement, it is recommended that distance from the coastline and seasonal wind pattern be included as additional independent variables.
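Of the three interpolation approaches evaluated, inverse distance weighted averaging is the simplest. A minimal sketch, with hypothetical station coordinates and temperatures rather than the study's observations:

```python
def idw(stations, query, power=2.0):
    """Inverse-distance-weighted estimate of a climate variable at
    `query` = (x, y) from a list of (x, y, value) station records."""
    num = den = 0.0
    for x, y, v in stations:
        d2 = (x - query[0]) ** 2 + (y - query[1]) ** 2
        if d2 == 0.0:
            return v  # query point coincides with a station
        w = d2 ** (-power / 2.0)  # weight = 1 / distance**power
        num += w * v
        den += w
    return num / den

# Hypothetical stations: (longitude, latitude, mean temperature in deg C)
stations = [(88.0, 22.0, 28.4), (88.5, 22.5, 27.9), (87.5, 23.0, 29.1)]
print(idw(stations, (88.2, 22.4)))
```

The estimate is always a convex combination of station values, which is why IDW cannot extrapolate beyond the observed range; splines and co-kriging relax that limitation.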
Modeling Variability in Immunocompetence and Immunoresponsiveness
Ask, B.; Waaij, van der E.H.; Bishop, S.C.
2008-01-01
The purposes of this paper were to 1) develop a stochastic model that would reflect observed variation between animals and across ages in immunocompetence and responsiveness; and 2) illustrate consequences of this variability for the statistical power of genotype comparisons and selection. A stochas
Doijad R
2007-01-01
Influence of processing variables on the solid state of a model drug, piroxicam, in a cyclodextrin-based system and its effect on the dissolution behavior of the drug was investigated in the present study. Binary systems containing piroxicam and hydroxypropyl-β-cyclodextrin, prepared by various processes, were characterized by FTIR, thermal stability, photostability and dissolution studies. Hydroxypropyl-β-cyclodextrin enhanced the solubility of piroxicam and increased dissolution rates from the binary systems. The complex prepared by the co-evaporation method was found to yield a better dissolution rate and stability, as characterized in the present study, than the complexes prepared by other methods.
Elementary theory of analytic functions of one or several complex variables
Cartan, Henri
1995-01-01
Noted mathematician offers basic treatment of theory of analytic functions of a complex variable, touching on analytic functions of several real or complex variables as well as the existence theorem for solutions of differential systems where data is analytic. Also included is a systematic, though elementary, exposition of theory of abstract complex manifolds of one complex dimension. Topics include power series in one variable, holomorphic functions, Cauchy's integral, more. Exercises. 1973 edition.
Random forest (RF) modeling has emerged as an important statistical learning method in ecology due to its exceptional predictive performance. However, for large and complex ecological datasets there is limited guidance on variable selection methods for RF modeling. Typically, e...
Complexity Metrics for Spreadsheet Models
Bregar, Andrej
2008-01-01
Several complexity metrics are described which are related to the logic structure, data structure and size of spreadsheet models. They primarily concentrate on the dispersion of cell references and cell paths. Most metrics are newly defined, while some are adapted from traditional software engineering. Their purpose is the identification of cells which are liable to errors. In addition, they can be used to estimate the values of dependent process metrics, such as the development duration and effort, and especially to adjust the cell error rate in accordance with the contents of each individual cell, in order to accurately assess the reliability of a model. Finally, two conceptual constructs - the reference branching condition cell and the condition block - are discussed, aiming at improving the reliability, modifiability, auditability and comprehensibility of logical tests.
A first course in partial differential equations with complex variables and transform methods
Weinberger, H F
1995-01-01
Suitable for advanced undergraduate and graduate students, this text presents the general properties of partial differential equations, including the elementary theory of complex variables. Topics include one-dimensional wave equation, properties of elliptic and parabolic equations, separation of variables and Fourier series, nonhomogeneous problems, and analytic functions of a complex variable. Solutions. 1965 edition.
On automatic differentiation of codes with COMPLEX arithmetic with respect to real variables
Pusch, G.D.; Bischof, C. [Argonne National Lab., IL (United States); Carle, A. [Rice Univ., Houston, TX (United States)
1995-06-01
We explore what it means to apply automatic differentiation with respect to a set of real variables to codes containing complex arithmetic. That is, both dependent and independent variables with respect to differentiation are real variables, but in order to exploit features of complex mathematics, part of the code is expressed by employing complex arithmetic. We investigate how one can apply automatic differentiation to complex variables if one exploits the homomorphism of the complex numbers C onto R². It turns out that, by and large, the usual rules of differentiation apply, but subtle differences in special cases arise for sqrt(), abs(), and the power operator.
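The homomorphism of C onto R² means a complex-valued program can be differentiated with respect to a real input by carrying a complex tangent alongside each complex value. A toy forward-mode sketch of that idea (not the authors' tooling), for a function where the independent variable x is real but intermediate arithmetic is complex:

```python
import cmath

class Dual:
    """Forward-mode AD value: complex primal plus complex tangent,
    differentiated with respect to a single real variable."""
    def __init__(self, val, dot=0.0):
        self.val = complex(val)
        self.dot = complex(dot)
    def __add__(self, o):
        o = o if isinstance(o, Dual) else Dual(o)
        return Dual(self.val + o.val, self.dot + o.dot)
    __radd__ = __add__
    def __mul__(self, o):
        o = o if isinstance(o, Dual) else Dual(o)
        # product rule carries over unchanged to complex values
        return Dual(self.val * o.val, self.dot * o.val + self.val * o.dot)
    __rmul__ = __mul__

def dsqrt(z):
    # the usual rule d sqrt(z) = z'/(2 sqrt(z)); this is one of the
    # special cases the paper flags (it fails at 0 and on the branch cut)
    s = cmath.sqrt(z.val)
    return Dual(s, z.dot / (2 * s))

# d/dx of sqrt(x**2 + 1j*x) at x = 2, with x a real variable
x = Dual(2.0, 1.0)
y = dsqrt(x * x + Dual(1j) * x)
print(y.val, y.dot)
```

Away from branch cuts the tangent matches the analytic derivative (2x + i)/(2 sqrt(x² + ix)); abs() and the power operator need the same kind of case analysis.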
Simple nonlinear models suggest variable star universality
Lindner, John F; Kia, Behnam; Hippke, Michael; Learned, John G; Ditto, William L
2015-01-01
Dramatically improved data from observatories like the CoRoT and Kepler spacecraft have recently facilitated nonlinear time series analysis and phenomenological modeling of variable stars, including the search for strange (aka fractal) or chaotic dynamics. We recently argued [Lindner et al., Phys. Rev. Lett. 114 (2015) 054101] that the Kepler data includes "golden" stars, whose luminosities vary quasiperiodically with two frequencies nearly in the golden ratio, and whose secondary frequencies exhibit power-law scaling with exponent near -1.5, suggesting strange nonchaotic dynamics and singular spectra. Here we use a series of phenomenological models to make plausible the connection between golden stars and fractal spectra. We thereby suggest that at least some features of variable star dynamics reflect universal nonlinear phenomena common to even simple systems.
Dissecting magnetar variability with Bayesian hierarchical models
Huppenkothen, D; Hogg, D W; Murray, I; Frean, M; Elenbaas, C; Watts, A L; Levin, Y; van der Horst, A J; Kouveliotou, C
2015-01-01
Neutron stars are a prime laboratory for testing physical processes under conditions of strong gravity, high density, and extreme magnetic fields. Among the zoo of neutron star phenomena, magnetars stand out for their bursting behaviour, ranging from extremely bright, rare giant flares to numerous, less energetic recurrent bursts. The exact trigger and emission mechanisms for these bursts are not known; favoured models involve either a crust fracture and subsequent energy release into the magnetosphere, or explosive reconnection of magnetic field lines. In the absence of a predictive model, understanding the physical processes responsible for magnetar burst variability is difficult. Here, we develop an empirical model that decomposes magnetar bursts into a superposition of small spike-like features with a simple functional form, where the number of model components is itself part of the inference problem. The cascades of spikes that we model might be formed by avalanches of reconnection, or crust rupture afte...
Fluctuations in complex networks with variable dimensionality and heterogeneity
Yoo, H.-H.; Lee, D.-S.
2016-03-01
Synchronizing individual activities is essential for the stable functioning of diverse complex systems. Understanding the relation between dynamic fluctuations and the connection topology of substrates is therefore important, but it remains restricted to regular lattices. Here we investigate the fluctuation of loads, assigned to the locally least-loaded nodes, in the largest-connected components of heterogeneous networks while varying their link density and degree exponents. The load fluctuation becomes finite when the link density exceeds a finite threshold in weakly heterogeneous substrates, which coincides with the spectral dimension becoming larger than 2 as in the linear diffusion model. The fluctuation, however, diverges also in strongly heterogeneous networks with the spectral dimension larger than 2. This anomalous divergence is shown to be driven by large local fluctuations at hubs and their neighbors, scaling linearly with degree, which can give rise to diverging fluctuations at small-degree nodes. Our analysis framework can be useful for understanding and controlling fluctuations in real-world systems.
Environmental versus demographic variability in stochastic predator-prey models
Dobramysl, U.; Täuber, U. C.
2013-10-01
In contrast to the neutral population cycles of the deterministic mean-field Lotka-Volterra rate equations, including spatial structure and stochastic noise in models for predator-prey interactions yields complex spatio-temporal structures associated with long-lived erratic population oscillations. Environmental variability in the form of quenched spatial randomness in the predation rates results in more localized activity patches. Our previous study showed that population fluctuations in rare favorable regions in turn cause a remarkable increase in the asymptotic densities of both predators and prey. Very intriguing features are found when variable interaction rates are affixed to individual particles rather than lattice sites. Stochastic dynamics with demographic variability in conjunction with inheritable predation efficiencies generate non-trivial time evolution for the predation rate distributions, yet with overall essentially neutral optimization.
A Study of I-Function of Several Complex Variables
Prathima Jayarama; Vasudevan Nambisan Theke Madam; Shantha Kumari Kurumujji
2014-01-01
The aim of this paper is to introduce a natural generalization of the well-known, interesting, and useful Fox H-function to a generalized function of several variables, namely, the I-function of "r" variables. For r=1, we get the I-function introduced and studied by Arjun Rathie (1997) and, for r=2, we get the I-function of two variables introduced very recently by Shantha Kumari et al. (2012). Convergence conditions, elementary properties, and special cases have also been given. T...
Zhenhan TU; Zhonghua WANG
2013-01-01
This paper proves some uniqueness theorems for meromorphic mappings in several complex variables into the complex projective space PN(C) with truncated multiplicities, and our results improve some earlier work.
Exploring complex networks by means of two-variable time series of vertex observables
Oświȩcimka, Paweł; Drożdż, Stanisław
2016-01-01
We investigate the scaling of the cross-correlations calculated for two-variable time series containing vertex properties in the context of complex networks. Time series of such observables are obtained by means of stationary, unbiased random walks. We consider three vertex properties that provide, respectively, short, medium, and long-range information regarding the topological role of a vertex in a given network. We present and discuss results obtained on some well-known network models, as well as on real data representing protein contact networks. Our results suggest that the proposed analysis framework provides useful insights on the structural organization of complex networks. For instance, the analysis of protein contact networks reveals characteristics shared with both scale-free and small-world models.
The Model Checking Problem for Propositional Intuitionistic Logic with One Variable is AC1-Complete
Mundhenk, Martin; Weiss, Felix
2010-01-01
We investigate the complexity of the model checking problem for propositional intuitionistic logic. We show that the model checking problem for intuitionistic logic with one variable is complete for logspace-uniform AC1, and for intuitionistic logic with two variables it is P-complete. For superintuitionistic logics with one variable, we obtain NC1-completeness for the model checking problem and for the tautology problem.
Generalized linear models for categorical and continuous limited dependent variables
Smithson, Michael
2013-01-01
Introduction and Overview; The Nature of Limited Dependent Variables; Overview of GLMs; Estimation Methods and Model Evaluation; Organization of This Book; Discrete Variables; Binary Variables; Logistic Regression; The Binomial GLM; Estimation Methods and Issues; Analyses in R and Stata; Exercises; Nominal Polytomous Variables; Multinomial Logit Model; Conditional Logit and Choice Models; Multinomial Processing Tree Models; Estimation Methods and Model Evaluation; Analyses in R and Stata; Exercises; Ordinal Categorical Variables; Modeling Ordinal Variables: Common Practice versus Best Practice; Ordinal Model Alternatives; Cumulative Mod
Teacher Modeling Using Complex Informational Texts
Fisher, Douglas; Frey, Nancy
2015-01-01
Modeling in complex texts requires that teachers analyze the text for factors of qualitative complexity and then design lessons that introduce students to that complexity. In addition, teachers can model the disciplinary nature of content area texts as well as word solving and comprehension strategies. Included is a planning guide for think aloud.
Quantifying Numerical Model Accuracy and Variability
Montoya, L. H.; Lynett, P. J.
2015-12-01
The 2011 Tohoku tsunami event has changed the logic on how to evaluate tsunami hazard on coastal communities. Numerical models are a key component for methodologies used to estimate tsunami risk. Model predictions are essential for the development of Tsunami Hazard Assessments (THA). By better understanding model bias and uncertainties and if possible minimizing them, a more accurate and reliable THA will result. In this study we compare runup height, inundation lines and flow velocity field measurements between GeoClaw and the Method Of Splitting Tsunami (MOST) predictions in the Sendai plain. Runup elevation and average inundation distance was in general overpredicted by the models. However, both models agree relatively well with each other when predicting maximum sea surface elevation and maximum flow velocities. Furthermore, to explore the variability and uncertainties in numerical models, MOST is used to compare predictions from 4 different grid resolutions (30m, 20m, 15m and 12m). Our work shows that predictions of particular products (runup and inundation lines) do not require the use of high resolution (less than 30m) Digital Elevation Maps (DEMs). When predicting runup heights and inundation lines, numerical convergence was achieved using the 30m resolution grid. On the contrary, poor convergence was found in the flow velocity predictions, particularly the 1 meter depth maximum flow velocities. Also, runup height measurements and elevations from the DEM were used to estimate model bias. The results provided in this presentation will help understand the uncertainties in model predictions and locate possible sources of errors within a model.
Comparative Analysis of Visco-elastic Models with Variable Parameters
Silviu Nastac
2010-01-01
The paper presents a theoretical comparative study of the computational behaviour of vibration isolation elements based on viscous and elastic models with variable parameters. The changing of the elastic and viscous parameters can be produced by natural timed evolution demotion or by heating developed in the elements during their working cycle. Both linear and non-linear numerical viscous and elastic models, and their combinations, were considered. The results show the importance of tuning the numerical model to the real behaviour, in particular the linearity of the characteristics and the essential parameters for damping and rigidity. Multiple comparisons between linear and non-linear simulation cases highlight the basis of numerical model optimization regarding mathematical complexity vs. results reliability.
Croon, Marcel A.; van Veldhoven, Marc J. P. M.
2007-01-01
In multilevel modeling, one often distinguishes between macro-micro and micro-macro situations. In a macro-micro multilevel situation, a dependent variable measured at the lower level is predicted or explained by variables measured at that lower or a higher level. In a micro-macro multilevel situation, a dependent variable defined at the higher…
Modeling variability in porescale multiphase flow experiments
Ling, Bowen; Bao, Jie; Oostrom, Mart; Battiato, Ilenia; Tartakovsky, Alexandre M.
2017-07-01
Microfluidic devices and porescale numerical models are commonly used to study multiphase flow in biological, geological, and engineered porous materials. In this work, we perform a set of drainage and imbibition experiments in six identical microfluidic cells to study the reproducibility of multiphase flow experiments. We observe significant variations in the experimental results, which are smaller during the drainage stage and larger during the imbibition stage. We demonstrate that these variations are due to sub-porescale geometry differences in microcells (because of manufacturing defects) and variations in the boundary condition (i.e., fluctuations in the injection rate inherent to syringe pumps). Computational simulations are conducted using commercial software STAR-CCM+, both with constant and randomly varying injection rates. Stochastic simulations are able to capture variability in the experiments associated with the varying pump injection rate.
A complex variable form of the HEG technique
Killingbeck, John P [Mathematics Department, University of Hull, Hull HU6 7RX (United Kingdom)]; Grosjean, Alain [Laboratoire d'Astrophysique de l'Observatoire de Besancon (CNRS, UMR 6091), 41 bis Avenue de l'Observatoire, BP 1615, 25010 Besancon Cedex (France)]; Jolicard, Georges [Laboratoire d'Astrophysique de l'Observatoire de Besancon (CNRS, UMR 6091), 41 bis Avenue de l'Observatoire, BP 1615, 25010 Besancon Cedex (France)]
2005-10-21
A previously reported simple method for calculating complex matrix eigenvalues is modified to incorporate the traditional HEG approach for the case of even parity potentials. Two examples of resonance calculations are given. Our matrix and perturbation results agree with each other, but are not in full accord with previously published results for one of the test potentials. New results are given for the resonances of the inverted Gaussian potential. (letter to the editor)
Hafnium(IV) complexation with oxalate at variable temperatures
Friend, Mitchell T.; Wall, Nathalie A. [Washington State Univ., Pullman, WA (United States). Dept. of Chemistry
2017-08-01
Appropriate management of fission products in the reprocessing of spent nuclear fuel (SNF) is crucial in developing advanced reprocessing schemes. The addition of aqueous phase complexing agents can prevent the co-extraction of these fission products. A solvent extraction technique was used to study the complexation of Hf(IV), an analog to fission product Zr(IV), with oxalate at 15, 25, and 35 °C in 1 M HClO₄ utilizing a ¹⁷⁵⁺¹⁸¹Hf radiotracer. The mechanism of the solvent extraction system of 10⁻⁵ M Hf(IV) in 1 M HClO₄ to thenoyltrifluoroacetone (TTA) in toluene demonstrated a 4th-power dependence in both TTA and H⁺, with Hf(TTA)₄ the only extractable species. The equilibrium constant for the extraction of Hf(TTA)₄ was determined to be log Kₑₓ = 7.67±0.07 (25±1 °C, 1 M HClO₄). The addition of oxalate to the aqueous phase decreased the distribution ratio, indicating aqueous Hf(IV)-oxalate complex formation. Polynomial fits to the distribution data identified the formation of Hf(ox)²⁺ and Hf(ox)₂(aq), and their stability constants were measured at 15, 25, and 35 °C in 1 M HClO₄. van't Hoff analysis was used to calculate ΔᵣG, ΔᵣH, and ΔᵣS for these species. Stability constants were observed to increase at higher temperature, an indication that Hf(IV)-oxalate complexation is endothermic and driven by entropy.
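The van't Hoff analysis mentioned above amounts to a linear fit of ln K against 1/T, with slope -ΔᵣH/R and intercept ΔᵣS/R. A minimal sketch with illustrative stability constants (not the measured Hf(IV)-oxalate values):

```python
import math

R = 8.314  # gas constant, J/(mol*K)

def vant_hoff(temps_K, log10_K):
    """Least-squares van't Hoff fit: ln K = -dH/(R*T) + dS/R.
    Returns (dH, dS) in J/mol and J/(mol*K)."""
    xs = [1.0 / T for T in temps_K]
    ys = [math.log(10) * lk for lk in log10_K]  # log10 K -> ln K
    n = len(xs)
    xbar, ybar = sum(xs) / n, sum(ys) / n
    slope = (sum((x - xbar) * (y - ybar) for x, y in zip(xs, ys))
             / sum((x - xbar) ** 2 for x in xs))
    intercept = ybar - slope * xbar
    return -R * slope, R * intercept

# Illustrative log10 K values increasing with temperature,
# mimicking the endothermic trend reported in the paper.
dH, dS = vant_hoff([288.15, 298.15, 308.15], [5.0, 5.2, 5.4])
print(dH, dS)
```

Because the illustrative constants increase with temperature, the fit returns positive ΔᵣH and ΔᵣS, i.e. an endothermic, entropy-driven complexation, consistent with the qualitative conclusion of the abstract.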
Complex variables a physical approach with applications and Matlab
Krantz, Steven G
2007-01-01
PREFACE; BASIC IDEAS: Complex Arithmetic; Algebraic and Geometric Properties; The Exponential and Applications; HOLOMORPHIC AND HARMONIC FUNCTIONS: Holomorphic Functions; Holomorphic and Harmonic Functions; Real and Complex Line Integrals; Complex Differentiability; The Logarithm; THE CAUCHY THEORY: The Cauchy Integral Theorem; Variants of the Cauchy Formula; The Limitations of the Cauchy Formula; APPLICATIONS OF THE CAUCHY THEORY: The Derivatives of a Holomorphic Function; The Zeros of a Holomorphic Function; ISOLATED SINGULARITIES: Behavior near an Isolated Singularity; Expansion around Singular Points; Examples of Laurent Expansions; The Calculus of Residues; Applications to the Calculation of Integrals; Meromorphic Functions; THE ARGUMENT PRINCIPLE: Counting Zeros and Poles; Local Geometry of Functions; Further Results on Zeros; The Maximum Principle; The Schwarz Lemma; THE GEOMETRIC THEORY: The Idea of a Conformal Mapping; Mappings of the Disc; Linear Fractional Transformations; The Riemann Mapping Theorem; Conformal Mappings of Annuli; A Compendium of Useful Co...
The Search for Candidate Relevant Subsets of Variables in Complex Systems.
Villani, M; Roli, A; Filisetti, A; Fiorucci, M; Poli, I; Serra, R
2015-01-01
We describe a method to identify relevant subsets of variables, useful to understand the organization of a dynamical system. The variables belonging to a relevant subset should have a strong integration with the other variables of the same relevant subset, and a much weaker interaction with the other system variables. On this basis, extending previous work on neural networks, an information-theoretic measure, the dynamical cluster index, is introduced in order to identify good candidate relevant subsets. The method does not require any previous knowledge of the relationships among the system variables, but relies on observations of their values over time. We show its usefulness in several application domains, including: (i) random Boolean networks, where the whole network is made of different subnetworks with different topological relationships (independent or interacting subnetworks); (ii) leader-follower dynamics, subject to noise and fluctuations; (iii) catalytic reaction networks in a flow reactor; (iv) the MAPK signaling pathway in eukaryotes. The validity of the method has been tested in cases where the data are generated by a known dynamical model and the dynamical cluster index is applied in order to uncover significant aspects of its organization; however, it is important that it can also be applied to time series coming from field data without any reference to a model. Given that it is based on relative frequencies of sets of values, the method could be applied also to cases where the data are not ordered in time. Several indications to improve the scope and effectiveness of the dynamical cluster index to analyze the organization of complex systems are finally given.
Bayesian modeling of measurement error in predictor variables
Fox, Gerardus J.A.; Glas, Cornelis A.W.
2003-01-01
It is shown that measurement error in predictor variables can be modeled using item response theory (IRT). The predictor variables, that may be defined at any level of an hierarchical regression model, are treated as latent variables. The normal ogive model is used to describe the relation between
Guyon, Hervé; Falissard, Bruno; Kop, Jean-Luc
2017-01-01
Network Analysis is considered as a new method that challenges Latent Variable models in inferring psychological attributes. With Network Analysis, psychological attributes are derived from a complex system of components without the need to call on any latent variables. But the ontological status of psychological attributes is not adequately defined with Network Analysis, because a psychological attribute is both a complex system and a property emerging from this complex system. The aim of this article is to reappraise the legitimacy of latent variable models by engaging in an ontological and epistemological discussion on psychological attributes. Psychological attributes relate to the mental equilibrium of individuals embedded in their social interactions, as robust attractors within complex dynamic processes with emergent properties, distinct from physical entities located in precise areas of the brain. Latent variables thus possess legitimacy, because the emergent properties can be conceptualized and analyzed on the sole basis of their manifestations, without exploring the upstream complex system. However, in opposition to the usual Latent Variable models, this article is in favor of the integration of a dynamic system of manifestations. Latent Variable models and Network Analysis thus appear as complementary approaches. New approaches combining Latent Network Models and Network Residuals are certainly a promising new way to infer psychological attributes, placing psychological attributes in an inter-subjective dynamic approach. Pragmatism-realism appears as the epistemological framework required if we are to use latent variables as representations of psychological attributes.
Capturing Complexity through Maturity Modelling
Underwood, Jean; Dillon, Gayle
2004-01-01
The impact of information and communication technologies (ICT) on the process and products of education is difficult to assess for a number of reasons. In brief, education is a complex system of interrelationships, of checks and balances. This context is not a neutral backdrop on which teaching and learning are played out. Rather, it may help, or…
Hummel, H.; Wolowicz, M.; Bogaards, R. H.
Genetic variability and relationships of populations of the cockle Cerastoderma edule and of the C. glaucum complex in Europe were determined by means of isoenzyme electrophoresis. Distinct isoenzyme markers allowed a clear distinction between these two taxa. C. edule showed a higher genetic intra-population variability than the other cockle species. The imbalance of the genotypes within populations and the inter-population differentiation of the C. glaucum complex are stronger than in C. edule. The genetic variability is related to the different habitats of the species, the members of the C. glaucum complex living in more isolated areas and having more limited gene flow.
Peñagaricano, F; Valente, B D; Steibel, J P; Bates, R O; Ernst, C W; Khatib, H; Rosa, G J M
2015-10-01
Structural equation models (SEQM) can be used to model causal relationships between multiple variables in multivariate systems. Among the strengths of SEQM is its ability to consider causal links between latent variables. The use of latent variables allows modeling complex phenomena while reducing at the same time the dimensionality of the data. One relevant aspect in the quantitative genetics context is the possibility of correlated genetic effects influencing sets of variables under study. Under this scenario, if one aims at inferring causality among latent variables, genetic covariances act as confounders if ignored. Here we describe a methodology for assessing causal networks involving latent variables underlying complex phenotypic traits. The first step of the method consists of the construction of latent variables defined on the basis of prior knowledge and biological interest. These latent variables are jointly evaluated using confirmatory factor analysis. The estimated factor scores are then used as phenotypes for fitting a multivariate mixed model to obtain the covariance matrix of latent variables conditional on the genetic effects. Finally, causal relationships between the adjusted latent variables are evaluated using different SEQM with alternative causal specifications. We have applied this method to a data set with pigs for which several phenotypes were recorded over time. Five different latent variables were evaluated to explore causal links between growth, carcass, and meat quality traits. The measurement model, which included 5 latent variables capturing the information conveyed by 19 different phenotypic traits, showed an acceptable fit to data (e.g., χ²/df = 1.3, root-mean-square error of approximation = 0.028, standardized root-mean-square residual = 0.041). Causal links between latent variables were explored after removing genetic confounders. Interestingly, we found that both growth (-0.160) and carcass traits (-0.500) have a significant
Choice of velocity variables for complex flow computation
Shyy, W.; Chang, G. C.
1991-01-01
The issue of adopting the velocity components as dependent velocity variables for Navier-Stokes flow computations is investigated. The viewpoint advocated is that a numerical algorithm should preferably honor both the physical conservation law in differential form and the geometric conservation law in discrete form. With the use of the Cartesian velocity vector, the momentum equations in curvilinear coordinates can retain the full conservation-law form and satisfy the physical conservation laws. With the curvilinear velocity components, source terms appear in the differential equations and hence the full conservation-law form cannot be retained. In discrete expressions, algorithms based on the Cartesian components can satisfy the geometric conservation-law form for convection terms but not for viscous terms; those based on the curvilinear components, on the other hand, cannot satisfy the geometric conservation-law form for either convection or viscous terms. Several flow solutions for domains with 90- and 360-degree turnings are presented to illustrate the issues of using the Cartesian velocity components and the staggered grid arrangement.
Automatically Finding the Control Variables for Complex System Behavior
Gay, Gregory; Menzies, Tim; Davies, Misty; Gundy-Burlet, Karen
2010-01-01
Testing large-scale systems is expensive in terms of both time and money. Running simulations early in the process is a proven method of finding the design faults likely to lead to critical system failures, but determining the exact cause of those errors is still time-consuming and requires access to a limited number of domain experts. It is desirable to find an automated method that explores the large number of combinations and is able to isolate likely fault points. Treatment learning is a subset of minimal contrast-set learning that, rather than classifying data into distinct categories, focuses on finding the unique factors that lead to a particular classification. That is, they find the smallest change to the data that causes the largest change in the class distribution. These treatments, when imposed, are able to identify the factors most likely to cause a mission-critical failure. The goal of this research is to comparatively assess treatment learning against state-of-the-art numerical optimization techniques. To achieve this, this paper benchmarks the TAR3 and TAR4.1 treatment learners against optimization techniques across three complex systems, including two projects from the Robust Software Engineering (RSE) group within the National Aeronautics and Space Administration (NASA) Ames Research Center. The results clearly show that treatment learning is both faster and more accurate than traditional optimization methods.
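The contrast-set idea behind treatment learning can be illustrated with a toy single-attribute search: find the (attribute, value) selection that most shifts the class distribution toward a target class. TAR3 itself scores and combines attribute ranges more elaborately, so this is only a schematic:

```python
def best_treatment(rows, target):
    """Toy treatment learner: return the single (attribute, value)
    pair whose selection most increases the fraction of `target`
    class rows, together with that lift over the baseline."""
    base = sum(1 for r in rows if r["class"] == target) / len(rows)
    best, best_lift = None, 0.0
    attrs = [k for k in rows[0] if k != "class"]
    for a in attrs:
        for v in {r[a] for r in rows}:
            subset = [r for r in rows if r[a] == v]
            frac = sum(1 for r in subset if r["class"] == target) / len(subset)
            if frac - base > best_lift:
                best, best_lift = (a, v), frac - base
    return best, best_lift

# Hypothetical simulation log: which settings co-occur with failures?
rows = [
    {"pressure": "high", "valve": "open", "class": "fail"},
    {"pressure": "high", "valve": "shut", "class": "fail"},
    {"pressure": "low",  "valve": "open", "class": "pass"},
    {"pressure": "low",  "valve": "shut", "class": "pass"},
]
print(best_treatment(rows, "fail"))  # → (('pressure', 'high'), 0.5)
```

Imposing the returned treatment (here, pressure=high) isolates the factor most associated with the failure class, which is the sense in which treatment learners point testers at likely fault points.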
Osteosarcoma models : understanding complex disease
Mohseny, Alexander Behzad
2012-01-01
A mesenchymal stem cell (MSC) based osteosarcoma model was established. The model provided evidence for a MSC origin of osteosarcoma. Normal MSCs transformed spontaneously to osteosarcoma-like cells which was always accompanied by genomic instability and loss of the Cdkn2a locus. Accordingly loss of
Molecular simulation and modeling of complex I.
Hummer, Gerhard; Wikström, Mårten
2016-07-01
Molecular modeling and molecular dynamics simulations play an important role in the functional characterization of complex I. With its large size and complicated function, linking quinone reduction to proton pumping across a membrane, complex I poses unique modeling challenges. Nonetheless, simulations have already helped in the identification of possible proton transfer pathways. Simulations have also shed light on the coupling between electron and proton transfer, thus pointing the way in the search for the mechanistic principles underlying the proton pump. In addition to reviewing what has already been achieved in complex I modeling, we aim here to identify pressing issues and to provide guidance for future research to harness the power of modeling in the functional characterization of complex I. This article is part of a Special Issue entitled Respiratory complex I, edited by Volker Zickermann and Ulrich Brandt. Copyright © 2016 Elsevier B.V. All rights reserved.
Complex networks analysis in socioeconomic models
Varela, Luis M; Ausloos, Marcel; Carrete, Jesus
2014-01-01
This chapter aims at reviewing complex networks models and methods that were either developed for or applied to socioeconomic issues, and pertinent to the theme of New Economic Geography. After an introduction to the foundations of the field of complex networks, the present summary adds insights on the statistical mechanical approach, and on the most relevant computational aspects for the treatment of these systems. As the most frequently used model for interacting agent-based systems, a brief description of the statistical mechanics of the classical Ising model on regular lattices, together with recent extensions of the same model on small-world Watts-Strogatz and scale-free Albert-Barabasi complex networks is included. Other sections of the chapter are devoted to applications of complex networks to economics, finance, spreading of innovations, and regional trade and developments. The chapter also reviews results involving applications of complex networks to other relevant socioeconomic issues, including res...
Models of organometallic complexes for optoelectronic applications
Jacko, A C; Powell, B J
2010-01-01
Organometallic complexes have potential applications as the optically active components of organic light emitting diodes (OLEDs) and organic photovoltaics (OPV). Development of more effective complexes may be aided by understanding their excited state properties. Here we discuss two key theoretical approaches to investigate these complexes: first-principles atomistic models and effective Hamiltonian models. We review applications of these methods, such as determining the nature of the emitting state, predicting the fraction of injected charges that form triplet excitations, and explaining the sensitivity of device performance to small changes in the molecular structure of the organometallic complexes.
Residual Stress Sensitivity Analysis Using a Complex Variable Finite Element Method (Postprint)
2017-08-17
differencing (FD), it takes advantage of complex variable algebra to eliminate the error developed by traditional numerical differentiation methods such as...iterative procedure for solving nonlinear problems. Through the use of complex variable algebra, ZFEM overcomes the inherent truncation errors that...
Three-dimensional potential flows from functions of a 3D complex variable
Kelly, Patrick; Panton, Ronald L.; Martin, E. D.
1990-01-01
Potential, or ideal, flow velocities can be found from the gradient of a harmonic function. An ordinary complex-valued analytic function can be written as the sum of two real-valued functions, both of which are harmonic. Thus, 2D complex-valued functions serve as a source of functions that describe two-dimensional potential flows. However, this use of complex variables has been limited to two dimensions. Recently, a new system of three-dimensional complex variables has been developed at the NASA Ames Research Center. As a step toward application of this theory to the analysis of 3D potential flow, several functions of a three-dimensional complex variable have been investigated. The results for two such functions, the 3D exponential and 3D logarithm, are presented in this paper. Potential flows found from these functions are investigated, and important characteristics of these flow fields are noted.
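The classical 2D machinery this paper generalizes can be shown in a few lines: an analytic complex potential w(z) yields a potential-flow velocity through u - i v = dw/dz. The potential below (uniform stream plus point source) and the parameter values are illustrative, not taken from the paper.

```python
# Sketch of the classical 2D case: velocity from the derivative of an analytic
# complex potential, u - i*v = dw/dz. Here w(z) = U*z + (m/(2*pi))*log(z),
# a uniform stream of speed U plus a point source of strength m at the origin
# (both values illustrative).
import cmath

U = 1.0   # free-stream speed (illustrative)
m = 2.0   # source strength (illustrative)

def velocity(z, h=1e-6):
    w = lambda z: U * z + (m / (2 * cmath.pi)) * cmath.log(z)
    dwdz = (w(z + h) - w(z - h)) / (2 * h)   # central-difference derivative
    return dwdz.real, -dwdz.imag             # unpack u - i*v = dw/dz

# Far downstream on the x-axis the source contribution decays and u -> U:
u, v = velocity(100.0 + 0.0j)
```

Since both the real and imaginary parts of an analytic function are harmonic, every such w(z) automatically satisfies Laplace's equation; the 3D complex variables of the paper aim to extend exactly this convenience to three dimensions.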
Tagarelli, Kaitlyn M.; Ruiz, Simón; Vega, José Luis Moreno; Rebuschat, Patrick
2016-01-01
Second language learning outcomes are highly variable, due to a variety of factors, including individual differences, exposure conditions, and linguistic complexity. However, exactly how these factors interact to influence language learning is unknown. This article examines the relationship between these three variables in language learners.…
Kajiwara, Tsuyoshi; Sasaki, Toru; Takeuchi, Yasuhiro
2015-02-01
We present a constructive method for Lyapunov functions for ordinary differential equation models of infectious diseases in vivo. We consider models derived from the Nowak-Bangham models, and construct Lyapunov functions for complex models using those of simpler models. In particular, we construct Lyapunov functions for models with an immune variable from those for models without an immune variable, and a Lyapunov function for a model with an absorption effect from that for a model without an absorption effect. We make the construction explicit for Lyapunov functions proposed previously, and present new results obtained with our method.
Variable Relation Parametric Model on Graphics Modelon for Collaboration Design
DONG Yu-de; ZHAO Han; LI Yan-feng
2005-01-01
A new approach to a variable-relation parametric model for collaboration design based on the graphic modelon is put forward. The paper gives a parametric description model of the graphic modelon and a method for relating different graphic modelons through variable constraints. With the aim of engineering application in collaboration design, the autonomous constraints within a modelon and the relative constraints between two modelons are given. Finally, with the tool of a variable-and-relation database, a solving method for variable relating and variable driving among the graphic modelons within a part, and a double-acting variable-relating parametric method among different parts for collaboration, are given.
Rodríguez, Sara M; Valdivia, Nelson
2017-01-01
Parasites are essential components of natural communities, but the factors that generate skewed distributions of parasite occurrences and abundances across host populations are not well understood. Here, we analyse at a seascape scale the spatiotemporal relationships of parasite exposure and host body-size with the proportion of infected hosts (i.e., prevalence) and aggregation of parasite burden across ca. 150 km of the coast and over 22 months. We predicted that the effects of parasite exposure on prevalence and aggregation are dependent on host body-sizes. We used an indirect host-parasite interaction in which migratory seagulls, sandy-shore molecrabs, and an acanthocephalan worm constitute the definitive hosts, intermediate hosts, and endoparasite, respectively. In such complex systems, increments in the abundance of definitive hosts imply increments in intermediate hosts' exposure to the parasite's dispersive stages. Linear mixed-effects models showed a significant, albeit highly variable, positive relationship between seagull density and prevalence. This relationship was stronger for small (cephalothorax length >15 mm) than large molecrabs (<15 mm). Independently of seagull density, large molecrabs carried significantly more parasites than small molecrabs. The analysis of the variance-to-mean ratio of per capita parasite burden showed no relationship between seagull density and mean parasite aggregation across host populations. However, the amount of unexplained variability in aggregation was strikingly higher in larger than smaller intermediate hosts. This unexplained variability was driven by a decrease in the mean-variance scaling in heavily infected large molecrabs. These results show complex interdependencies between extrinsic and intrinsic population attributes on the structure of host-parasite interactions. We suggest that parasite accumulation-a characteristic of indirect host-parasite interactions-and subsequent increasing mortality rates over
Modelling the structure of complex networks
Herlau, Tue
…networks has been independently studied as mathematical objects in their own right. As such, there has been both an increased demand for statistical methods for complex networks as well as a quickly growing mathematical literature on the subject. In this dissertation we explore aspects of modelling complex… The next chapters will treat some of the various symmetries, representer theorems and probabilistic structures often deployed in the modelling of complex networks, the construction of sampling methods and various network models. The introductory chapters will serve to provide context for the included written…
Scaffolding in Complex Modelling Situations
Stender, Peter; Kaiser, Gabriele
2015-01-01
The implementation of teacher-independent realistic modelling processes is an ambitious educational activity with many unsolved problems so far. Amongst others, there hardly exists any empirical knowledge about efficient ways of possible teacher support with students' activities, which should be mainly independent from the teacher. The research…
Chasnyk V. I.
2013-12-01
The conventional approach to calculating the space charge for a traveling-wave tube (TWT) with phase-velocity jumps is to use the same values of the depression coefficient as those for homogeneous helical TWTs. However, if the variable component of the exciting current in the expressions for determining the reduction coefficient changes in amplitude, then the reduction factor is a complex value. Neglecting this fact can significantly affect the calculated value of the space charge, and hence the non-synchronization parameter, for those of its values that are characteristic of a TWT with a phase-velocity jump. In this paper, formulas have been obtained for computing the real and imaginary parts of the complex reduction coefficient for a cylindrical electron beam with an exponentially varying amplitude of the variable current component in the TWT. The influence of the complex reduction coefficient on the parameters of a TWT operating in the linear mode is estimated. It is shown that taking into account the imaginary part of the reduction coefficient for linear operation of the TWT changes the estimated amount of space charge by a factor of 1.5 to 2, which in turn has quite a strong effect on the formation of the initial conditions of the nonlinear mode and, subsequently, on the output characteristics of the TWT.
Binary outcome variables and logistic regression models
Xinhua LIU
2011-01-01
Biomedical researchers often study binary variables that indicate whether or not a specific event, such as remission of depression symptoms, occurs during the study period. The indicator variable Y takes two values, usually coded as one if the event (remission) is present and zero if the event is not present (non-remission). Let p be the probability that the event occurs (Y = 1); then 1 - p will be the probability that the event does not occur (Y = 0).
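The logistic model described above links a predictor to the event probability through the log-odds, log(p / (1 - p)) = b0 + b1·x. A minimal sketch, with invented data and fit by plain gradient ascent on the log-likelihood (real analyses would use a statistics package):

```python
# Minimal logistic-regression sketch: model the log-odds of the event (Y = 1)
# as b0 + b1*x and fit by gradient ascent on the log-likelihood.
# The data below are illustrative only, not from any study.
import math

x = [0.0, 0.5, 1.0, 1.5, 2.0, 2.5, 3.0, 3.5]
y = [0,   0,   0,   1,   0,   1,   1,   1]   # 1 = event present (e.g. remission)

b0, b1, lr = 0.0, 0.0, 0.1
for _ in range(5000):
    g0 = g1 = 0.0
    for xi, yi in zip(x, y):
        p = 1.0 / (1.0 + math.exp(-(b0 + b1 * xi)))  # fitted probability
        g0 += yi - p                                  # log-likelihood gradient
        g1 += (yi - p) * xi
    b0 += lr * g0
    b1 += lr * g1

def prob(xi):
    """Fitted probability that the event occurs at predictor value xi."""
    return 1.0 / (1.0 + math.exp(-(b0 + b1 * xi)))
```

Because the event becomes more frequent as x grows in this toy data, the fitted slope b1 is positive, and exp(b1) is interpreted as the odds ratio per unit increase in x.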
Variables related to surgical and nonsurgical treatment of zygomatic complex fracture.
Olate, Sergio; Lima, Sergio Monteiro; Sawazaki, Renato; Moreira, Roger William Fernandez; de Moraes, Márcio
2011-07-01
The aim of this retrospective research was to establish the association between variables for the surgical treatment of zygomatic complex (ZC) fractures. In a 10-year period, 532 patients were examined for ZC fractures. The medical records of patients were analyzed to obtain information related to sociodemographic characteristics, trauma etiology, signs and symptoms of patients, and surgical or nonsurgical treatment. Statistical analysis was performed using the χ² test with statistical significance set at P < .05. Several variables can be associated with surgical treatment; however, variables such as comminuted fracture and alteration of occlusion were associated with surgical treatment indications.
Variable cluster analysis method for building neural network model
王海东; 刘元东
2004-01-01
To address the problems that input variables should be reduced as much as possible while still explaining the output variables fully when building a neural network model of a complicated system, a variable selection method based on cluster analysis was investigated. A similarity coefficient which describes the mutual relation of variables was defined. The methods of the highest contribution rate, part replacing whole, and variable replacement are put forward and deduced from information theory. Software for the neural network based on cluster analysis, which can provide many kinds of methods for defining the variable similarity coefficient, clustering system variables, and evaluating variable clusters, was developed and applied to build a neural network forecast model of cement clinker quality. The results show that the network scale, training time, and prediction accuracy are all satisfactory. The practical application demonstrates that this method of selecting variables for neural networks is feasible and effective.
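The selection idea can be sketched independently of the authors' software: take the similarity coefficient between two variables to be their absolute correlation (one common choice; the paper supports several), group variables whose similarity exceeds a threshold, and keep one representative per group to shrink the network's input layer. The threshold and data below are assumptions for illustration.

```python
# Hedged sketch of correlation-based variable clustering for input reduction
# (not the authors' software): keep a variable only if it is not too similar
# to any variable already kept.
import math

def corr(a, b):
    """Pearson correlation of two equal-length sample lists."""
    n = len(a)
    ma, mb = sum(a) / n, sum(b) / n
    cov = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    sa = math.sqrt(sum((x - ma) ** 2 for x in a))
    sb = math.sqrt(sum((y - mb) ** 2 for y in b))
    return cov / (sa * sb)

def select_variables(data, threshold=0.9):
    """data: dict name -> sample list; returns one representative per cluster."""
    kept = []
    for name in data:
        if all(abs(corr(data[name], data[k])) < threshold for k in kept):
            kept.append(name)   # not similar to any already-kept variable
    return kept

data = {
    "x1": [1.0, 2.0, 3.0, 4.0],
    "x2": [2.1, 4.0, 6.2, 8.1],   # nearly proportional to x1 -> redundant
    "x3": [4.0, 1.0, 3.0, 2.0],   # weakly related -> kept
}
kept = select_variables(data)
```

Here x2 is dropped as redundant with x1, so only two of the three candidate inputs would feed the network; an information-theoretic similarity, as in the paper, would slot into `corr` unchanged.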
Mixture model analysis of complex samples
Wedel, M; ter Hofstede, F; Steenkamp, JBEM
1998-01-01
We investigate the effects of a complex sampling design on the estimation of mixture models. An approximate or pseudo likelihood approach is proposed to obtain consistent estimates of class-specific parameters when the sample arises from such a complex design. The effects of ignoring the sample desi
USING STRUCTURAL EQUATION MODELING TO INVESTIGATE RELATIONSHIPS AMONG ECOLOGICAL VARIABLES
This paper gives an introductory account of Structural Equation Modeling (SEM) and demonstrates its application using a LISREL model utilizing environmental data. Using nine EMAP data variables, we analyzed their correlation matrix with an SEM model. The model characterized...
Variable-complexity aerodynamic-structural design of a high-speed civil transport wing
Hutchison, M. G.; Huang, X.; Mason, W. H.; Haftka, R. T.; Grossman, B.
1992-01-01
A variable-complexity strategy of combining simple and detailed analysis methods is presented for the design optimization of a high-speed civil transport (HSCT) wing. Two sets of results are shown: the aerodynamic design of the wing using algebraic weight equations for structural considerations, and optimization results of the internal wing structure for a fixed wing configuration. We show example results indicating that using simple analysis methods alone for the calculation of a critical constraint can allow an optimizer to exploit weaknesses in the analysis. The structural optimization results provide a valuable check for the weight equations used in the aerodynamic design. In addition, these results confirm the need for using simple, algebraic models in conjunction with more detailed analysis methods. A strategy of interlaced aerodynamic-structural design is proposed.
Atlases: Complex models of geospace
Ikonović Vesna
2005-01-01
An atlas is a modeled composition of the thematic content of a treated space on an optimal union of maps. Atlases are a higher form of cartography. Atlases comprise compositions of maps which differ by projection, scale, format, methods, contents, usage, and so on. Atlases can be classified by multiple criteria. A modern classification of atlases by production technology would be: 1. classical or traditional (printed on paper) and 2. electronic (made on electronic media, i.e. a computer or computer station). Electronic atlases divide into three large groups: 1. view-only electronic atlases, 2. interactive electronic atlases, and 3. analytical electronic atlases.
Numerical models of complex diapirs
Podladchikov, Yu.; Talbot, C.; Poliakov, A. N. B.
1993-12-01
Numerically modelled diapirs that rise into overburdens with viscous rheology produce a large variety of shapes. This work uses the finite-element method to study the development of diapirs that rise towards a surface on which a diapir-induced topography creeps flat or disperses ("erodes") at different rates. Slow erosion leads to diapirs with "mushroom" shapes, moderate erosion rate to "wine glass" diapirs and fast erosion to "beer glass"- and "column"-shaped diapirs. The introduction of a low-viscosity layer at the top of the overburden causes diapirs to develop into structures resembling a "Napoleon hat". These spread lateral sheets.
Thermodynamic modeling of complex systems
Liang, Xiaodong
Offshore reservoirs represent one of the major growth areas of the oil and gas industry, and environmental safety is one of the biggest challenges for offshore exploration and production. The oil accidents in the Gulf of Mexico in 1979 and 2010 were two of the biggest disasters in history… after an oil spill. Engineering thermodynamics could be applied in state-of-the-art sonar products through advanced artificial technology, if the speed of sound, solubility and density of oil-seawater systems could be satisfactorily modelled. The addition of methanol or glycols into unprocessed well…
Bär, Karl-Jürgen; Koschke, Mandy; Berger, Sandy; Schulz, Steffen; Tancer, Manuel; Voss, Andreas; Yeragani, Vikram K
2008-12-01
Previous studies have shown that untreated patients with acute schizophrenia present with reduced heart rate variability and complexity as well as increased QT variability. This autonomic dysregulation might contribute to increased cardiac morbidity and mortality in these patients. However, the additional effects of newer antipsychotics on autonomic dysfunction have not been investigated, applying these new cardiac parameters to gain information about the regulation at sinus node level as well as the susceptibility to arrhythmias. We have investigated 15 patients with acute schizophrenia before and after established olanzapine treatment and compared them with matched controls. New nonlinear parameters (approximate entropy, compression entropy, fractal dimension) of heart rate variability and also the QT-variability index were calculated. In accordance with previous results, we have observed reduced complexity of heart rate regulation in untreated patients. Furthermore, the QT-variability index was significantly increased in unmedicated patients, indicating increased repolarization lability. Reduction of the heart rate regulation complexity after olanzapine treatment was seen, as measured by compression entropy of heart rate. No change in QT variability was observed after treatment. This study shows that unmedicated patients with acute schizophrenia experience autonomic dysfunction. Olanzapine treatment seems to have very little additional impact in regard to the QT variability. However, the decrease in heart rate complexity after olanzapine treatment suggests decreased cardiac vagal function, which may increase the risk for cardiac mortality. Further studies are warranted to gain more insight into cardiac regulation in schizophrenia and the effect of novel antipsychotics.
Pérez‐Rodríguez, Paulino; Veturi, Yogasudha; Simianer, Henner; de los Campos, Gustavo
2015-01-01
Genome‐wide association studies (GWAS) have detected large numbers of variants associated with complex human traits and diseases. However, the proportion of variance explained by GWAS‐significant single nucleotide polymorphisms has been usually small. This brought interest in the use of whole‐genome regression (WGR) methods. However, there has been limited research on the factors that affect prediction accuracy (PA) of WGRs when applied to human data of distantly related individuals. Here, we examine, using real human genotypes and simulated phenotypes, how trait complexity, marker‐quantitative trait loci (QTL) linkage disequilibrium (LD), and the model used affect the performance of WGRs. Our results indicated that the estimated rate of missing heritability is dependent on the extent of marker‐QTL LD. However, this parameter was not greatly affected by trait complexity. Regarding PA, our results indicated that: (a) under perfect marker‐QTL LD, WGR can achieve moderately high prediction accuracy, and with simple genetic architectures variable selection methods outperform shrinkage procedures; and (b) under imperfect marker‐QTL LD, variable selection methods can achieve reasonably good PA with simple or moderately complex genetic architectures; however, the PA of these methods deteriorated as trait complexity increased, and with highly complex traits variable selection and shrinkage methods both performed poorly. This was confirmed with an analysis of human height. PMID:25600682
Modeling complex work systems - method meets reality
Veer, van der, C.G.; Hoeve, Machteld; Lenting, Bert F.
1996-01-01
Modeling an existing task situation is often a first phase in the (re)design of information systems. For complex systems design, this model should consider the people and the organization involved, the work, and situational aspects. Groupware Task Analysis (GTA), as part of a method for the design of complex systems, has been applied to the redesign of a Dutch public administration system. The most feasible method to collect information in this case was ethnography; the resulti...
程玉民; 刘超; 白福浓; 彭妙娟
2015-01-01
In this paper, based on the conjugate of the complex basis function, a new complex variable moving least-squares approximation is discussed. Then, using the new approximation to obtain the shape function, an improved complex variable element-free Galerkin (ICVEFG) method is presented for two-dimensional (2D) elastoplasticity problems. Compared with the previous complex variable moving least-squares approximation, the new approximation has greater computational precision and efficiency. Using the penalty method to apply the essential boundary conditions, and using the constrained Galerkin weak form of 2D elastoplasticity to obtain the system equations, we obtain the corresponding formulae of the ICVEFG method for 2D elastoplasticity. Three selected numerical examples are presented using the ICVEFG method to show that the ICVEFG method has advantages such as greater precision and computational efficiency over conventional meshless methods.
A Data Flow Behavior Constraints Model for Branch Decision-Making Variables
Lu Yan
2012-06-01
In order to detect attacks on decision-making variables, this paper presents a data flow behavior constraint model for branch decision-making variables. Our model is expanded from the common control flow model; it emphasizes the analysis and verification of the data flow for decision-making variables, so as to ensure that branch statements execute correctly and to detect attacks on branch decision-making variables easily. The constraints of our model include the collection of variables, the statements that the decision-making variables depend on, and the data flow constraint with the use-def relations of these variables. Our experimental results indicate that the model is effective in detecting attacks on branch decision-making variables as well as attacks on control data.
Fatigue modeling of materials with complex microstructures
Qing, Hai; Mishnaevsky, Leon
2011-01-01
A new approach and method for the analysis of microstructure-lifetime relationships of materials with complex structures is presented. The micromechanical multiscale computational analysis of damage evolution in materials with complex hierarchical microstructures is combined with the phenomenological model of fatigue damage growth. As a result, the fatigue lifetime of materials with complex structures can be determined as a function of the parameters of their structures. As an example, the fatigue lifetimes of wood, modeled as a cellular material with multilayered, fiber-reinforced walls, were…
Preferential urn model and nongrowing complex networks.
Ohkubo, Jun; Yasuda, Muneki; Tanaka, Kazuyuki
2005-12-01
A preferential urn model, which is based on the concept "the rich get richer," is proposed. From a relationship between a nongrowing model for complex networks and the preferential urn model in regard to degree distributions, it is revealed that a fitness parameter in the nongrowing model is interpreted as an inverse local temperature in the preferential urn model. Furthermore, it is clarified that the preferential urn model with randomness generates a fat-tailed occupation distribution; the concept of the local temperature enables us to understand the fat-tailed occupation distribution intuitively. Since the preferential urn model is a simple stochastic model, it can be applied to research on not only the nongrowing complex networks, but also many other fields such as econophysics and social sciences.
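The "rich get richer" mechanism the paper builds on can be simulated directly: each new ball joins urn i with probability proportional to the number of balls already in urn i. This is a simplified sketch of the preferential principle only, without the fitness/local-temperature and randomness ingredients of the paper's model.

```python
# Toy simulation of preferential ("rich get richer") ball placement:
# each new ball picks urn i with probability n_i / total, so early winners
# accumulate disproportionately many balls.
import random

random.seed(42)
urns = [1] * 50                      # 50 urns, one ball each to start
for _ in range(5000):
    total = sum(urns)
    r = random.uniform(0, total)
    acc = 0.0
    for i, n in enumerate(urns):     # roulette-wheel pick, prob n_i / total
        acc += n
        if r <= acc:
            urns[i] += 1
            break

urns.sort(reverse=True)              # most occupied urn first
```

After the run the occupation numbers are strongly unequal even though all urns started identical; adding the fitness parameter of the paper biases this roulette wheel further and is what produces the fat-tailed occupation distribution.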
The Kuramoto model in complex networks
Rodrigues, Francisco A; Ji, Peng; Kurths, Jürgen
2016-01-01
Synchronization of an ensemble of oscillators is an emergent phenomenon present in several complex systems, ranging from social and physical to biological and technological systems. The most successful approach to describe how coherent behavior emerges in these complex systems is given by the paradigmatic Kuramoto model. This model has been traditionally studied in complete graphs. However, besides being intrinsically dynamical, complex systems present very heterogeneous structure, which can be represented as complex networks. This report is dedicated to review main contributions in the field of synchronization in networks of Kuramoto oscillators. In particular, we provide an overview of the impact of network patterns on the local and global dynamics of coupled phase oscillators. We cover many relevant topics, which encompass a description of the most used analytical approaches and the analysis of several numerical results. Furthermore, we discuss recent developments on variations of the Kuramoto model in net...
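The Kuramoto dynamics reviewed above, dθᵢ/dt = ωᵢ + (K/N) Σⱼ sin(θⱼ − θᵢ), can be integrated in a few lines for the classical complete-graph setting the report starts from. All parameter values below (N, K, frequency spread, step size) are illustrative choices, not from the report.

```python
# Minimal Kuramoto-model sketch on a complete graph: N phase oscillators with
# Gaussian natural frequencies, sine coupling of strength K, forward-Euler
# integration. With K well above the critical coupling, the order parameter
# r = |(1/N) sum_j exp(i*theta_j)| rises toward 1 (synchronization).
import cmath
import math
import random

random.seed(1)
N, K, dt, steps = 20, 4.0, 0.01, 2000
omega = [random.gauss(0.0, 0.5) for _ in range(N)]        # natural frequencies
theta = [random.uniform(0.0, 2.0 * math.pi) for _ in range(N)]

def order_parameter(phases):
    """0 = incoherent phases, 1 = perfect synchrony."""
    return abs(sum(cmath.exp(1j * t) for t in phases) / len(phases))

r0 = order_parameter(theta)
for _ in range(steps):
    coupling = [sum(math.sin(tj - ti) for tj in theta) for ti in theta]
    theta = [ti + dt * (wi + (K / N) * ci)
             for ti, wi, ci in zip(theta, omega, coupling)]
r_final = order_parameter(theta)
```

Replacing the all-to-all sum with a sum over each node's neighbors (weighted by an adjacency matrix) gives the networked variant that is the report's actual subject, where the heterogeneous degree distribution reshapes the synchronization transition.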
Stochastic modeling of interannual variation of hydrologic variables
Dralle, David; Karst, Nathaniel; Müller, Marc; Vico, Giulia; Thompson, Sally E.
2017-07-01
Quantifying the interannual variability of hydrologic variables (such as annual flow volumes, and solute or sediment loads) is a central challenge in hydrologic modeling. Annual or seasonal hydrologic variables are themselves the integral of instantaneous variations and can be well approximated as an aggregate sum of the daily variable. Process-based, probabilistic techniques are available to describe the stochastic structure of daily flow, yet estimating interannual variations in the corresponding aggregated variable requires consideration of the autocorrelation structure of the flow time series. Here we present a method based on a probabilistic streamflow description to obtain the interannual variability of flow-derived variables. The results provide insight into the mechanistic genesis of interannual variability of hydrologic processes. Such clarification can assist in the characterization of ecosystem risk and uncertainty in water resources management. We demonstrate two applications, one quantifying seasonal flow variability and the other quantifying net suspended sediment export.
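The central point above, that the variance of an aggregated annual variable depends on the daily autocorrelation structure and not just the daily variance, can be illustrated with the simplest autocorrelated daily model, an AR(1) anomaly (an assumption for illustration; the paper works with a process-based streamflow description). For a stationary AR(1) with lag-1 correlation ρ, the variance of a T-day sum is approximately T·Var(daily)·(1 + ρ)/(1 − ρ) for large T, inflated by the factor (1 + ρ)/(1 − ρ) relative to the uncorrelated case.

```python
# Sketch: interannual variance of an annual sum of an autocorrelated daily
# AR(1) variable, compared against the large-T analytic approximation
# Var(sum) ~ T * var_daily * (1 + rho) / (1 - rho). Parameters illustrative.
import random
import statistics

random.seed(7)
rho, s, T, years = 0.7, 1.0, 365, 2000
var_daily = s ** 2 / (1 - rho ** 2)          # stationary daily variance

annual = []
for _ in range(years):
    x, total = 0.0, 0.0
    for _ in range(T):
        x = rho * x + random.gauss(0.0, s)   # AR(1) daily anomaly
        total += x
    annual.append(total)                     # one "annual" aggregate

sim_var = statistics.variance(annual)
approx_var = T * var_daily * (1 + rho) / (1 - rho)
```

With ρ = 0.7 the interannual variance is inflated by a factor (1.7/0.3) ≈ 5.7 over what independent days would give, which is exactly why the autocorrelation structure of daily flow must enter any estimate of interannual variability.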
A NUI Based Multiple Perspective Variability Modelling CASE Tool
Bashroush, Rabih
2010-01-01
With current trends towards moving variability from hardware to software, and given the increasing desire to postpone design decisions as much as is economically feasible, managing the variability from requirements elicitation to implementation is becoming a primary business requirement in the product line engineering process. One of the main challenges in variability management is the visualization and management of industry-size variability models. In this demonstrat...
Multi-Wheat-Model Ensemble Responses to Interannual Climate Variability
Ruane, Alex C.; Hudson, Nicholas I.; Asseng, Senthold; Camarrano, Davide; Ewert, Frank; Martre, Pierre; Boote, Kenneth J.; Thorburn, Peter J.; Aggarwal, Pramod K.; Angulo, Carlos
2016-01-01
We compare 27 wheat models' yield responses to interannual climate variability, analyzed at locations in Argentina, Australia, India, and The Netherlands as part of the Agricultural Model Intercomparison and Improvement Project (AgMIP) Wheat Pilot. Each model simulated 1981–2010 grain yield, and we evaluate results against the interannual variability of growing season temperature, precipitation, and solar radiation. The amount of information used for calibration has only a minor effect on most models' climate response, and even small multi-model ensembles prove beneficial. Wheat model clusters reveal common characteristics of yield response to climate; however, models rarely share the same cluster at all four sites, indicating substantial independence. Only a weak relationship (R² = 0.24) was found between the models' sensitivities to interannual temperature variability and their response to long-term warming, suggesting that additional processes differentiate climate change impacts from observed climate variability analogs and motivating continuing analysis and model development efforts.
Rao, Linfeng
2007-06-01
Studies of actinide complexation in solution at elevated temperatures provide insight into the effect of solvation and the energetics of complexation, and help to predict the chemical behavior of actinides in nuclear waste processing and disposal where temperatures are high. This tutorial review summarizes the data on the complexation of actinides at elevated temperatures and describes the methodology for thermodynamic measurements, with the emphasis on variable-temperature titration calorimetry, a highly valuable technique to determine the enthalpy and, under appropriate conditions, the equilibrium constants of complexation as well.
ABOUT PSYCHOLOGICAL VARIABLES IN APPLICATION SCORING MODELS
Pablo Rogers
2015-01-01
The purpose of this study is to investigate the contribution of psychological variables and scales suggested by Economic Psychology in predicting individuals' default. A sample of 555 individuals completed a self-completion questionnaire composed of psychological variables and scales. Using logistic regression, the following psychological and behavioral characteristics were found to be associated with the group of individuals in default: (a) negative dimensions related to money (suffering, inequality, and conflict); (b) high scores on the self-efficacy scale, probably indicating a greater degree of optimism and over-confidence; (c) buyers classified as compulsive; (d) individuals who consider it necessary to give gifts to children and friends on special dates, even though many people consider this a luxury; (e) problems of self-control identified in individuals who drink an average of more than four glasses of alcoholic beverage a day.
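A minimal sketch of an application-scoring logistic regression of this kind, on synthetic data with hypothetical predictor names (not the study's actual scales or coefficients):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 555
# hypothetical predictors: money-conflict score, self-efficacy score, compulsive-buyer score
X = rng.normal(size=(n, 3))
true_w = np.array([1.2, 0.8, 1.5])            # assumed effects, for illustration only
p = 1.0 / (1.0 + np.exp(-(X @ true_w - 1.0)))
y = rng.binomial(1, p)                         # 1 = default

# fit logistic regression by gradient ascent on the log-likelihood
w, b = np.zeros(3), 0.0
for _ in range(3000):
    z = 1.0 / (1.0 + np.exp(-(X @ w + b)))
    w += 0.5 * X.T @ (y - z) / n
    b += 0.5 * (y - z).mean()

odds_ratios = np.exp(w)   # multiplicative change in default odds per unit of each scale
print(odds_ratios)
```

The exponentiated coefficients are the usual way such scoring models are read: an odds ratio above 1 marks a characteristic associated with the default group.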
From Complex to Simple: Interdisciplinary Stochastic Models
Mazilu, D. A.; Zamora, G.; Mazilu, I.
2012-01-01
We present two simple, one-dimensional, stochastic models that lead to a qualitative understanding of very complex systems from biology, nanoscience and social sciences. The first model explains the complicated dynamics of microtubules, stochastic cellular highways. Using the theory of random walks in one dimension, we find analytical expressions…
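The one-dimensional random-walk machinery invoked here can be sketched as follows (a generic unbiased walk, not the authors' microtubule model):

```python
import numpy as np

rng = np.random.default_rng(1)
n_walkers, n_steps = 5000, 200

# ensemble of unbiased 1D random walks (e.g., a particle stepping along a track)
steps = rng.choice([-1, 1], size=(n_walkers, n_steps))
paths = steps.cumsum(axis=1)

# mean-square displacement grows linearly: <x^2(t)> = t for unit steps
msd = (paths.astype(float) ** 2).mean(axis=0)
print(msd[-1])   # close to n_steps = 200
```

The linear growth of the mean-square displacement is the basic analytical signature that such stochastic models exploit before adding system-specific rules (growth, shrinkage, interactions).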
Modeling complex work systems - method meets reality
van der Veer, Gerrit C.; Hoeve, Machteld; Lenting, Bert
1996-01-01
Modeling an existing task situation is often a first phase in the (re)design of information systems. For complex systems design, this model should consider both the people and the organization involved, the work, and situational aspects. Groupware Task Analysis (GTA) as part of a method for the
A differential model of the complex cell.
Hansard, Miles; Horaud, Radu
2011-09-01
The receptive fields of simple cells in the visual cortex can be understood as linear filters. These filters can be modeled by Gabor functions or gaussian derivatives. Gabor functions can also be combined in an energy model of the complex cell response. This letter proposes an alternative model of the complex cell, based on gaussian derivatives. It is most important to account for the insensitivity of the complex response to small shifts of the image. The new model uses a linear combination of the first few derivative filters, at a single position, to approximate the first derivative filter, at a series of adjacent positions. The maximum response, over all positions, gives a signal that is insensitive to small shifts of the image. This model, unlike previous approaches, is based on the scale space theory of visual processing. In particular, the complex cell is built from filters that respond to the 2D differential structure of the image. The computational aspects of the new model are studied in one and two dimensions, using the steerability of the gaussian derivatives. The response of the model to basic images, such as edges and gratings, is derived formally. The response to natural images is also evaluated, using statistical measures of shift insensitivity. The neural implementation and predictions of the model are discussed.
A Sequence of Relaxations Constraining Hidden Variable Models
Steeg, Greg Ver
2011-01-01
Many widely studied graphical models with latent variables lead to nontrivial constraints on the distribution of the observed variables. Inspired by the Bell inequalities in quantum mechanics, we refer to any linear inequality whose violation rules out some latent variable model as a "hidden variable test" for that model. Our main contribution is to introduce a sequence of relaxations which provides progressively tighter hidden variable tests. We demonstrate applicability to mixtures of sequences of i.i.d. variables, Bell inequalities, and homophily models in social networks. For the last, we demonstrate that our method provides a test that is able to rule out latent homophily as the sole explanation for correlations on a real social network that are known to be due to influence.
Complex-temperature singularities of Ising models
Shrock, R E
1995-01-01
We report new results on complex-temperature properties of Ising models. These include studies of the s = 1/2 model on triangular, honeycomb, kagomé, 3·12², and 4·8² lattices. We elucidate the complex-T phase diagrams of the higher-spin 2D Ising models, using calculations of partition function zeros. Finally, we investigate the 2D Ising model in an external magnetic field, mapping the complex-T phase diagram and exploring various singularities therein. For the case βH = iπ/2, we give exact results on the phase diagram and obtain susceptibility exponents γ' at various singularities from low-temperature series analyses.
Liming Cai
2010-01-01
The multistate life table (MSLT) model is an important demographic method for documenting life cycle processes. In this study, we present the SPACE (Stochastic Population Analysis for Complex Events) program to estimate MSLT functions and their sampling variability. It has several advantages over other programs, including the use of microsimulation and the bootstrap method to estimate sampling variability. Simulation enables researchers to analyze a broader array of statistics than the deterministic approach, and may be especially advantageous in investigating distributions of MSLT functions. The bootstrap method takes sample design into account to correct potential bias in variance estimates.
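The bootstrap estimate of sampling variability described here can be sketched in a few lines (hypothetical data and statistic, chosen only for illustration):

```python
import numpy as np

rng = np.random.default_rng(7)
# hypothetical durations spent in a state (a stand-in for microsimulated life histories)
durations = rng.exponential(scale=20.0, size=300)

def statistic(sample):
    return sample.mean()          # e.g., expected time spent in the state

# bootstrap: resample with replacement and re-estimate the statistic many times
boot = np.array([statistic(rng.choice(durations, size=durations.size, replace=True))
                 for _ in range(2000)])
se = boot.std(ddof=1)             # sampling variability of the estimate
print(statistic(durations), se)
```

The spread of the bootstrap replicates approximates the sampling distribution of the estimator, which is what allows SPACE-style programs to report variability for quantities with no convenient closed-form variance.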
Stalker, J.R.; Bossert, J.E.; Reisner, J.M.
1998-12-31
This study is part of an ongoing research effort at Los Alamos to understand the hydrologic cycle at regional scales by coupling atmospheric, land surface, river channel, and groundwater models. In this study the authors examine how local variation in the heights of two mountain ranges, representative of those that surround the Rio Grande Valley, affects precipitation. The lack of observational data to adequately assess precipitation variability in complex terrain, together with the lack of previous work, has prompted this modeling study. It thus becomes imperative to understand how the local terrain affects snow accumulations and rainfall during the winter and summer seasons, respectively, so as to manage this valuable resource in this semi-arid region. While terrain is three dimensional, simplifying the problem to two dimensions can provide some valuable insight into topographic effects that may exist at various transects across the Rio Grande Valley. The authors induce these topographic effects by introducing variations in the heights of the mountains and the width of the valley using an analytical function for the topography. The Regional Atmospheric Modeling System (RAMS) is used to examine these effects.
Updating the debate on model complexity
Simmons, Craig T.; Hunt, Randall J.
2012-01-01
As scientists who are trying to understand a complex natural world that cannot be fully characterized in the field, how can we best inform the society in which we live? This founding context was addressed in a special session, “Complexity in Modeling: How Much is Too Much?” convened at the 2011 Geological Society of America Annual Meeting. The session had a variety of thought-provoking presentations—ranging from philosophy to cost-benefit analyses—and provided some areas of broad agreement that were not evident in discussions of the topic in 1998 (Hunt and Zheng, 1999). The session began with a short introduction during which model complexity was framed borrowing from an economic concept, the Law of Diminishing Returns, and an example of enjoyment derived by eating ice cream. Initially, there is increasing satisfaction gained from eating more ice cream, to a point where the gain in satisfaction starts to decrease, ending at a point when the eater sees no value in eating more ice cream. A traditional view of model complexity is similar—understanding gained from modeling can actually decrease if models become unnecessarily complex. However, oversimplified models—those that omit important aspects of the problem needed to make a good prediction—can also limit and confound our understanding. Thus, the goal of all modeling is to find the “sweet spot” of model sophistication—regardless of whether complexity was added sequentially to an overly simple model or collapsed from an initial highly parameterized framework that uses mathematics and statistics to attain an optimum (e.g., Hunt et al., 2007). Thus, holistic parsimony is attained, incorporating “as simple as possible,” as well as the equally important corollary “but no simpler.”
Usability Evaluation of Variability Modeling by means of Common Variability Language
Jorge Echeverria
2015-12-01
Common Variability Language (CVL) is a recent proposal for OMG's upcoming Variability Modeling standard. CVL models variability in terms of Model Fragments. Usability is a widely recognized quality criterion essential to guaranteeing the successful use of tools that put these ideas into practice. To address the need to evaluate the usability of CVL modeling tools, this paper presents a usability evaluation of CVL applied to a modeling tool for firmware code of Induction Hobs. The evaluation addresses the configuration, scoping, and visualization facets, and involved the end users of the tool, who are engineers of our Induction Hob industrial partner. Effectiveness and efficiency results indicate that model configuration in terms of model-fragment substitutions is intuitive enough, but both scoping and visualization require improved tool support. The results also enabled us to identify a list of usability problems which may contribute to alleviating scoping and visualization issues in CVL.
A new complex variable element-free Galerkin method for two-dimensional potential problems
Cheng Yu-Min; Wang Jian-Fei; Bai Fu-Nong
2012-01-01
In this paper, based on the element-free Galerkin (EFG) method and the improved complex variable moving least-squares (ICVMLS) approximation, a new meshless method, the improved complex variable element-free Galerkin (ICVEFG) method for two-dimensional potential problems, is presented. In the method, the integral weak form of the governing equations is employed, and the Lagrange multiplier is used to apply the essential boundary conditions. The corresponding formulas of the ICVEFG method for two-dimensional potential problems are then obtained. Compared with the complex variable moving least-squares (CVMLS) approximation proposed by Cheng, the functional in the ICVMLS approximation has an explicit physical meaning. Furthermore, the ICVEFG method has greater computational precision and efficiency. Three numerical examples are given to show the validity of the proposed method.
An improved complex variable element-free Galerkin method for two-dimensional elasticity problems
Bai Fu-Nong; Li Dong-Ming; Wang Jian-Fei; Cheng Yu-Min
2012-01-01
In this paper, the improved complex variable moving least-squares (ICVMLS) approximation is presented. The ICVMLS approximation has an explicit physical meaning. Compared with the complex variable moving least-squares (CVMLS) approximations presented by Cheng and Ren, the ICVMLS approximation has greater computational precision and efficiency. Based on the element-free Galerkin (EFG) method and the ICVMLS approximation, the improved complex variable element-free Galerkin (ICVEFG) method is presented for two-dimensional elasticity problems, and the corresponding formulae are obtained. Compared with the conventional EFG method, the ICVEFG method has greater computational accuracy and efficiency. For the purpose of demonstration, three selected numerical examples are solved using the ICVEFG method.
Balancing model complexity and measurements in hydrology
Van De Giesen, N.; Schoups, G.; Weijs, S. V.
2012-12-01
The Data Processing Inequality implies that hydrological modeling can only reduce, and never increase, the amount of information available in the original data used to formulate and calibrate hydrological models: I(X;Z(Y)) ≤ I(X;Y). Still, hydrologists around the world seem quite content building models for "their" watersheds to move our discipline forward. Hydrological models tend to have a hybrid character with respect to underlying physics. Most models make use of some well established physical principles, such as mass and energy balances. One could argue that such principles are based on many observations, and therefore add data. These physical principles, however, are applied to hydrological models that often contain concepts that have no direct counterpart in the observable physical universe, such as "buckets" or "reservoirs" that fill up and empty out over time. These not-so-physical concepts are more like the Artificial Neural Networks and Support Vector Machines of the Artificial Intelligence (AI) community. Within AI, one quickly came to the realization that by increasing model complexity, one could basically fit any dataset but that complexity should be controlled in order to be able to predict unseen events. The more data are available to train or calibrate the model, the more complex it can be. Many complexity control approaches exist in AI, with Solomonoff inductive inference being one of the first formal approaches, the Akaike Information Criterion the most popular, and Statistical Learning Theory arguably being the most comprehensive practical approach. In hydrology, complexity control has hardly been used so far. There are a number of reasons for that lack of interest, the more valid ones of which will be presented during the presentation. For starters, there are no readily available complexity measures for our models. Second, some unrealistic simplifications of the underlying complex physics tend to have a smoothing effect on possible model
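The Akaike Information Criterion mentioned above can be illustrated with a toy polynomial-fitting example (synthetic data, chosen only to show the mechanism): the AIC penalizes complexity beyond the "sweet spot".

```python
import numpy as np

rng = np.random.default_rng(3)
x = np.linspace(0.0, 1.0, 60)
y = np.sin(2 * np.pi * x) + rng.normal(0.0, 0.3, x.size)   # smooth truth + noise

def aic(degree):
    # AIC = n*log(RSS/n) + 2k for a polynomial model with k = degree + 1 parameters
    coef = np.polyfit(x, y, degree)
    rss = ((y - np.polyval(coef, x)) ** 2).sum()
    return x.size * np.log(rss / x.size) + 2 * (degree + 1)

scores = {d: aic(d) for d in range(1, 13)}
best = min(scores, key=scores.get)
print(best)   # an intermediate degree wins; more complexity stops paying off
```

The same trade-off is what a hydrological complexity measure would have to formalize: goodness of fit improves monotonically with model size, but predictive understanding does not.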
Basic relations for the period variation models of variable stars
Mikulášek, Zdeněk; Gráf, Tomáš; Zejda, Miloslav; Zhu, Liying; Qian, Shen-Bang
2012-01-01
Models of period variations are basic tools for period analyses of variable stars. We introduce the phase function and the instant period and formulate basic relations and equations among them. Some simple period models are also presented.
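The phase function and instant period can be sketched in the usual convention (our notation; the authors' exact definitions may differ): the phase function $\theta(t)$ counts elapsed cycles since an epoch $M_0$, and the instant period is the reciprocal of its rate,

```latex
\theta(t) = \int_{M_0}^{t} \frac{\mathrm{d}\tau}{P(\tau)}, \qquad
P(t) = \left[ \frac{\mathrm{d}\theta}{\mathrm{d}t} \right]^{-1} .
```

For a constant period $P_0$ this reduces to the linear ephemeris $\theta(t) = (t - M_0)/P_0$, with times of light-curve extrema at integer values of $\theta$.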
Fixed transaction costs and modelling limited dependent variables
Hempenius, A.L.
1994-01-01
As an alternative to the Tobit model, for vectors of limited dependent variables, I suggest a model, which follows from explicitly using fixed costs, if appropriate of course, in the utility function of the decision-maker.
Methods for Handling Missing Variables in Risk Prediction Models
Held, Ulrike; Kessels, Alfons; Aymerich, Judith Garcia; Basagana, Xavier; ter Riet, Gerben; Moons, Karel G. M.; Puhan, Milo A.
2016-01-01
Prediction models should be externally validated before being used in clinical practice. Many published prediction models have never been validated. Uncollected predictor variables in otherwise suitable validation cohorts are the main factor precluding external validation. We used individual patient
Defining a Family of Cognitive Diagnosis Models Using Log-Linear Models with Latent Variables
Henson, Robert A.; Templin, Jonathan L.; Willse, John T.
2009-01-01
This paper uses log-linear models with latent variables (Hagenaars, in "Loglinear Models with Latent Variables," 1993) to define a family of cognitive diagnosis models. In doing so, the relationship between many common models is explicitly defined and discussed. In addition, because the log-linear model with latent variables is a general model for…
The Properties of Model Selection when Retaining Theory Variables
Hendry, David F.; Johansen, Søren
Economic theories are often fitted directly to data to avoid possible model selection biases. We show that embedding a theory model that specifies the correct set of m relevant exogenous variables, x_t, within the larger set of m+k candidate variables, (x_t, w_t), then selection over the second...
Complexity, Modeling, and Natural Resource Management
Paul Cilliers
2013-09-01
This paper contends that natural resource management (NRM) issues are, by their very nature, complex and that both scientists and managers in this broad field will benefit from a theoretical understanding of complex systems. It starts off by presenting the core features of a view of complexity that not only deals with the limits to our understanding, but also points toward a responsible and motivating position. Everything we do involves explicit or implicit modeling, and as we can never have comprehensive access to any complex system, we need to be aware both of what we leave out as we model and of the implications of the choice of our modeling framework. One vantage point is never sufficient, as complexity necessarily implies that multiple (independent) conceptualizations are needed to engage the system adequately. We use two South African cases as examples of complex systems - restricting the case narratives mainly to the biophysical domain associated with NRM issues - that make the point that even the behavior of the biophysical subsystems themselves is already complex. From the insights into complex systems discussed in the first part of the paper and the lessons emerging from the way these cases have been dealt with in reality, we extract five interrelated generic principles for practicing science and management in complex NRM environments. These principles are then further elucidated using four further South African case studies - organized as two contrasting pairs - now focusing on the more difficult organizational and social side, and comparing the human organizational endeavors in managing such systems.
Bim Automation: Advanced Modeling Generative Process for Complex Structures
Banfi, F.; Fai, S.; Brumana, R.
2017-08-01
The new paradigm of the complexity of modern and historic structures, which are characterised by complex forms and morphological and typological variables, is one of the greatest challenges for building information modelling (BIM). Generation of complex parametric models needs new scientific knowledge concerning new digital technologies. These elements are helpful to store a vast quantity of information during the life cycle of buildings (LCB). The latest developments of parametric applications do not provide advanced tools, resulting in time-consuming work for the generation of models. This paper presents a method capable of processing and creating complex parametric Building Information Models (BIM) with Non-Uniform Rational Basis Splines (NURBS) and multiple levels of detail (Mixed and Reverse LoD), based on accurate 3D photogrammetric and laser scanning surveys. Complex 3D elements are converted into parametric BIM software and finite element applications (BIM to FEA) using specific exchange formats and new modelling tools. The proposed approach has been applied to different case studies: the BIM of the modern structure for the courtyard of West Block on Parliament Hill in Ottawa (Ontario) and the BIM of Masegra Castel in Sondrio (Italy), encouraging the dissemination and interaction of scientific results without losing information during the generative process.
Normal families of meromorphic mappings of several complex variables into P^N(C) for moving targets
Tu, Zhenhan; Li, Pingli
2005-01-01
Motivated by Ru and Stoll's accomplishment of the second main theorem in higher dimension with moving targets, many authors have studied moving-target problems in value distribution theory and related topics. Up to the present, however, all research on normality criteria for families of meromorphic mappings of several complex variables into P^N(C) has been restricted to the hyperplane case. In this paper, we prove some normality criteria for families of meromorphic mappings of several complex variables into P^N(C) for moving hyperplanes, related to Nochka's Picard-type theorems. The new normality criteria greatly extend earlier related results.
Free particles from Brauer algebras in complex matrix models
Kimura, Yusuke; Turton, David
2009-01-01
The gauge invariant degrees of freedom of matrix models based on an N x N complex matrix, with U(N) gauge symmetry, contain hidden free particle structures. These are exhibited using triangular matrix variables via the Schur decomposition. The Brauer algebra basis for complex matrix models developed earlier is useful in projecting to a sector which matches the state counting of N free fermions on a circle. The Brauer algebra projection is characterized by the vanishing of a scale invariant laplacian constructed from the complex matrix. The special case of N=2 is studied in detail: the ring of gauge invariant functions as well as a ring of scale and gauge invariant differential operators are characterized completely. The orthonormal basis of wavefunctions in this special case is completely characterized by a set of five commuting Hamiltonians, which display free particle structures. Applications to the reduced matrix quantum mechanics coming from radial quantization in N=4 SYM are described. We propose that th...
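The role of gauge-invariant variables can be illustrated with a small numerical check (a generic 4×4 complex matrix, not the paper's N=2 analysis): traces of powers of Z are unchanged under a U(N) gauge transformation, which is why triangular (Schur) variables capture the invariant content.

```python
import numpy as np

rng = np.random.default_rng(6)
N = 4
Z = rng.normal(size=(N, N)) + 1j * rng.normal(size=(N, N))   # complex matrix variable

# random U(N) gauge transformation, built from the QR decomposition
U, _ = np.linalg.qr(rng.normal(size=(N, N)) + 1j * rng.normal(size=(N, N)))
Zg = U @ Z @ U.conj().T

# gauge-invariant degrees of freedom: the traces tr(Z^n) are unchanged
for n in range(1, N + 1):
    t1 = np.trace(np.linalg.matrix_power(Z, n))
    t2 = np.trace(np.linalg.matrix_power(Zg, n))
    assert np.isclose(t1, t2)
print("traces tr(Z^n) are U(N)-invariant")
```

Because Z and U Z U† share a Schur form Z = V T V† with T upper triangular, the invariants above are functions of the triangular variables alone.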
Fujikawa, Kazuo
2013-01-01
Hidden-variables models are critically reassessed. It is first examined if the quantum discord is classically described by the hidden-variable model of Bell in the Hilbert space with $d=2$. The criterion of vanishing quantum discord is related to the notion of reduction and, surprisingly, the hidden-variable model in $d=2$, which has been believed to be consistent so far, is in fact inconsistent and excluded by the analysis of conditional measurement and reduction. The description of the full contents of quantum discord by the deterministic hidden-variables models is not possible. We also re-examine CHSH inequality. It is shown that the well-known prediction of CHSH inequality $|B|\\leq 2$ for the CHSH operator $B$ introduced by Cirel'son is not unique. This non-uniqueness arises from the failure of linearity condition in the non-contextual hidden-variables model in $d=4$ used by Bell and CHSH, in agreement with Gleason's theorem which excludes $d=4$ non-contextual hidden-variables models. If one imposes the l...
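The CHSH bound discussed here can be checked numerically; the sketch below (the standard textbook singlet construction, not the authors' d=4 analysis) evaluates the CHSH combination at Cirel'son-optimal angles and exhibits |B| = 2√2 > 2.

```python
import numpy as np

# Pauli matrices; measurement of spin along an angle theta in the x-z plane
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)
def meas(theta):
    return np.cos(theta) * sz + np.sin(theta) * sx

# singlet state |psi> = (|01> - |10>)/sqrt(2) in the basis |00>,|01>,|10>,|11>
psi = np.array([0, 1, -1, 0], dtype=complex) / np.sqrt(2)

def E(ta, tb):
    # correlation <psi| A(ta) ⊗ B(tb) |psi> = -cos(ta - tb) for the singlet
    return np.real(psi.conj() @ np.kron(meas(ta), meas(tb)) @ psi)

a, a2, b, b2 = 0.0, np.pi / 2, np.pi / 4, -np.pi / 4
chsh = E(a, b) + E(a, b2) + E(a2, b) - E(a2, b2)
print(abs(chsh))   # 2*sqrt(2), exceeding the hidden-variable bound of 2
```

Any local hidden-variable assignment obeys |B| ≤ 2, so the computed value is exactly the violation that rules such models out.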
A note on the Dirichlet problem for model complex partial differential equations
Ashyralyev, Allaberen; Karaca, Bahriye
2016-08-01
Complex model partial differential equations of arbitrary order are considered. The uniqueness of the Dirichlet problem is studied. It is proved that the Dirichlet problem for higher order of complex partial differential equations with one complex variable has infinitely many solutions.
A Spline Regression Model for Latent Variables
Harring, Jeffrey R.
2014-01-01
Spline (or piecewise) regression models have been used in the past to account for patterns in observed data that exhibit distinct phases. The changepoint or knot marking the shift from one phase to the other, in many applications, is an unknown parameter to be estimated. As an extension of this framework, this research considers modeling the…
Fox, Eric W; Hill, Ryan A; Leibowitz, Scott G; Olsen, Anthony R; Thornbrugh, Darren J; Weber, Marc H
2017-07-01
Random forest (RF) modeling has emerged as an important statistical learning method in ecology due to its exceptional predictive performance. However, for large and complex ecological data sets, there is limited guidance on variable selection methods for RF modeling. Typically, either a preselected set of predictor variables are used or stepwise procedures are employed which iteratively remove variables according to their importance measures. This paper investigates the application of variable selection methods to RF models for predicting probable biological stream condition. Our motivating data set consists of the good/poor condition of n = 1365 stream survey sites from the 2008/2009 National Rivers and Stream Assessment, and a large set (p = 212) of landscape features from the StreamCat data set as potential predictors. We compare two types of RF models: a full variable set model with all 212 predictors and a reduced variable set model selected using a backward elimination approach. We assess model accuracy using RF's internal out-of-bag estimate, and a cross-validation procedure with validation folds external to the variable selection process. We also assess the stability of the spatial predictions generated by the RF models to changes in the number of predictors and argue that model selection needs to consider both accuracy and stability. The results suggest that RF modeling is robust to the inclusion of many variables of moderate to low importance. We found no substantial improvement in cross-validated accuracy as a result of variable reduction. Moreover, the backward elimination procedure tended to select too few variables and exhibited numerous issues such as upwardly biased out-of-bag accuracy estimates and instabilities in the spatial predictions. We use simulations to further support and generalize results from the analysis of real data. A main purpose of this work is to elucidate issues of model selection bias and instability to ecologists interested in
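The backward-elimination idea can be sketched with a toy stand-in (synthetic data, a crude correlation importance, and a least-squares probe in place of a random forest; names and sizes are illustrative only):

```python
import numpy as np

rng = np.random.default_rng(5)
n, p = 400, 12
X = rng.normal(size=(n, p))
# only the first three predictors carry signal (analogue of a few key landscape features)
y = (X[:, 0] + 0.8 * X[:, 1] - 0.6 * X[:, 2] + rng.normal(0.0, 0.5, n)) > 0
yf = y.astype(float)

def cv_accuracy(Xs):
    # 5-fold cross-validation of a least-squares probe (a stand-in for the RF learner)
    folds = np.array_split(np.arange(n), 5)
    accs = []
    for f in folds:
        tr = np.setdiff1d(np.arange(n), f)
        A = np.c_[np.ones(tr.size), Xs[tr]]
        w, *_ = np.linalg.lstsq(A, yf[tr], rcond=None)
        accs.append((((np.c_[np.ones(f.size), Xs[f]] @ w) > 0.5) == y[f]).mean())
    return float(np.mean(accs))

# backward elimination: repeatedly drop the least "important" predictor
keep = list(range(p))
while len(keep) > 3:
    importance = [abs(np.corrcoef(X[:, j], yf)[0, 1]) for j in keep]
    keep.pop(int(np.argmin(importance)))
print(sorted(keep), cv_accuracy(X[:, keep]))
```

Note that the accuracy here is assessed with folds external to the data-generating step only; the paper's warning applies exactly to this pattern, since validating inside the selection loop inflates accuracy estimates.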
FLUID BOUNDARY ELEMENT METHOD AND ORTHOGONAL TRANSFORM OF DOUBLE COMPLEX VARIABLES
罗义银
2003-01-01
A concept of the orthogonal double function and its complex variable space is put forward. The corresponding operation rules, the concept of an analytic function, and the conformal transform are established. Using this concept, its prospects for application in the fluid boundary element method are discussed. As a result, this concept and its special notation may extend the complex plane into three-dimensional space, from which extensive applications may be obtained in physics and mathematics.
Effect of meditation on scaling behavior and complexity of human heart rate variability
Sarkar, A.; Barat, P.
2006-01-01
The heart beat data recorded from subjects before and during meditation are analyzed using two different scaling analysis methods. These analyses reveal that meditation severely affects the long-range correlation of the heart beat of a normal heart. Moreover, it is found that meditation induces periodic behavior in the heart beat. The complexity of the heart rate variability is quantified using multiscale entropy analysis and recurrence analysis. The complexity of the heart beat during meditation is found to be higher.
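Multiscale entropy of the kind used here can be sketched as coarse-graining plus sample entropy (a common formulation; the handling of the tolerance varies across papers, and the data below are synthetic noise, not heart-beat intervals):

```python
import numpy as np

def coarse_grain(x, scale):
    # average non-overlapping windows of length `scale`
    n = len(x) // scale
    return x[: n * scale].reshape(n, scale).mean(axis=1)

def sample_entropy(x, m=2, tol=0.2):
    # SampEn = -log(matches of length m+1 / matches of length m)
    def matches(mm):
        emb = np.lib.stride_tricks.sliding_window_view(x, mm)
        d = np.abs(emb[:, None, :] - emb[None, :, :]).max(axis=-1)
        return (np.count_nonzero(d <= tol) - emb.shape[0]) / 2.0   # drop self-matches
    return -np.log(matches(m + 1) / matches(m))

rng = np.random.default_rng(2)
rr = rng.normal(size=1200)            # stand-in for a beat-to-beat interval series
tol = 0.15 * rr.std()                 # tolerance fixed from the original series
mse = [sample_entropy(coarse_grain(rr, s), tol=tol) for s in (1, 2, 3, 4)]
print(mse)   # for uncorrelated noise the curve decreases with scale
```

A decreasing multiscale entropy curve is the signature of uncorrelated noise; physiologic series with long-range correlations keep their entropy across scales, which is what makes the curve a complexity measure.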
Montopoli, Mario; Roberto, Nicoletta; Adirosi, Elisa; Gorgucci, Eugenio; Baldini, Luca
2017-04-01
Weather radars are nowadays a unique tool for quantitatively estimating rain precipitation near the surface. This is an important task for a variety of applications: for example, feeding hydrological models, mitigating the impact of severe storms at the ground using radar information in modern warning tools, and aiding validation studies of satellite-based rain products. With respect to the latter application, several ground validation studies of the Global Precipitation Measurement (GPM) products have recently highlighted the importance of accurate QPE from ground-based weather radars. To date, many works have analyzed the performance of various QPE algorithms using actual and synthetic experiments, possibly trained by measurements of particle size distributions and electromagnetic models. Most of these studies support the use of dual-polarization variables not only to ensure a good level of radar data quality but also as a direct input to the rain estimation equations. Among others, one of the most important limiting factors in radar QPE accuracy is the vertical variability of the particle size distribution, which affects, at different levels, all the radar variables acquired as well as the rain rates. This is particularly impactful in mountainous areas, where the altitude of the radar sampling is likely several hundred meters above the surface. In this work, we analyze the impact of vertical-profile variations of rain precipitation on several dual-polarization radar QPE algorithms when they are tested in a complex-orography scenario. So far, in weather radar studies, more emphasis has been given to extrapolation strategies that make use of the signature of the vertical profiles in terms of radar co-polar reflectivity. This may limit the use of radar vertical profiles when dual-polarization QPE algorithms are considered, because in that case all the radar variables used in the rain estimation process should be consistently extrapolated to the surface.
Trends in modeling Biomedical Complex Systems
Remondini Daniel
2009-10-01
In this paper we provide an introduction to techniques for modeling multi-scale complex biological systems, from the single bio-molecule to the cell, combining theoretical modeling, experiments, informatics tools, and technologies suitable for biological and biomedical research, which is becoming increasingly multidisciplinary, multidimensional, and information-driven. The most important concepts of mathematical modeling methodologies and statistical inference, bioinformatics, and standard tools for investigating complex biomedical systems are discussed, and the prominent literature useful to both the practitioner and the theoretician is presented.
Kuo, Yi-Ming; Wu, Jiunn-Tzong
2016-12-01
This study was conducted to identify the key factors related to the spatiotemporal variations in phytoplankton abundance in a subtropical reservoir from 2006 to 2010 and to assist in developing strategies for water quality management. Dynamic factor analysis (DFA), a dimension-reduction technique, was used to identify interactions between explanatory variables (i.e., environmental variables) and the abundance (biovolume) of predominant phytoplankton classes. The optimal DFA model significantly described the dynamic changes in abundances of the predominant phytoplankton groups (including dinoflagellates, diatoms, and green algae) at five monitoring sites. Water temperature, electrical conductivity, water level, nutrients (total phosphorus, NO3-N, and NH3-N), macro-zooplankton, and zooplankton were the key factors affecting the dynamics of the aforementioned phytoplankton. Transformations of nutrients and reactions between water quality variables, altered by hydrological conditions, may therefore also control the abundance dynamics of phytoplankton, which may represent common trends in the DFA model. The meandering shape of Shihmen Reservoir and its surrounding rivers caused a complex interplay between hydrological conditions and abiotic and biotic variables, resulting in phytoplankton abundance that could not be estimated using certain variables. Additional monitoring of water quality and hydrological variables at surrounding rivers should be executed a few days before and after reservoir operations and heavy storms, which would assist in developing site-specific preventive strategies to control phytoplankton abundance.
Resolving structural variability in network models and the brain.
Florian Klimm
2014-03-01
Large-scale white matter pathways crisscrossing the cortex create a complex pattern of connectivity that underlies human cognitive function. Generative mechanisms for this architecture have been difficult to identify in part because little is known in general about mechanistic drivers of structured networks. Here we contrast network properties derived from diffusion spectrum imaging data of the human brain with 13 synthetic network models chosen to probe the roles of physical network embedding and temporal network growth. We characterize both the empirical and synthetic networks using familiar graph metrics, but presented here in a more complete statistical form, as scatter plots and distributions, to reveal the full range of variability of each measure across scales in the network. We focus specifically on the degree distribution, degree assortativity, hierarchy, topological Rentian scaling, and topological fractal scaling, in addition to several summary statistics, including the mean clustering coefficient, the shortest path-length, and the network diameter. The models are investigated in a progressive, branching sequence, aimed at capturing different elements thought to be important in the brain, and range from simple random and regular networks, to models that incorporate specific growth rules and constraints. We find that synthetic models that constrain the network nodes to be physically embedded in anatomical brain regions tend to produce distributions that are most similar to the corresponding measurements for the brain. We also find that network models hardcoded to display one network property (e.g., assortativity) do not in general simultaneously display a second (e.g., hierarchy). This relative independence of network properties suggests that multiple neurobiological mechanisms might be at play in the development of human brain network architecture. Together, the network models that we develop and employ provide a potentially useful
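The graph metrics listed in the abstract are all standard and can be computed with NetworkX. The small-world graph below is only an illustrative stand-in for the paper's 13 synthetic models and DSI-derived brain networks, which are not available here:

```python
import networkx as nx

# A small synthetic network model; connected variant so path-based
# metrics (diameter, mean shortest path) are well defined.
G = nx.connected_watts_strogatz_graph(n=100, k=6, p=0.1, seed=42)

metrics = {
    "mean clustering": nx.average_clustering(G),
    "degree assortativity": nx.degree_assortativity_coefficient(G),
    "diameter": nx.diameter(G),
    "mean shortest path": nx.average_shortest_path_length(G),
}
for name, value in metrics.items():
    print(f"{name}: {value:.3f}")
```

Comparing full distributions of such measures (rather than single summary numbers) is the statistical refinement the paper emphasizes.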
Optimal Use Of Spatial Variability And Complex Dynamics In Hydrology: Are We There Yet?
Foufoula-Georgiou, Efi
Hydrologic Science has witnessed significant technological advances over the last decade: many more observations have become available from remote sensors and computer power has quadrupled. One would expect then, that advances in hydrologic understanding and prediction accuracy have been commensurate with these developments. But is this really so? Have we used the information empowerment with the same ingenuity as our predecessors showed to compensate for lack of information? They advanced conceptual modeling, lumped parameterization and calibration techniques to improve predictions. Have we, in turn, advanced our methodologies enough to deal with detailed information of process variability, interaction of processes across scales, complexity, and uncertainties? And equally important, do we have adequate methodologies to judge the degree of our progress? Hydrologic Science is now at the forefront of Earth Sciences. There are pressing questions for the new generation to address and new approaches to learning from data, modeling and assessing predictability might be in order. This talk will address some of these issues.
Characteristic Polynomials of Complex Random Matrix Models
Akemann, G
2003-01-01
We calculate the expectation value of an arbitrary product of characteristic polynomials of complex random matrices and their hermitian conjugates. Using the technique of orthogonal polynomials in the complex plane our result can be written in terms of a determinant containing these polynomials and their kernel. It generalizes the known expression for hermitian matrices and it also provides a generalization of the Christoffel formula to the complex plane. The derivation we present holds for complex matrix models with a general weight function at finite-N, where N is the size of the matrix. We give some explicit examples at finite-N for specific weight functions. The characteristic polynomials in the large-N limit at weak and strong non-hermiticity follow easily and they are universal in the weak limit. We also comment on the issue of the BMN large-N limit.
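One fact underlying this line of work can be checked numerically: for a mean-zero ensemble such as the complex Ginibre ensemble, the expected characteristic polynomial is simply z^N, the monic orthogonal polynomial of the Gaussian weight in the complex plane (the determinant expansion is multilinear in independent mean-zero rows). The Monte Carlo sketch below uses illustrative values of N, z, and the trial count:

```python
import numpy as np

rng = np.random.default_rng(1)
N, trials, z = 4, 20000, 1.5 + 0.5j

# Monte Carlo estimate of E[det(zI - X)] for the complex Ginibre ensemble
# (iid complex Gaussian entries with variance 1/N).
acc = 0.0 + 0.0j
for _ in range(trials):
    X = (rng.normal(size=(N, N)) + 1j * rng.normal(size=(N, N))) / np.sqrt(2 * N)
    acc += np.linalg.det(z * np.eye(N) - X)
estimate = acc / trials

# Expected value for a mean-zero ensemble: z**N.
print(estimate, z**N)
```

The paper's determinantal formulas generalize this elementary observation to arbitrary products of characteristic polynomials and general weight functions.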
Modeling, estimation and identification of stochastic systems with latent variables
Bottegal, Giulio
2013-01-01
The main topic of this thesis is the analysis of static and dynamic models in which some variables, although directly influencing the behavior of certain observables, are not accessible to measurements. These models find applications in many branches of science and engineering, such as control systems, communications, natural and biological sciences and econometrics. It is well-known that models with unaccessible - or latent - variables, usually suffer from a lack of uniqueness of representat...
Fuwa, Minori; Kayama, Mizue; Kunimune, Hisayoshi; Hashimoto, Masami; Asano, David K.
2015-01-01
We have explored educational methods for algorithmic thinking for novices and implemented a block programming editor and a simple learning management system. In this paper, we propose a program/algorithm complexity metric specified for novice learners. This metric is based on the variable usage in arithmetic and relational formulas in learner's…
SHANNON SAMPLING AND ESTIMATION OF BAND-LIMITED FUNCTIONS IN THE SEVERAL COMPLEX VARIABLES SETTING
Kou Kit-Ian; Qian Tao
2005-01-01
In this work the authors develop the n-dimensional sinc function theory in the several complex variables setting. In terms of the corresponding Paley-Wiener theorem the exact sinc interpolation and quadrature are established. Exponential convergence rate of the error estimates for band-limited functions in n-dimensional strips are obtained.
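The one-dimensional version of the sinc interpolation underlying this work is the classical Shannon cardinal series: a function band-limited to [-pi/h, pi/h] is exactly reconstructed from its samples f(kh). A minimal numerical sketch (truncating the infinite series, with illustrative step and test point):

```python
import numpy as np

# Shannon reconstruction: f(t) = sum_k f(k*h) * sinc((t - k*h)/h),
# exact for functions band-limited to [-pi/h, pi/h].
h = 0.5                      # sampling step; f below is band-limited to [-pi, pi]
k = np.arange(-200, 201)     # truncation of the infinite cardinal series
samples = np.sinc(k * h)     # np.sinc(x) = sin(pi x)/(pi x); f(t) = sinc(t)

t = 0.37                     # an off-grid evaluation point
recon = np.sum(samples * np.sinc((t - k * h) / h))
print(recon, np.sinc(t))
```

The several-complex-variables setting of the paper replaces this one-dimensional kernel with products of sinc functions over n-dimensional strips.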
Complex Variables in Junior High School: The Role and Potential Impact of an Outreach Mathematician
Duke, Billy J.; Dwyer, Jerry F.; Wilhelm, Jennifer; Moskal, Barbara
2008-01-01
Outreach mathematicians are college faculty who are trained in mathematics but who undertake an active role in improving primary and secondary education. This role is examined through a study where an outreach mathematician introduced the concept of complex variables to junior high school students in the United States with the goal of stimulating…
Study of multiple cracks in airplane fuselage by micromechanics and complex variables
Denda, Mitsunori; Dong, Y. F.
1994-01-01
Innovative numerical techniques for two dimensional elastic and elastic-plastic multiple crack problems are presented using micromechanics concepts and complex variables. The simplicity and the accuracy of the proposed method will enable us to carry out the multiple-site fatigue crack propagation analyses for airplane fuselage by incorporating such features as the curvilinear crack path, plastic deformation, coalescence of cracks, etc.
Enumeration of Combinatorial Classes of Single Variable Complex Polynomial Vector Fields
Dias, Kealey
A vector field in the space of degree d monic, centered single variable complex polynomial vector fields has a combinatorial structure which can be fully described by a combinatorial data set consisting of an equivalence relation and a marked subset on the integers mod 2d-2, satisfying certain...
Complex Behaviors of a Simple Traffic Model
GAO Xing-Ru
2006-01-01
In this paper, we propose a modified traffic model in which a single car moves through a sequence of traffic lights controlled by a step function instead of a sine function. In contrast to the previous work [Phys. Rev. E 70 (2004) 016107], we have investigated in detail the dependence of the behavior on four parameters, ω, α, η, and a1, and given three kinds of bifurcation diagrams, which show three kinds of complex behaviors. We have found that in this model there are chaotic and complex periodic motions, as well as special singularities. We have also analyzed the characteristics of the complex periodic motion and the essential feature of the singularity.
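The flavor of this class of models can be conveyed with a minimal sketch: a car accelerating toward equally spaced lights whose state is a step function of time. The parameter values, spacing, and braking rule below are illustrative assumptions patterned on the paper's description, not its exact equations:

```python
import numpy as np

# Sketch of a single car passing step-function-controlled traffic lights.
omega, a_plus, a_minus, v_max = 0.2, 2.0, -6.0, 10.0   # illustrative parameters
spacing = 100.0                 # distance between successive lights

def light_is_green(t):
    # Step-function control: green on the first half of each period.
    return (t % (2 * np.pi / omega)) < (np.pi / omega)

t, x, v, dt = 0.0, 0.0, 0.0, 0.01
passed = 0
while passed < 5 and t < 500.0:
    next_light = (passed + 1) * spacing
    dist = next_light - x
    # Brake if the upcoming light is red and we are within braking distance.
    if not light_is_green(t) and dist < v * v / (2 * abs(a_minus)) + 1.0:
        a = a_minus if v > 0 else 0.0
    else:
        a = a_plus if v < v_max else 0.0
    v = max(0.0, v + a * dt)
    x += v * dt
    t += dt
    if x >= next_light:
        passed += 1
print(f"passed {passed} lights by t = {t:.1f}")
```

Depending on how the light period resonates with the travel time between lights, trajectories in such models can lock into periodic motion or wander chaotically, which is the behavior the bifurcation diagrams map out.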
Effect of Flux Adjustments on Temperature Variability in Climate Models
Duffy, P.; Bell, J.; Covey, C.; Sloan, L.
1999-12-27
It has been suggested that "flux adjustments" in climate models suppress simulated temperature variability. If true, this might invalidate the conclusion that at least some of observed temperature increases since 1860 are anthropogenic, since this conclusion is based in part on estimates of natural temperature variability derived from flux-adjusted models. We assess variability of surface air temperatures in 17 simulations of internal temperature variability submitted to the Coupled Model Intercomparison Project. By comparing variability in flux-adjusted vs. non-flux-adjusted simulations, we find no evidence that flux adjustments suppress temperature variability in climate models; other, largely unknown, factors are much more important in determining simulated temperature variability. Therefore the conclusion that at least some of observed temperature increases are anthropogenic cannot be questioned on the grounds that it is based in part on results of flux-adjusted models. Also, reducing or eliminating flux adjustments would probably do little to improve simulations of temperature variability.
Complex Systems and Self-organization Modelling
Bertelle, Cyrille; Kadri-Dahmani, Hakima
2009-01-01
The concern of this book is the use of emergent computing and self-organization modelling within various applications of complex systems. The authors focus their attention both on the innovative concepts and implementations in order to model self-organizations, but also on the relevant applicative domains in which they can be used efficiently. This book is the outcome of a workshop meeting within ESM 2006 (Eurosis), held in Toulouse, France in October 2006.
A cognitive model for software architecture complexity
Bouwers, E.; Lilienthal, C.; Visser, J.; Van Deursen, A.
2010-01-01
Evaluating the complexity of the architecture of a software system is a difficult task. Many aspects have to be considered to come to a balanced assessment. Several architecture evaluation methods have been proposed, but very few define a quality model to be used during the evaluation process. In add
The Kuramoto model in complex networks
Rodrigues, Francisco A.; Peron, Thomas K. DM.; Ji, Peng; Kurths, Jürgen
2016-01-01
Synchronization of an ensemble of oscillators is an emergent phenomenon present in several complex systems, ranging from social and physical to biological and technological systems. The most successful approach to describe how coherent behavior emerges in these complex systems is given by the paradigmatic Kuramoto model. This model has been traditionally studied in complete graphs. However, besides being intrinsically dynamical, complex systems present very heterogeneous structure, which can be represented as complex networks. This report is dedicated to review main contributions in the field of synchronization in networks of Kuramoto oscillators. In particular, we provide an overview of the impact of network patterns on the local and global dynamics of coupled phase oscillators. We cover many relevant topics, which encompass a description of the most used analytical approaches and the analysis of several numerical results. Furthermore, we discuss recent developments on variations of the Kuramoto model in networks, including the presence of noise and inertia. The rich potential for applications is discussed for special fields in engineering, neuroscience, physics and Earth science. Finally, we conclude by discussing problems that remain open after the last decade of intensive research on the Kuramoto model and point out some promising directions for future research.
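The Kuramoto model on a network, as reviewed here, is easy to integrate numerically. The sketch below uses a random graph, Gaussian natural frequencies, and a coupling strength chosen (as an illustrative assumption) to be above the synchronization threshold, so the order parameter r rises from incoherence toward partial coherence:

```python
import numpy as np
import networkx as nx

# Kuramoto phase oscillators coupled on a random network (Euler integration).
rng = np.random.default_rng(0)
N = 50
G = nx.erdos_renyi_graph(N, 0.2, seed=0)
A = nx.to_numpy_array(G)
omega = rng.normal(0.0, 0.2, size=N)       # natural frequencies
theta = rng.uniform(0.0, 2 * np.pi, N)     # random initial phases
K, dt = 5.0, 0.01

def order_parameter(phases):
    """Kuramoto order parameter r = |<exp(i*theta)>|; r = 1 is full coherence."""
    return abs(np.mean(np.exp(1j * phases)))

r0 = order_parameter(theta)
for _ in range(5000):
    # d(theta_i)/dt = omega_i + (K/N) * sum_j A_ij * sin(theta_j - theta_i)
    coupling = (A * np.sin(theta[None, :] - theta[:, None])).sum(axis=1)
    theta = theta + dt * (omega + (K / N) * coupling)
r1 = order_parameter(theta)
print(f"order parameter: {r0:.2f} -> {r1:.2f}")
```

The review's central theme is how replacing the complete graph with heterogeneous network topologies (encoded here in the adjacency matrix A) changes the onset and character of this transition to coherence.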
Reduced Complexity Channel Models for IMT-Advanced Evaluation
Yu Zhang
2009-01-01
Accuracy and complexity are two crucial aspects of the applicability of a channel model for wideband multiple-input multiple-output (MIMO) systems. For a small number of antenna element pairs, correlation-based models have lower computational complexity, while geometry-based stochastic models (GBSMs) can provide more accurate modeling of real radio propagation. This paper investigates several potential simplifications of the GBSM to reduce the complexity with minimal impact on accuracy. In addition, we develop a set of broadband metrics which enable a thorough investigation of the differences between the GBSMs and the simplified models. The impact of various random variables employed by the original GBSM on the system-level simulation is also studied. Both simulation results and a measurement campaign show that complexity can be reduced significantly with a negligible loss of accuracy in the proposed metrics. As an example, in the presented scenarios, the computational time can be reduced by up to 57% while keeping the relative deviation of 5% outage capacity within 5%.
Complexity, accuracy and practical applicability of different biogeochemical model versions
Los, F. J.; Blaas, M.
2010-04-01
The construction of validated biogeochemical model applications as prognostic tools for the marine environment involves a large number of choices, particularly with respect to the level of detail of the physical, chemical and biological aspects. Generally speaking, enhanced complexity might enhance veracity, accuracy and credibility. However, very complex models are not necessarily effective or efficient forecast tools. In this paper, models of varying degrees of complexity are evaluated with respect to their forecast skills. In total, 11 biogeochemical model variants have been considered based on four different horizontal grids. The applications vary in spatial resolution, in vertical resolution (2DH versus 3D), in nature of transport, in turbidity and in the number of phytoplankton species. Included models range from 15-year-old applications with relatively simple physics up to present state-of-the-art 3D models. The same year, 2003, has been simulated with all applications. During the model intercomparison it was noticed that the 'OSPAR' Goodness of Fit cost function (Villars and de Vries, 1998) leads to insufficient discrimination between models. This results in models obtaining similar scores although closer inspection of the results reveals large differences. In this paper, therefore, we have adopted the target diagram by Jolliff et al. (2008), which provides a concise and more contrasting picture of model skill on the entire model domain and for the entire period of the simulations. Correctness in prediction of the mean and the variability are separated, enhancing insight into model functioning. Using the target diagrams it is demonstrated that recent models are more consistent and have smaller biases. Graphical inspection of time series confirms this, as the level of variability appears more realistic, also given the multi-annual background statistics of the observations. Nevertheless, whether the improvements are all genuine for the particular
Modelling biological complexity: a physical scientist's perspective.
Coveney, Peter V; Fowler, Philip W
2005-09-22
We discuss the modern approaches of complexity and self-organization to understanding dynamical systems and how these concepts can inform current interest in systems biology. From the perspective of a physical scientist, it is especially interesting to examine how the differing weights given to philosophies of science in the physical and biological sciences impact the application of the study of complexity. We briefly describe how the dynamics of the heart and circadian rhythms, canonical examples of systems biology, are modelled by sets of nonlinear coupled differential equations, which have to be solved numerically. A major difficulty with this approach is that all the parameters within these equations are not usually known. Coupled models that include biomolecular detail could help solve this problem. Coupling models across large ranges of length- and time-scales is central to describing complex systems and therefore to biology. Such coupling may be performed in at least two different ways, which we refer to as hierarchical and hybrid multiscale modelling. While limited progress has been made in the former case, the latter is only beginning to be addressed systematically. These modelling methods are expected to bring numerous benefits to biology, for example, the properties of a system could be studied over a wider range of length- and time-scales, a key aim of systems biology. Multiscale models couple behaviour at the molecular biological level to that at the cellular level, thereby providing a route for calculating many unknown parameters as well as investigating the effects at, for example, the cellular level, of small changes at the biomolecular level, such as a genetic mutation or the presence of a drug. The modelling and simulation of biomolecular systems is itself very computationally intensive; we describe a recently developed hybrid continuum-molecular model, HybridMD, and its associated molecular insertion algorithm, which point the way towards the
A Polynomial Term Structure Model with Macroeconomic Variables
José Valentim Vicente
2007-06-01
Recently, a myriad of factor models including macroeconomic variables have been proposed to analyze the yield curve. We present an alternative factor model where term structure movements are captured by Legendre polynomials mimicking the statistical factor movements identified by Litterman and Scheinkman (1991). We estimate the model with Brazilian Foreign Exchange Coupon data, adopting a Kalman filter, under two versions: the first uses only latent factors and the second includes macroeconomic variables. We study its ability to predict out-of-sample term structure movements, when compared to a random walk. We also discuss results on the impulse response function of macroeconomic variables.
Gaussian Process Structural Equation Models with Latent Variables
Silva, Ricardo
2010-01-01
In a variety of disciplines such as social sciences, psychology, medicine and economics, the recorded data are considered to be noisy measurements of latent variables connected by some causal structure. This corresponds to a family of graphical models known as the structural equation model with latent variables. While linear non-Gaussian variants have been well-studied, inference in nonparametric structural equation models is still underdeveloped. We introduce a sparse Gaussian process parameterization that defines a non-linear structure connecting latent variables, unlike common formulations of Gaussian process latent variable models. An efficient Markov chain Monte Carlo procedure is described. We evaluate the stability of the sampling procedure and the predictive ability of the model compared against the current practice.
Modelling and forecasting electricity price variability
Haugom, Erik
2012-07-01
The liberalization of electricity sectors around the world has induced a need for financial electricity markets. This thesis is mainly focused on calculating, modelling, and predicting volatility for financial electricity prices. The four first essays examine the liberalized Nordic electricity market. The purposes in these papers are to describe some stylized properties of high-frequency financial electricity data and to apply models that can explain and predict variation in volatility. The fifth essay examines how information from high-frequency electricity forward contracts can be used in order to improve electricity spot-price volatility predictions. This essay uses data from the Pennsylvania-New Jersey-Maryland wholesale electricity market in the U.S.A. Essay 1 describes some stylized properties of financial high-frequency electricity prices, their returns and volatilities at the Nordic electricity exchange, Nord Pool. The analyses focus on distribution properties, serial correlation, volatility clustering, the influence of extreme events and seasonality in the various measures. The objective of Essay 2 is to calculate, model, and predict realized volatility of financial electricity prices for quarterly and yearly contracts. The total variation is also separated into continuous and jump variation. Various market measures are also included in the models in order potentially to improve volatility predictions. Essay 3 compares day-ahead predictions of Nord Pool financial electricity price volatility obtained from a GARCH approach with those obtained using standard time-series techniques on realized volatility. The performances of a total of eight models (two representing the GARCH family and six representing standard autoregressive models) are compared and evaluated. Essay 4 examines whether predictions of day-ahead and week-ahead volatility can be improved by additionally including volatility and covariance effects from related financial electricity contracts
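The realized-volatility measure central to Essays 2 and 3 is simply the sum of squared intraday returns over each day. The sketch below illustrates the computation on synthetic data (the Nord Pool and PJM series are not available here, and the sampling frequency is an illustrative choice):

```python
import numpy as np

# Realized volatility: RV_day = sum of squared intraday log-returns.
rng = np.random.default_rng(5)
# 250 trading days of 5-minute returns (288 per day), synthetic iid noise.
intraday = rng.normal(0.0, 0.001, size=(250, 288))
rv = (intraday ** 2).sum(axis=1)
print(f"mean daily RV: {rv.mean():.6f}")
```

The daily RV series produced this way is what standard autoregressive models are then fitted to, in contrast to the GARCH approach, which works directly on daily returns.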
Nuryanto, Danang Eko
2016-02-01
The spatiotemporal patterns of Indonesian Maritime Continent (IMC) convective activity were documented using two different variables, i.e., cloud and wind datasets. In this study, a Complex Empirical Orthogonal Function (CEOF) analysis was used to combine these variables. This method was applied to represent the land-sea-atmosphere interaction of diurnal convective activity in the IMC. This study used a pseudo-vector to define complex signals, with a convective index (cloud) as the imaginary part and convergence (wind) as the real part. The results showed that the phase patterns of the CEOF were more consistent than those of the pseudo-vector. CEOF1 and CEOF2 show semi-annual and annual cycles, respectively. Spatially, CEOF1 represents common patterns, whereas CEOF2 leans more toward local patterns and tends to be random.
Fractional Langevin model of gait variability
Latka Miroslaw
2005-08-01
The stride interval in healthy human gait fluctuates from step to step in a random manner and scaling of the interstride interval time series motivated previous investigators to conclude that this time series is fractal. Early studies suggested that gait is a monofractal process, but more recent work indicates the time series is weakly multifractal. Herein we present additional evidence for the weakly multifractal nature of gait. We use the stride interval time series obtained from ten healthy adults walking at a normal relaxed pace for approximately fifteen minutes each as our data set. A fractional Langevin equation is constructed to model the underlying motor control system in which the order of the fractional derivative is itself a stochastic quantity. Using this model we find the fractal dimension for each of the ten data sets to be in agreement with earlier analyses. However, with the present model we are able to draw additional conclusions regarding the nature of the control system guiding walking. The analysis presented herein suggests that the observed scaling in interstride interval data may not be due to long-term memory alone, but may, in fact, be due partly to the statistics.
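The fractal-scaling analyses this line of work builds on are typically carried out with detrended fluctuation analysis (DFA). The stride-interval data are not available here, so the sketch below applies a minimal DFA implementation to white noise, for which the scaling exponent is known to be about 0.5 (long-memory series would give larger exponents):

```python
import numpy as np

def dfa(x, scales):
    """Detrended fluctuation analysis: F(n) for each window size n."""
    y = np.cumsum(x - np.mean(x))              # integrated profile
    F = []
    for n in scales:
        m = len(y) // n
        segments = y[: m * n].reshape(m, n)
        t = np.arange(n)
        sq = []
        for seg in segments:
            c = np.polyfit(t, seg, 1)          # linear detrend per window
            sq.append(np.mean((seg - np.polyval(c, t)) ** 2))
        F.append(np.sqrt(np.mean(sq)))
    return np.array(F)

rng = np.random.default_rng(3)
x = rng.normal(size=2**14)                     # white noise: expect alpha ~ 0.5
scales = np.array([16, 32, 64, 128, 256])
F = dfa(x, scales)
alpha = np.polyfit(np.log(scales), np.log(F), 1)[0]
print(f"alpha = {alpha:.2f}")
```

Multifractal analyses generalize this by examining how a whole family of such exponents varies with the moment order, which is what distinguishes weakly multifractal gait from a monofractal process.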
Modeling and design of energy efficient variable stiffness actuators
Visser, L.C.; Carloni, Raffaella; Ünal, Ramazan; Stramigioli, Stefano
In this paper, we provide a port-based mathematical framework for analyzing and modeling variable stiffness actuators. The framework provides important insights in the energy requirements and, therefore, it is an important tool for the design of energy efficient variable stiffness actuators. Based
A model for variability design rationale in SPL
Galvao, I.; van den Broek, P.M.; Aksit, Mehmet
2010-01-01
The management of variability in software product lines goes beyond the definition of variations, traceability and configurations. It involves a lot of assumptions about the variability and related models, which are made by the stakeholders all over the product line but almost never handled explicit
Variable Selection in the Partially Linear Errors-in-Variables Models for Longitudinal Data
Yi-ping YANG; Liu-gen XUE; Wei-hu CHENG
2012-01-01
This paper proposes a new approach for variable selection in partially linear errors-in-variables (EV) models for longitudinal data by penalizing appropriate estimating functions. We apply the SCAD penalty to simultaneously select significant variables and estimate unknown parameters. The rate of convergence and the asymptotic normality of the resulting estimators are established. Furthermore, with proper choice of regularization parameters, we show that the proposed estimators perform as well as the oracle procedure. A new algorithm is proposed for solving the penalized estimating equation. The asymptotic results are augmented by a simulation study.
Modeling Candle Flame Behavior In Variable Gravity
Alsairafi, A.; Tien, J. S.; Lee, S. T.; Dietrich, D. L.; Ross, H. D.
2003-01-01
The burning of a candle, as a typical non-propagating diffusion flame, has been used by a number of researchers to study the effects of electric fields on flames, spontaneous flame oscillation and flickering phenomena, and flame extinction. In normal gravity, the heat released from combustion creates buoyant convection that draws oxygen into the flame. The strength of the buoyant flow depends on the gravitational level, and it is expected that the flame shape, size, and candle burning rate will vary with gravity. Experimentally, there exist studies of candle burning in enhanced gravity (i.e. higher than normal earth gravity, g(sub e)), and in microgravity in drop towers and space-based facilities. There are, however, no reported experimental data on candle burning in partial gravity (g < g(sub e)). In a previous model of the candle flame, buoyant forces were neglected. The treatment of the momentum equation was simplified using a potential flow approximation. Although the predicted flame characteristics agreed well with the experimental results, the model cannot be extended to cases with buoyant flows. In addition, because of the use of potential flow, the no-slip boundary condition is not satisfied on the wick surface, so there is some uncertainty in the accuracy of the predicted flow field. In the present modeling effort, the full Navier-Stokes momentum equations with the body force term are included. This enables us to study the effect of gravity on candle flames (with zero gravity as the limiting case). In addition, we consider radiation effects in more detail by solving the radiation transfer equation. In the previous study, flame radiation was treated as a simple loss term in the energy equation. Emphasis of the present model is on the gas-phase processes. Therefore, the detailed heat and mass transfer phenomena inside the porous wick are not treated. Instead, it is assumed that a thin layer of liquid fuel coats the entire wick surface during the burning process. This is the limiting case that the mass
Multi-wheat-model ensemble responses to interannual climatic variability
Ruane, A C; Hudson, N I; Asseng, S
2016-01-01
evaluate results against the interannual variability of growing season temperature, precipitation, and solar radiation. The amount of information used for calibration has only a minor effect on most models' climate response, and even small multi-model ensembles prove beneficial. Wheat model clusters reveal … common characteristics of yield response to climate; however, models rarely share the same cluster at all four sites, indicating substantial independence. Only a weak relationship (R2 ≤ 0.24) was found between the models' sensitivities to interannual temperature variability and their response to long…
Delineating Parameter Unidentifiabilities in Complex Models
Raman, Dhruva V; Papachristodoulou, Antonis
2016-01-01
Scientists use mathematical modelling to understand and predict the properties of complex physical systems. In highly parameterised models there often exist relationships between parameters over which model predictions are identical, or nearly so. These are known as structural or practical unidentifiabilities, respectively. They are hard to diagnose and make reliable parameter estimation from data impossible. They furthermore imply the existence of an underlying model simplification. We describe a scalable method for detecting unidentifiabilities, and the functional relations defining them, for generic models. This allows for model simplification, and appreciation of which parameters (or functions thereof) cannot be estimated from data. Our algorithm can identify features such as redundant mechanisms and fast timescale subsystems, as well as the regimes in which such approximations are valid. We base our algorithm on a novel quantification of regional parametric sensitivity: multiscale sloppiness. Traditional...
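Structural unidentifiability can be seen in miniature with a sensitivity (Jacobian) computation: when a model depends on two parameters only through their product, the sensitivity matrix is rank-deficient, and the null direction identifies the unidentifiable parameter combination. The toy model below is an illustration of that idea, not the paper's algorithm:

```python
import numpy as np

# Toy model y = (a*b)*x: a and b enter only through their product,
# so the direction (a, -b) in parameter space leaves predictions unchanged.
a, b = 2.0, 3.0
x = np.linspace(0.0, 1.0, 20)

# Sensitivity matrix: partial derivatives of y w.r.t. a and b at each x.
J = np.column_stack([b * x, a * x])      # columns are proportional -> rank 1
eigvals = np.linalg.eigvalsh(J.T @ J)    # ascending order
print(eigvals)                           # smallest eigenvalue is (numerically) 0
```

A zero eigenvalue of J^T J flags an exactly unidentifiable combination; near-zero eigenvalues correspond to the practical unidentifiabilities (and "sloppy" directions) the paper's multiscale-sloppiness measure is designed to detect.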
Computing the complexity for Schelling segregation models
Gerhold, Stefan; Glebsky, Lev; Schneider, Carsten; Weiss, Howard; Zimmermann, Burkhard
2008-12-01
The Schelling segregation models are "agent based" population models, where individual members of the population (agents) interact directly with other agents and move in space and time. In this note we study one-dimensional Schelling population models as finite dynamical systems. We define a natural notion of entropy which measures the complexity of the family of these dynamical systems. The entropy counts the asymptotic growth rate of the number of limit states. We find formulas and deduce precise asymptotics for the number of limit states, which enable us to explicitly compute the entropy.
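A one-dimensional Schelling dynamic of the kind studied here can be simulated in a few lines. The update rule below (an unhappy agent swaps with a random unhappy agent of the other type) is a common simplified variant, assumed for illustration; the paper's exact rule and its entropy count are not reproduced:

```python
import numpy as np

rng = np.random.default_rng(7)
N, radius = 60, 2
state = rng.integers(0, 2, size=N)           # two agent types on a ring

def unhappy(state):
    """Agents with fewer than half of their neighbors of the same type."""
    idx = []
    for i in range(N):
        nbrs = [(i + d) % N for d in range(-radius, radius + 1) if d != 0]
        same = sum(state[j] == state[i] for j in nbrs)
        if same < len(nbrs) / 2:
            idx.append(i)
    return idx

for _ in range(2000):
    u = unhappy(state)
    a = [i for i in u if state[i] == 0]
    b = [i for i in u if state[i] == 1]
    if not a or not b:
        break                                # a limit state: no swap possible
    i, j = rng.choice(a), rng.choice(b)
    state[i], state[j] = state[j], state[i]

blocks = int(np.sum(state != np.roll(state, 1)) // 2)
print("segregated blocks:", blocks)
```

The limit states of such a finite dynamical system are segregated block configurations; counting how their number grows with N is what the paper's entropy formalizes.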
A new approach to model the variability of karstic recharge
A. Hartmann
2012-02-01
In karst systems, near-surface dissolution of carbonate rock results in a high spatial and temporal variability of groundwater recharge. Adequately representing the dominant recharge processes in hydrological models is still a challenge, especially in data-scarce regions. In this study, we developed a recharge model that is based on a perceptual model of the epikarst. It represents epikarst heterogeneity as a set of system property distributions to produce not only a single recharge time series, but a variety of time series representing the spatial recharge variability. We tested the new model with a unique set of spatially distributed flow and tracer observations in a karstic cave at Mt. Carmel, Israel. We transformed the spatial variability into statistical variables and applied an iterative calibration strategy in which more and more data were added to the calibration. Thereby, we could show that the model is only able to produce realistic results when the information about the spatial variability of the observations is included in the model calibration. We could also show that tracer information improves the model performance if data about the variability are not included.
Geometrically nonlinear creep mathematical models of shells with variable thickness
V.M. Zhgoutov
2012-08-01
Calculations of the strength, stability, and vibration of shell structures play an important role in the design of modern devices, machines, and structures. However, the behavior of thin-walled structures of variable thickness, in which geometric nonlinearity, lateral shifts, viscoelasticity (creep) of the material, profile variability, and thermal deformation take place, is not studied enough. In this paper, mathematical deformation models of variable-thickness shells (smoothly variable and ribbed shells), experiencing either mechanical load or a permanent temperature field and taking into account geometrical nonlinearity, creep, and transverse shear, were developed. The refined geometrical relations for geometrically nonlinear and stability problems are given.
Boolean Variables in Economic Models Solved by Linear Programming
Lixandroiu D.
2014-12-01
The article analyses the use of logical variables in economic models solved by linear programming. Focus is given to the presentation of the way logical constraints are obtained and of the definition rules based on predicate logic. Emphasis is also put on the possibility of using logical variables in constructing a linear objective function on intervals. Such functions are encountered when costs or unit receipts differ on disjoint intervals of the production volumes achieved or sold. Other uses of Boolean variables are connected to constraint systems with conditions and the case of a variable which takes values from a finite set of integers.
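The standard use of a Boolean variable in such models is the fixed-charge (big-M) construction: a binary y switches a cost on only when production x is positive, linked by x <= M*y. A minimal sketch with SciPy's mixed-integer solver, using illustrative numbers (setup cost 40, unit profit 5, capacity 30, minimum demand 10):

```python
import numpy as np
from scipy.optimize import milp, LinearConstraint, Bounds

# Variables: [x, y] with x = production level, y = Boolean setup decision.
# Maximize 5x - 40y; milp minimizes, so negate the objective.
c = np.array([-5.0, 40.0])
constraints = [
    LinearConstraint(np.array([[1.0, -30.0]]), -np.inf, 0.0),  # x <= 30*y
    LinearConstraint(np.array([[1.0, 0.0]]), 10.0, np.inf),    # x >= 10 (demand)
]
res = milp(c=c,
           constraints=constraints,
           integrality=np.array([0, 1]),       # y must be integer (here 0/1)
           bounds=Bounds([0.0, 0.0], [np.inf, 1.0]))
print(res.x, -res.fun)                         # optimal plan and profit
```

Because demand forces x >= 10, the linking constraint forces y = 1, and the optimum is x = 30 with profit 5·30 - 40 = 110. Piecewise objectives over disjoint intervals are built from several such binary switches, one per interval.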
Using structural equation modeling to investigate relationships among ecological variables
Malaeb, Z.A.; Kevin, Summers J.; Pugesek, B.H.
2000-01-01
Structural equation modeling is an advanced multivariate statistical process with which a researcher can construct theoretical concepts, test their measurement reliability, hypothesize and test a theory about their relationships, take into account measurement errors, and consider both direct and indirect effects of variables on one another. Latent variables are theoretical concepts that unite phenomena under a single term, e.g., ecosystem health, environmental condition, and pollution (Bollen, 1989). Latent variables are not measured directly but can be expressed in terms of one or more directly measurable variables called indicators. For some researchers, defining, constructing, and examining the validity of latent variables may be the end task in itself. For others, testing hypothesized relationships of latent variables may be of interest. We analyzed the correlation matrix of eleven environmental variables from the U.S. Environmental Protection Agency's (USEPA) Environmental Monitoring and Assessment Program for Estuaries (EMAP-E) using methods of structural equation modeling. We hypothesized and tested a conceptual model to characterize the interdependencies between four latent variables: sediment contamination, natural variability, biodiversity, and growth potential. In particular, we were interested in measuring the direct, indirect, and total effects of sediment contamination and natural variability on biodiversity and growth potential. The model fit the data well and accounted for 81% of the variability in biodiversity and 69% of the variability in growth potential. It revealed a positive total effect of natural variability on growth potential that otherwise would have been judged negative had we not considered indirect effects. That is, natural variability had a negative direct effect on growth potential of magnitude -0.3251 and a positive indirect effect mediated through biodiversity of magnitude 0.4509, yielding a net positive total effect of 0.1258.
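The effect decomposition quoted in the abstract is simple path arithmetic: the total effect is the direct effect plus the indirect effect mediated through biodiversity. Using the reported coefficients:

```python
# Effect decomposition from the abstract: total = direct + indirect,
# where the indirect path runs through the mediating latent variable.
direct = -0.3251    # natural variability -> growth potential
indirect = 0.4509   # natural variability -> biodiversity -> growth potential
total = direct + indirect
print(round(total, 4))  # prints 0.1258, a small positive net effect
```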
Nonlinear combined forecasting model based on fuzzy adaptive variable weight and its application
JIANG Ai-hua; MEI Chi; E Jia-qiang; SHI Zhang-ming
2010-01-01
In order to enhance forecasting precision for nonlinear time series in complex industrial systems, a new nonlinear fuzzy adaptive variable-weight combined forecasting model was established, using the concepts of relative error, the change tendency of the forecasted object, gray basic weight, and an adaptive control coefficient, on the basis of the fuzzy variable-weight method. A fuzzy adaptive variable-weight combined forecasting and management system was developed on the Visual Basic 6.0 platform. The application results reveal that the forecasting precision of the new nonlinear combined model is higher than that of single models and other combined forecasting models, and that the combined forecasting and management system is a powerful tool for decision support in complex industrial systems.
Estimation in the polynomial errors-in-variables model
Anonymous
2002-01-01
Estimators are presented for the coefficients of the polynomial errors-in-variables (EV) model when replicated observations are taken at some experimental points. These estimators are shown to be strongly consistent under mild conditions.
Stochastic simulation of fluid flow in porous media by the complex variable expression method
SONG Hui-bin; ZHAN Mei-li; SHENG Jin-chang; LUO Yu-long
2013-01-01
A stochastic simulation of fluid flow in porous media using a complex variable expression method (SFCM) is presented in this paper. Hydraulic conductivity is considered a random variable and is expressed in complex variable form, in which the real part is a deterministic value and the imaginary part is a variable value. The stochastic seepage flow is simulated with the SFCM and compared with results calculated with the Monte Carlo stochastic finite element method. In using the Monte Carlo method to simulate the stochastic seepage flow field, the hydraulic conductivity is assumed to follow three different probability distributions, sampled by a random sampling method. The obtained seepage flow field is examined through skewness analysis, and the skewed-distribution probability density function is given. The head mode value and the head comprehensive standard deviation are used to represent the statistics of the results obtained by the Monte Carlo method. The stochastic seepage flow field simulated by the SFCM is confirmed to be numerically similar to that given by the Monte Carlo method. The range of the coefficient of variation of hydraulic conductivity in the SFCM is larger than previously used in stochastic seepage flow field simulations, and the computation time is short. The results prove that the SFCM is a convenient method for solving such complex problems.
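For contrast with the SFCM, the Monte Carlo side of the comparison can be sketched in a few lines. This toy version (a two-layer 1D column with lognormal conductivities; all numbers are assumptions, not from the paper) samples the conductivity and collects head statistics:

```python
# Minimal Monte Carlo sketch (not the SFCM itself): hydraulic conductivity
# treated as a lognormal random variable in a two-layer 1D column. The
# interface head h solves flux continuity k1*(h0 - h) = k2*(h - h1).
import random
import statistics

random.seed(0)
h0, h1 = 10.0, 0.0                      # boundary heads (assumed)
heads = []
for _ in range(5000):
    k1 = random.lognormvariate(0.0, 0.5)
    k2 = random.lognormvariate(0.0, 0.5)
    heads.append((k1 * h0 + k2 * h1) / (k1 + k2))

# Summary statistics of the simulated head, as in a Monte Carlo study.
print(round(statistics.mean(heads), 2), round(statistics.stdev(heads), 2))
```

The mean head sits near the midpoint of the boundary values by symmetry, while the spread reflects the assumed coefficient of variation of the conductivity.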
Bayesian Network Models for Local Dependence among Observable Outcome Variables
Almond, Russell G.; Mulder, Joris; Hemat, Lisa A.; Yan, Duanli
2009-01-01
Bayesian network models offer a large degree of flexibility for modeling dependence among observables (item outcome variables) from the same task, which may be dependent. This article explores four design patterns for modeling locally dependent observations: (a) no context--ignores dependence among observables; (b) compensatory context--introduces…
The relationship between cub and loglinear models with latent variables
Oberski, D. L.; Vermunt, J. K.
2015-01-01
The "combination of uniform and shifted binomial" (cub) model is a distribution for ordinal variables that has received considerable recent attention and specialized development. This article notes that the cub model is a special case of the well-known loglinear latent class model, an observation tha
Multi-wheat-model ensemble responses to interannual climate variability
Ruane, Alex C.; Hudson, Nicholas I.; Asseng, Senthold; Camarrano, Davide; Ewert, Frank; Martre, Pierre; Boote, Kenneth J.; Thorburn, Peter J.; Aggarwal, Pramod K.; Angulo, Carlos; Basso, Bruno; Bertuzzi, Patrick; Biernath, Christian; Brisson, Nadine; Challinor, Andrew J.; Doltra, Jordi; Gayler, Sebastian; Goldberg, Richard; Grant, Robert F.; Heng, Lee; Hooker, Josh; Hunt, Leslie A.; Ingwersen, Joachim; Izaurralde, Roberto C.; Kersebaum, Kurt Christian; Kumar, Soora Naresh; Müller, Christoph; Nendel, Claas; O'Leary, Garry; Olesen, Jørgen E.; Osborne, Tom M.; Palosuo, Taru; Priesack, Eckart; Ripoche, Dominique; Rötter, Reimund P.; Semenov, Mikhail A.; Shcherbak, Iurii; Steduto, Pasquale; Stöckle, Claudio O.; Stratonovitch, Pierre; Streck, Thilo; Supit, Iwan; Tao, Fulu; Travasso, Maria; Waha, Katharina; Wallach, Daniel; White, Jeffrey W.; Wolf, Joost
2016-01-01
We compare 27 wheat models' yield responses to interannual climate variability, analyzed at locations in Argentina, Australia, India, and The Netherlands as part of the Agricultural Model Intercomparison and Improvement Project (AgMIP) Wheat Pilot. Each model simulated 1981-2010 grain yield, and
A Non-Gaussian Spatial Generalized Linear Latent Variable Model
Irincheeva, Irina
2012-08-03
We consider a spatial generalized linear latent variable model with and without normality distributional assumption on the latent variables. When the latent variables are assumed to be multivariate normal, we apply a Laplace approximation. To relax the assumption of marginal normality in favor of a mixture of normals, we construct a multivariate density with Gaussian spatial dependence and given multivariate margins. We use the pairwise likelihood to estimate the corresponding spatial generalized linear latent variable model. The properties of the resulting estimators are explored by simulations. In the analysis of an air pollution data set the proposed methodology uncovers weather conditions to be a more important source of variability than air pollution in explaining all the causes of non-accidental mortality excluding accidents. © 2012 International Biometric Society.
Total Variability Modeling using Source-specific Priors
Shepstone, Sven Ewan; Lee, Kong Aik; Li, Haizhou
2016-01-01
In total variability modeling, variable-length speech utterances are mapped to fixed low-dimensional i-vectors. Central to computing the total variability matrix and to i-vector extraction is the computation of the posterior distribution of a latent variable conditioned on an observed feature sequence of an utterance. In both cases the prior for the latent variable is assumed to be non-informative, since for homogeneous datasets there is no gain in generality in using an informative prior. This work shows that in the heterogeneous case, using informative priors for computing the posterior can lead to favorable results. We focus on modeling the priors using a minimum divergence criterion or factor analysis techniques. Tests on the NIST 2008 and 2010 Speaker Recognition Evaluation (SRE) datasets show that our proposed method beats four baselines: for i-vector extraction using an already...
Grace, J.B.; Bollen, K.A.
2008-01-01
Structural equation modeling (SEM) holds the promise of providing natural scientists the capacity to evaluate complex multivariate hypotheses about ecological systems. Building on its predecessors, path analysis and factor analysis, SEM allows for the incorporation of both observed and unobserved (latent) variables into theoretically-based probabilistic models. In this paper we discuss the interface between theory and data in SEM and the use of an additional variable type, the composite. In simple terms, composite variables specify the influences of collections of other variables and can be helpful in modeling heterogeneous concepts of the sort commonly of interest to ecologists. While long recognized as a potentially important element of SEM, composite variables have received very limited use, in part because of a lack of theoretical consideration, but also because of difficulties that arise in parameter estimation when using conventional solution procedures. In this paper we present a framework for discussing composites and demonstrate how the use of partially-reduced-form models can help to overcome some of the parameter estimation and evaluation problems associated with models containing composites. Diagnostic procedures for evaluating the most appropriate and effective use of composites are illustrated with an example from the ecological literature. It is argued that an ability to incorporate composite variables into structural equation models may be particularly valuable in the study of natural systems, where concepts are frequently multifaceted and the influence of suites of variables is often of interest. © Springer Science+Business Media, LLC 2007.
A variable-order fractal derivative model for anomalous diffusion
Liu Xiaoting
2017-01-01
Full Text Available This paper develops a variable-order fractal derivative model for anomalous diffusion. Previous investigations have indicated that the medium structure, fractal dimension, or porosity may change with time or space during solute transport processes, resulting in time- or space-dependent anomalous diffusion phenomena. This study therefore introduces a variable-order fractal derivative diffusion model, in which the index of the fractal derivative depends on the temporal moment or spatial position, to characterize the above-mentioned anomalous diffusion (or transport) processes. Compared with other models, the main advantages in description and physical explanation of the new model are explored by numerical simulation. Further discussions of the differences in computational efficiency, diffusion behavior, and heavy-tail phenomena between the new model and the variable-order fractional derivative model are also offered.
Instrumental Variable Bayesian Model Averaging via Conditional Bayes Factors
Karl, Anna; Lenkoski, Alex
2012-01-01
We develop a method to perform model averaging in two-stage linear regression systems subject to endogeneity. Our method extends an existing Gibbs sampler for instrumental variables to incorporate a component of model uncertainty. Direct evaluation of model probabilities is intractable in this setting. We show that by nesting model moves inside the Gibbs sampler, model comparison can be performed via conditional Bayes factors, leading to straightforward calculations. This new Gibbs sampler is...
Modeling and Simulation of a Variable Spray-Rate System
Shi, Yan; Liang, Anbo; Yuan, Haibo; Zhang, Chunmei; Li, Junlong
Variable-rate spraying technology is an important topic and development direction in current plant protection machinery; it can effectively save pesticide and lighten the burden on the agricultural ecological environment by adjusting to the characteristics of the spraying targets and the travel speed of the machine. This paper establishes a mathematical model and transfer function of the variable spraying system based on the designed hardware of the variable spraying machine, and uses a PID control algorithm for simulation in MATLAB. The simulation results show that the model can conveniently control the spray volume and achieves satisfactory control.
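The simulation idea can be sketched compactly. The gains and plant constants below are illustrative assumptions, not the paper's: a discrete PID loop drives a first-order spray-rate plant toward a setpoint, analogous to the MATLAB simulation described above.

```python
# Discrete PID controller on a first-order plant tau*y' = -y + u.
# All parameters are assumed for illustration only.

def simulate(kp=2.0, ki=1.0, kd=0.1, setpoint=1.0, tau=0.5, dt=0.01, steps=2000):
    y, integral, prev_err = 0.0, 0.0, setpoint
    for _ in range(steps):
        err = setpoint - y
        integral += err * dt
        u = kp * err + ki * integral + kd * (err - prev_err) / dt
        prev_err = err
        y += dt * (-y + u) / tau        # Euler step of the plant dynamics
    return y

print(round(simulate(), 3))  # settles near the setpoint thanks to integral action
```

The integral term removes steady-state error, which is why the loop reaches the spray-rate setpoint rather than an offset value.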
Modeling the variability of firing rate of retinal ganglion cells.
Levine, M W
1992-12-01
Impulse trains simulating the maintained discharges of retinal ganglion cells were generated by digital realizations of the integrate-and-fire model. If the mean rate were set by a "bias" level added to "noise," the variability of firing would be related to the mean firing rate as an inverse square root law; the maintained discharges of retinal ganglion cells deviate systematically from such a relationship. A more realistic relationship can be obtained if the integrate-and-fire mechanism is "leaky"; with this refinement, the integrate-and-fire model captures the essential features of the data. However, the model shows that the distribution of intervals is insensitive to that of the underlying variability. The leakage time constant, threshold, and distribution of the noise are confounded, rendering the model unspecifiable. Another aspect of variability is presented by the variance of responses to repeated discrete stimuli. The variance of response rate increases with the mean response amplitude; the nature of that relationship depends on the duration of the periods in which the response is sampled. These results have defied explanation. But if it is assumed that variability depends on mean rate in the way observed for maintained discharges, the variability of responses to abrupt changes in lighting can be predicted from the observed mean responses. The parameters that provide the best fits for the variability of responses also provide a reasonable fit to the variability of maintained discharges.
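A minimal leaky integrate-and-fire sketch of the model class discussed above (parameters are illustrative assumptions): the membrane state integrates a bias plus noise, leaks toward zero, and fires on crossing a threshold.

```python
# Leaky integrate-and-fire with "bias plus noise" drive, as in the abstract's
# description. Parameter values are assumptions for illustration.
import random

def lif_rate(bias, noise_sd=0.3, leak_tau=20.0, threshold=1.0,
             dt=1.0, steps=20000, seed=1):
    rng = random.Random(seed)
    v, spikes = 0.0, 0
    for _ in range(steps):
        # Integrate bias, leak toward zero, and add Gaussian noise.
        v += dt * (bias - v / leak_tau) + noise_sd * rng.gauss(0.0, 1.0) * dt ** 0.5
        if v >= threshold:
            spikes += 1
            v = 0.0                     # reset after a spike
    return spikes / steps               # spikes per time step

# A higher bias level raises the mean firing rate.
print(lif_rate(0.05) < lif_rate(0.2))
```

With the leak included, mean rate and variability are linked through both the bias and the leakage time constant, matching the abstract's point that these parameters are confounded.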
Using Enthalpy as a Prognostic Variable in Atmospheric Modelling with Variable Composition
2016-04-14
…tories, and the equation of state p = Σᵢ pᵢ = Σᵢ ρᵢRᵢT = ρRT (4). Here Rᵢ = k_B/mᵢ are the individual gas constants for each species and k_B is the… relation between the mass, pressure, and temperature fields via the equation of state (4). The use of virtual temperature in Equation (11) implies that… internal energy equation as a convenient prognostic thermodynamic variable for atmospheric modelling with variable composition, including models of
A Practical Philosophy of Complex Climate Modelling
Schmidt, Gavin A.; Sherwood, Steven
2014-01-01
We give an overview of the practice of developing and using complex climate models, as seen from experiences in a major climate modelling center and through participation in the Coupled Model Intercomparison Project (CMIP). We discuss the construction and calibration of models; their evaluation, especially through use of out-of-sample tests; and their exploitation in multi-model ensembles to identify biases and make predictions. We stress that adequacy or utility of climate models is best assessed via their skill against more naive predictions. The framework we use for making inferences about reality using simulations is naturally Bayesian (in an informal sense), and has many points of contact with more familiar examples of scientific epistemology. While the use of complex simulations in science is a development that changes much in how science is done in practice, we argue that the concepts being applied fit very much into traditional practices of the scientific method, albeit those more often associated with laboratory work.
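The "skill against more naive predictions" criterion can be made concrete with a mean-squared-error skill score relative to a climatology baseline. All series below are made-up illustrative numbers, not CMIP data:

```python
# Skill score: 1 = perfect forecast, 0 = no better than the naive baseline,
# negative = worse than the baseline. Data here are assumptions.

def skill_score(obs, model, baseline):
    mse = lambda pred: sum((o - p) ** 2 for o, p in zip(obs, pred)) / len(obs)
    return 1.0 - mse(model) / mse(baseline)

obs      = [14.1, 14.5, 13.9, 14.8, 14.2]
model    = [14.0, 14.4, 14.1, 14.6, 14.3]
climatol = [14.3] * 5                    # naive climatology forecast
print(round(skill_score(obs, model, climatol), 2))  # about 0.78 here
```

A model is judged useful only when this score is clearly positive, i.e. when it beats the naive prediction rather than merely fitting the observations.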
Intrinsic Uncertainties in Modeling Complex Systems.
Cooper, Curtis S; Bramson, Aaron L.; Ames, Arlo L.
2014-09-01
Models are built to understand and predict the behaviors of both natural and artificial systems. Because it is always necessary to abstract away aspects of any non-trivial system being modeled, we know models can potentially leave out important, even critical elements. This reality of the modeling enterprise forces us to consider the prospective impacts of those effects completely left out of a model - either intentionally or unconsidered. Insensitivity to new structure is an indication of diminishing returns. In this work, we represent a hypothetical unknown effect on a validated model as a finite perturbation whose amplitude is constrained within a control region. We find robustly that without further constraints, no meaningful bounds can be placed on the amplitude of a perturbation outside of the control region. Thus, forecasting into unsampled regions is a very risky proposition. We also present inherent difficulties with proper time discretization of models and representing inherently discrete quantities. We point out potentially worrisome uncertainties, arising from mathematical formulation alone, which modelers can inadvertently introduce into models of complex systems. Acknowledgements This work has been funded under early-career LDRD project #170979, entitled "Quantifying Confidence in Complex Systems Models Having Structural Uncertainties", which ran from 04/2013 to 09/2014. We wish to express our gratitude to the many researchers at Sandia who contributed ideas to this work, as well as feedback on the manuscript. In particular, we would like to mention George Barr, Alexander Outkin, Walt Beyeler, Eric Vugrin, and Laura Swiler for providing invaluable advice and guidance through the course of the project. We would also like to thank Steven Kleban, Amanda Gonzales, Trevor Manzanares, and Sarah Burwell for their assistance in managing project tasks and resources.
Modeling auditory evoked potentials to complex stimuli
Rønne, Filip Munch
The auditory evoked potential (AEP) is an electrical signal that can be recorded from electrodes attached to the scalp of a human subject when a sound is presented. The signal is considered to reflect neural activity in response to the acoustic stimulation and is a well-established clinical tool. There is a trend, both clinically and in research, towards using realistic and complex stimuli, such as speech, to electrophysiologically assess human hearing. However, to interpret AEP generation for complex sounds, the potential patterns in response to simple stimuli need to be understood. Therefore, the model was used to simulate auditory brainstem responses (ABRs) evoked by classic stimuli such as clicks, tone bursts, and chirps. The ABRs to these simple stimuli were compared to literature data, and the model was shown to predict the frequency dependence of tone-burst ABR wave-V latency and the level dependence of ABR wave...
Disentangling Pleiotropy along the Genome using Sparse Latent Variable Models
Janss, Luc
Bayesian models are described that use latent variables to model covariances. These models are flexible, scale up linearly in the number of traits, and allow separating covariance structures into different components at the trait level and at the genomic level. Multi-trait versions of BayesA (MT-BA) and the Bayesian LASSO (MT-BL) are described that model heterogeneous variance and covariance over the genome, as well as a model that directly models multiple genomic breeding values (MT-MG), representing different genomic covariance structures. The models are demonstrated on a mouse data set to model the genomic...
Modelling of variability of the chemically peculiar star phi Draconis
Prvák, Milan; Krtička, Jiří; Mikulášek, Zdeněk; Lüftinger, T
2015-01-01
Context: The presence of heavier chemical elements in stellar atmospheres influences the spectral energy distribution (SED) of stars. An uneven surface distribution of these elements, together with flux redistribution and stellar rotation, are commonly believed to be the primary causes of the variability of chemically peculiar (CP) stars. Aims: We aim to model the photometric variability of the CP star PHI Dra based on the assumption of inhomogeneous surface distribution of heavier elements and compare it to the observed variability of the star. We also intend to identify the processes that contribute most significantly to its photometric variability. Methods: We use a grid of TLUSTY model atmospheres and the SYNSPEC code to model the radiative flux emerging from the individual surface elements of PHI Dra with different chemical compositions. We integrate the emerging flux over the visible surface of the star at different phases throughout the entire rotational period to synthesise theoretical light curves of...
Complex Evaluation Model of Corporate Energy Management
Ágnes Kádár Horváth
2014-01-01
With the ever increasing energy problems at the doorstep alongside with political, economic, social and environmental challenges, conscious energy management has become of increasing importance in corporate resource management. Rising energy costs, stricter environmental and climate regulations as well as considerable changes in the energy market require companies to rationalise their energy consumption and cut energy costs. This study presents a complex evaluation model of corporate energy m...
FRAM Modelling Complex Socio-technical Systems
Hollnagel, Erik
2012-01-01
There has not yet been a comprehensive method that goes behind 'human error' and beyond the failure concept, and various complicated accidents have accentuated the need for it. The Functional Resonance Analysis Method (FRAM) fulfils that need. This book presents a detailed and tested method that can be used to model how complex and dynamic socio-technical systems work, and understand both why things sometimes go wrong but also why they normally succeed.
Noncommutative complex Grosse-Wulkenhaar model
Hounkonnou, Mahouton Norbert
2012-01-01
This paper presents an application of the noncommutative (NC) Noether theorem, given in our previous work [AIP Proc 956 (2007) 55-60], to the NC complex Grosse-Wulkenhaar model. It provides an extension of a recent work [Physics Letters B 653 (2007) 343-345]. The local conservation of energy-momentum tensors (EMTs) is recovered using improvement procedures based on Moyal algebraic techniques. Broken dilatation symmetry is discussed. NC gauge currents are also explicitly computed.
[Definition and variables of complexity of nursing care: a literature review].
Bravetti, Chiara; Cocchieri, Antonello; D'Agostino, Fabio; Vellone, Ercole; Alvaro, Rosaria; Zega, Maurizio
2016-01-01
Complexity of nursing care represents an important indicator in the planning and management of nursing resources and healthcare management. However, the term is not clearly defined in the literature. The aim of this article is to outline the main concepts associated with complexity of nursing care, trying to shed light on the different variables that constitute it. We conducted a review of the literature and selected 12 articles. The terms associated with the concept of complexity of nursing care include nursing intensity, nursing work, nursing workload, patient acuity and severity of illness. The literature review indicates that complexity of nursing care appears to be one of the variables of care intensity, the latter being defined as a commitment of care delivered to the patient. It is associated with the concepts of nursing work, nursing workload, patient acuity and severity of illness. Understanding and clarifying the concept of complexity of care is fundamental in order to measure and evaluate the real demand for nursing care by individual patients.
VAM2D: Variably saturated analysis model in two dimensions
Huyakorn, P.S.; Kool, J.B.; Wu, Y.S. (HydroGeoLogic, Inc., Herndon, VA (United States))
1991-10-01
This report documents a two-dimensional finite element model, VAM2D, developed to simulate water flow and solute transport in variably saturated porous media. Both flow and transport simulation can be handled concurrently or sequentially. The formulation of the governing equations and the numerical procedures used in the code are presented. The flow equation is approximated using the Galerkin finite element method. Nonlinear soil moisture characteristics and atmospheric boundary conditions (e.g., infiltration, evaporation, and seepage face) are treated using Picard and Newton-Raphson iterations. Hysteresis effects and anisotropy in the unsaturated hydraulic conductivity can be taken into account if needed. The contaminant transport simulation can account for advection, hydrodynamic dispersion, linear equilibrium sorption, and first-order degradation. Transport of a single component or a multi-component decay chain can be handled. The transport equation is approximated using an upstream weighted residual method. Several test problems are presented to verify the code and demonstrate its utility. These problems range from simple one-dimensional to complex two-dimensional and axisymmetric problems. This document has been produced as a user's manual. It contains detailed information on the code structure along with instructions for input data preparation and sample input and printed output for selected test problems. Also included are instructions for job set up and restarting procedures. 44 refs., 54 figs., 24 tabs.
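The two iteration schemes named here, Picard and Newton-Raphson, differ in how they linearize a nonlinear equation. An illustrative scalar sketch (not VAM2D code) on f(x) = x - cos(x) = 0:

```python
# Picard (fixed-point) vs. Newton-Raphson iteration on x = cos(x).
# Picard converges linearly; Newton converges quadratically.
import math

def picard(x=0.5, iters=50):
    for _ in range(iters):
        x = math.cos(x)                 # fixed-point update x <- g(x)
    return x

def newton(x=0.5, iters=10):
    for _ in range(iters):
        x -= (x - math.cos(x)) / (1 + math.sin(x))   # x <- x - f(x)/f'(x)
    return x

print(abs(picard() - newton()) < 1e-6)  # both reach the same root
```

Picard needs many more iterations but avoids forming the derivative, which is why codes like the one described often offer both options for the nonlinear soil-moisture terms.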
Probabilistic Analysis Techniques Applied to Complex Spacecraft Power System Modeling
Hojnicki, Jeffrey S.; Rusick, Jeffrey J.
2005-01-01
Electric power system performance predictions are critical to spacecraft, such as the International Space Station (ISS), to ensure that sufficient power is available to support all of the spacecraft's power needs. In the case of the ISS power system, analyses to date have been deterministic, meaning that each analysis produces a single-valued result for power capability because of the complexity and large size of the model. As a result, the deterministic ISS analyses did not account for the sensitivity of the power capability to uncertainties in model input variables. Over the last 10 years, the NASA Glenn Research Center has developed advanced, computationally fast, probabilistic analysis techniques and successfully applied them to large (thousands of nodes) complex structural analysis models. These same techniques were recently applied to large, complex ISS power system models. This new application enables probabilistic power analyses that account for input uncertainties and produce results that include variations caused by these uncertainties. Specifically, N&R Engineering, under contract to NASA, integrated these advanced probabilistic techniques with Glenn's internationally recognized ISS power system model, System Power Analysis for Capability Evaluation (SPACE).
Complex variable method for plane elasticity of icosahedral quasicrystals and elliptic notch problem
2008-01-01
The complex variable method for the plane elasticity theory of icosahedral quasicrystals is developed. Based on the general solution obtained previously, complex representations of the stress and displacement components of the phonon and phason fields in the quasicrystals are given. With the help of conformal transformation, an analytic solution for the elliptic notch problem of the material is presented. The solution of the Griffith crack problem can be obtained as a special case of the results. The stress intensity factor and energy release rate of the crack are also obtained.
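For orientation, the classical Griffith-crack quantities referenced here take a simple form in standard isotropic linear elasticity (the quasicrystal phonon-phason coupling terms are beyond this sketch): K_I = σ√(πa), and for plane strain G = K_I²(1 - ν²)/E. The numerical values are illustrative assumptions.

```python
# Classical mode-I stress intensity factor and energy release rate
# for a Griffith crack in an isotropic elastic plate (plane strain).
# Material and loading values are assumed for illustration.
import math

sigma = 10e6        # remote tensile stress, Pa
a = 0.01            # half crack length, m
E, nu = 70e9, 0.3   # Young's modulus (Pa) and Poisson ratio

K_I = sigma * math.sqrt(math.pi * a)        # Pa*sqrt(m)
G = K_I ** 2 * (1 - nu ** 2) / E            # J/m^2
print(round(K_I / 1e6, 2), "MPa*sqrt(m)")
```

In the quasicrystal case the abstract describes, additional phason-field terms enter the energy release rate, but the square-root crack-tip scaling is the same.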
Westneat, David F; Stewart, Ian R K; Hatch, Margret I
2009-05-01
Phenotypic plasticity is a widespread phenomenon and may have important influences on evolutionary processes. Multidimensional plasticity, in which multiple environmental variables affect a phenotype, is especially interesting if there are interactions among these variables. We used a long-term data set from House Sparrows (Passer domesticus), a multi-brooded passerine bird, to test several predictions from life-history theory regarding the shape of optimal reaction norms for clutch size. The best-fit model for variation in clutch size included three temporal variables (the order of attempt within a season, the date of those attempts, and the age of the female). Clutch size was also sensitive to the quadratics of date and female age, both of which had negative coefficients. Finally, we found that the relationship between date and clutch size became more negative as attempt order increased. These results suggest that female sparrows have a multidimensional reaction norm for clutch size that matches predictions of life-history theory but also implicates more complexity than can be captured by any single model. Analysis of the sources of variation in reaction norm height and slope was complicated by the additional environmental dimensions. We found significant individual variation in mean clutch size in all analyses, indicating that individuals differed in the height of their clutch size reaction norm. By contrast, we found no evidence of significant individual heterogeneity in the slopes of several dimensions. We assess the possible mechanisms producing this reaction norm and discuss their implications for understanding complex plasticity.
The noisy voter model on complex networks
Carro, Adrián; Miguel, Maxi San
2016-01-01
We propose a new analytical method to study stochastic, binary-state models on complex networks. Moving beyond the usual mean-field theories, this alternative approach is based on the introduction of an uncorrelated network approximation, allowing one to treat the network structure as parametric heterogeneity. As an illustration, we study the noisy voter model, a modification of the original voter model that includes random changes of state. The proposed method is able to unfold the dependence of the model not only on the mean degree (the mean-field prediction) but also on more complex averages over the degree distribution. In particular, we find that the degree heterogeneity (the variance of the underlying degree distribution) has a strong influence on the location of the critical point of a noise-induced, finite-size transition occurring in the model, on the local ordering of the system, and on the functional form of its temporal correlations. Finally, we show how this latter point opens the possibility of infe...
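A minimal simulation sketch of the noisy voter dynamics described above: at each update, a random node either flips to a random state with probability `noise` or copies a random neighbor. The network here is a small ring, an assumption for brevity; the paper studies heterogeneous networks.

```python
# Noisy voter model: voter-style copying plus random state changes.
# The ring topology and parameter values are illustrative assumptions.
import random

def simulate(n=100, noise=0.05, steps=20000, seed=3):
    rng = random.Random(seed)
    state = [rng.randint(0, 1) for _ in range(n)]
    for _ in range(steps):
        i = rng.randrange(n)
        if rng.random() < noise:
            state[i] = rng.randint(0, 1)          # random change of state
        else:
            j = rng.choice([(i - 1) % n, (i + 1) % n])
            state[i] = state[j]                   # copy a random neighbor
    return sum(state) / n                          # magnetization-like order parameter

m = simulate()
print(0.0 <= m <= 1.0)
```

With noise = 0 this reduces to the ordinary voter model, which orders completely; the noise term keeps the system from absorbing and produces the finite-size transition the abstract analyzes.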
The complex variable reproducing kernel particle method for elasto-plasticity problems
[No author listed]
2010-01-01
On the basis of the reproducing kernel particle method (RKPM) and complex variable theory, the complex variable reproducing kernel particle method (CVRKPM) is discussed in this paper. The advantage of the CVRKPM is that the correction function of a two-dimensional problem is formed with a one-dimensional basis function when the shape function is formed. The CVRKPM is then applied to solve two-dimensional elasto-plasticity problems. The Galerkin weak form is employed to obtain the discretized system equation, and the penalty method is used to apply the essential boundary conditions. The CVRKPM formulation for two-dimensional elasto-plasticity problems is thus obtained, the corresponding formulae are derived, and the Newton-Raphson method is used in the numerical implementation. Three numerical examples show that the method is effective for elasto-plasticity analysis.
A complex variable meshless local Petrov-Galerkin method for transient heat conduction problems
Wang Qi-Fang; Dai Bao-Dong; Li Zhen-Feng
2013-01-01
On the basis of the complex variable moving least-square (CVMLS) approximation, a complex variable meshless local Petrov-Galerkin (CVMLPG) method is presented for transient heat conduction problems. The method is developed based on the CVMLS approximation for constructing shape functions at scattered points, and the Heaviside step function is used as the test function in each sub-domain to avoid the need for a domain integral in the symmetric weak form. In the construction of the well-performed shape function, the trial function of a two-dimensional (2D) problem is formed with a one-dimensional (1D) basis function, thus improving computational efficiency. The numerical results are compared with the exact solutions of the problems and the finite element method (FEM). This comparison illustrates the accuracy as well as the capability of the CVMLPG method.
Yang Xiu-Li; Dai Bao-Dong; Zhang Wei-Wei
2012-01-01
Based on the complex variable moving least-square (CVMLS) approximation and a local symmetric weak form, the complex variable meshless local Petrov-Galerkin (CVMLPG) method for solving two-dimensional potential problems is presented in this paper. In the present formulation, the trial function of a two-dimensional problem is formed with a one-dimensional basis function. The number of unknown coefficients in the trial function of the CVMLS approximation is less than that in the trial function of the moving least-square (MLS) approximation. The essential boundary conditions are imposed by the penalty method. The main advantage of this approach over the conventional meshless local Petrov-Galerkin (MLPG) method is its computational efficiency. Several numerical examples are presented to illustrate the implementation and performance of the present CVMLPG method.
A study on bornological properties of the space of entire functions of several complex variables
Mushtaq Shaker Abdul-Hussein
2002-12-01
Spaces of entire functions of several complex variables occupy an important position in view of their vast applications in various branches of mathematics, for instance, classical analysis, the theory of approximation, and the theory of topological bases. With the idea of correlating entire functions with certain aspects of the theory of bases in locally convex spaces, we investigate in this paper the bornological aspects of the space $X$ of integral functions of several complex variables. By $Y$ we denote the space of all power series with positive radius of convergence at the origin. We introduce bornologies on $X$ and $Y$ and prove that $Y$ is a convex bornological vector space which is the completion of the convex bornological vector space $X$.
Efficient Construction of Discrete Adjoint Operators on Unstructured Grids Using Complex Variables
Nielsen, Eric J.; Kleb, William L.
2005-01-01
A methodology is developed and implemented to mitigate the lengthy software development cycle typically associated with constructing a discrete adjoint solver for aerodynamic simulations. The approach is based on a complex-variable formulation that enables straightforward differentiation of complicated real-valued functions. An automated scripting process is used to create the complex-variable form of the set of discrete equations. An efficient method for assembling the residual and cost function linearizations is developed. The accuracy of the implementation is verified through comparisons with a discrete direct method as well as a previously developed handcoded discrete adjoint approach. Comparisons are also shown for a large-scale configuration to establish the computational efficiency of the present scheme. To ultimately demonstrate the power of the approach, the implementation is extended to high temperature gas flows in chemical nonequilibrium. Finally, several fruitful research and development avenues enabled by the current work are suggested.
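The "complex-variable formulation that enables straightforward differentiation" referred to here is the complex-step derivative: for a real analytic function, f'(x) ≈ Im f(x + ih)/h, with no subtractive cancellation. A minimal sketch follows; the test functions and step size are illustrative, not taken from the paper.

```python
import cmath
import math

def complex_step_derivative(f, x, h=1e-30):
    """Complex-step derivative: f'(x) ~ Im f(x + ih) / h.

    Unlike a finite difference, there is no subtraction of nearly equal
    numbers, so h can be made tiny and the result is accurate to
    near machine precision for real-valued analytic functions.
    """
    return f(complex(x, h)).imag / h

# Illustrative: d/dx [exp(x) * sin(x)] = exp(x) * (sin(x) + cos(x))
f = lambda z: cmath.exp(z) * cmath.sin(z)
approx = complex_step_derivative(f, 1.0)
exact = math.exp(1.0) * (math.sin(1.0) + math.cos(1.0))
```

The same idea, applied to an entire flow solver promoted to complex arithmetic, yields the exact linearizations the abstract describes, which is why an automated scripting pass over the discrete equations suffices.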
A new complex variable meshless method for transient heat conduction problems
Wang Jian-Fei; Cheng Yu-Min
2012-01-01
In this paper, based on the improved complex variable moving least-square (ICVMLS) approximation, a new complex variable meshless method (CVMM) for two-dimensional (2D) transient heat conduction problems is presented. The variational method is employed to obtain the discrete equations, and the essential boundary conditions are imposed by the penalty method. As transient heat conduction problems are time dependent, the Crank-Nicolson difference scheme for two-point boundary value problems is selected for the time discretization. The corresponding formulae of the CVMM for 2D heat conduction problems are then obtained. In order to demonstrate the applicability of the proposed method, numerical examples are given to show the high convergence rate, good accuracy, and high efficiency of the CVMM presented in this paper.
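The Crank-Nicolson time stepping selected above can be illustrated on the simplest transient heat conduction problem. This sketch uses a plain finite-difference grid in place of the paper's meshless spatial discretization, so the grid, parameters, and function name are assumptions for illustration only.

```python
import numpy as np

def crank_nicolson_heat(u0, alpha, dx, dt, steps):
    """Crank-Nicolson stepping for u_t = alpha * u_xx on a 1-D grid,
    holding the two endpoint values fixed (Dirichlet boundaries)."""
    n = len(u0)
    r = alpha * dt / (2.0 * dx ** 2)
    # Average of implicit and explicit Euler: (I - r*D2) u_new = (I + r*D2) u_old
    A = (1 + 2 * r) * np.eye(n) - r * np.eye(n, k=1) - r * np.eye(n, k=-1)
    B = (1 - 2 * r) * np.eye(n) + r * np.eye(n, k=1) + r * np.eye(n, k=-1)
    for M in (A, B):                     # identity rows pin the boundary values
        M[0, :] = 0.0; M[0, 0] = 1.0
        M[-1, :] = 0.0; M[-1, -1] = 1.0
    u = np.array(u0, dtype=float)
    for _ in range(steps):
        u = np.linalg.solve(A, B @ u)
    return u

x = np.linspace(0.0, 1.0, 51)
u0 = np.sin(np.pi * x)                   # exact solution decays as exp(-pi^2 * t)
u = crank_nicolson_heat(u0, alpha=1.0, dx=x[1] - x[0], dt=1e-3, steps=100)
```

Crank-Nicolson is second-order accurate in time and unconditionally stable, which is why it is a common default for transient heat conduction regardless of the spatial method.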
Recurrence Plot Based Measures of Complexity and its Application to Heart Rate Variability Data
Marwan, N; Meyerfeldt, U; Schirdewan, A; Kurths, J
2002-01-01
In complex systems the knowledge of transitions between regular, laminar or chaotic behavior is essential to understand the processes going on there. Linear approaches are often not sufficient to describe these processes and several nonlinear methods require rather long time observations. To overcome these difficulties, we propose measures of complexity based on vertical structures in recurrence plots and apply them to the logistic map as well as to heart rate variability data. For the logistic map these measures enable us to detect transitions between chaotic and periodic states, as well as to identify additional laminar states, i.e. chaos-chaos transitions. Traditional recurrence quantification analysis fails to detect these latter transitions. Applying our new measures to the heart rate variability data, we are able to detect and quantify laminar phases before a life-threatening cardiac arrhythmia and, thus, to enable a prediction of such an event. Our findings could be of importance for the therapy of mal...
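A recurrence plot and a vertical-structure measure of the kind this abstract builds on can be sketched compactly, using the logistic map as in the paper's own illustration. The threshold, trajectory length, and minimum line length below are illustrative choices, not the authors' settings.

```python
import numpy as np

def recurrence_matrix(x, eps):
    """Binary recurrence plot: R[i, j] = 1 when |x_i - x_j| < eps."""
    d = np.abs(x[:, None] - x[None, :])
    return (d < eps).astype(int)

def laminarity(R, vmin=2):
    """Fraction of recurrent points lying in vertical lines of length
    >= vmin; vertical structures signal laminar (intermittent) phases."""
    n = R.shape[0]
    in_lines = total = 0
    for j in range(n):
        run = 0
        for v in list(R[:, j]) + [0]:    # trailing 0 flushes the last run
            if v:
                run += 1
            else:
                total += run
                if run >= vmin:
                    in_lines += run
                run = 0
    return in_lines / total if total else 0.0

# Logistic map trajectory in the chaotic regime (illustrative parameters)
x = np.empty(500)
x[0] = 0.4
for t in range(499):
    x[t + 1] = 3.9 * x[t] * (1 - x[t])
R = recurrence_matrix(x, eps=0.1)
lam = laminarity(R)
```

Tracking such a measure in a sliding window over the control parameter is how chaos-chaos transitions, invisible to diagonal-line recurrence quantification, become detectable.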
Quantitative modeling of degree-degree correlation in complex networks
Niño, Alfonso
2013-01-01
This paper presents an approach to the modeling of degree-degree correlation in complex networks. Thus, a simple function, Δ(k', k), describing specific degree-to-degree correlations is considered. The function is well suited to graphically depict assortative and disassortative variations within networks. To quantify degree correlation variations, the joint probability distribution between nodes with arbitrary degrees, P(k', k), is used. Introduction of the end-degree probability function as a basic variable allows using group theory to derive mathematical models for P(k', k). In this form, an expression representing a family of seven models is constructed with the needed normalization conditions. Applied to Δ(k', k), this expression predicts a nonuniform distribution of degree correlation in networks, organized in two assortative and two disassortative zones. This structure is actually observed in a set of four modeled, technological, social, and biological networks. A regression study performed...
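The joint distribution P(k', k) that the paper models is, empirically, just the distribution of degree pairs across edges, and its Pearson correlation is Newman's assortativity coefficient. A small sketch (function names are illustrative; this computes the empirical quantities, not the paper's group-theoretic models):

```python
import numpy as np

def edge_degree_pairs(edges):
    """Degree pairs (k', k) at the two ends of each edge, counting each
    undirected edge in both directions; these samples define P(k', k)."""
    deg = {}
    for u, v in edges:
        deg[u] = deg.get(u, 0) + 1
        deg[v] = deg.get(v, 0) + 1
    pairs = []
    for u, v in edges:
        pairs.append((deg[u], deg[v]))
        pairs.append((deg[v], deg[u]))
    return pairs

def degree_assortativity(edges):
    """Pearson correlation of degrees across edges (Newman's r):
    r > 0 assortative, r < 0 disassortative."""
    pairs = np.array(edge_degree_pairs(edges), dtype=float)
    return np.corrcoef(pairs[:, 0], pairs[:, 1])[0, 1]

# A star graph is maximally disassortative: the hub links only to leaves.
star = [(0, i) for i in range(1, 6)]
r = degree_assortativity(star)
```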
Heart rate recovery after exercise: relations to heart rate variability and complexity
Javorka M; Zila I.; Balhárek T.; Javorka K
2002-01-01
Physical exercise is associated with parasympathetic withdrawal and increased sympathetic activity resulting in heart rate increase. The rate of post-exercise cardiodeceleration is used as an index of cardiac vagal reactivation. Analysis of heart rate variability (HRV) and complexity can provide useful information about autonomic control of the cardiovascular system. The aim of the present study was to ascertain the association between heart rate decrease after exercise and HRV parameters. He...
Describing Ecosystem Complexity through Integrated Catchment Modeling
Shope, C. L.; Tenhunen, J. D.; Peiffer, S.
2011-12-01
Land use and climate change have been implicated in reduced ecosystem services (i.e., high-quality water yield, biodiversity, and agricultural yield). The prediction of ecosystem services expected under future land use decisions and changing climate conditions has become increasingly important. Complex policy and management decisions require the integration of physical, economic, and social data over several scales to assess effects on water resources and ecology. Field-based meteorology, hydrology, soil physics, plant production, solute and sediment transport, economic, and social behavior data were measured in a South Korean catchment. A variety of models are being used to simulate plot and field scale experiments within the catchment. Results from each of the local-scale models provide identification of sensitive, local-scale parameters which are then used as inputs into a large-scale watershed model. We used the spatially distributed SWAT model to synthesize the experimental field data throughout the catchment. Our approach was to use the range of local-scale model parameter results to define the sensitivity and uncertainty in the large-scale watershed model. Further, this example shows how research can be structured for scientific results describing complex ecosystems and landscapes where cross-disciplinary linkages benefit the end result. The field-based and modeling framework described is being used to develop scenarios to examine spatial and temporal changes in land use practices and climatic effects on water quantity, water quality, and sediment transport. Development of accurate modeling scenarios requires understanding the social relationship between individual and policy driven land management practices and the value of sustainable resources to all shareholders.
Complex Constructivism: A Theoretical Model of Complexity and Cognition
Doolittle, Peter E.
2014-01-01
Education has long been driven by its metaphors for teaching and learning. These metaphors have influenced both educational research and educational practice. Complexity and constructivism are two theories that provide functional and robust metaphors. Complexity provides a metaphor for the structure of myriad phenomena, while constructivism…
Dustin J.WILGERS; Eileen A.HEBETS
2011-01-01
Effective signal transmission is essential for communication. In environments where signal transmission is highly variable, signalers may utilize complex signals, which incorporate multiple components and modalities, to maintain effective communication. Male Rabidosa rabida wolf spiders produce complex courtship signals consisting of both visual and seismic components. We test the hypothesis that the complex signaling of R. rabida contributes to male reproductive success in variable signaling environments. We first examine the condition-dependence of foreleg ornamentation (a presumed visual signal) and seismic signal components and find that both may provide potentially redundant information on foraging history. Next, we assessed reproductive success across manipulated signaling environments that varied in the effectiveness of visual and/or seismic signal transmission. In environments where only one signal could be successfully transmitted (e.g., visual or seismic), pairs were still able to successfully copulate. Additionally, we found that males altered their courtship display depending on the current signaling environment. Specifically, males reduced their use of a visual display component in signaling environments where visual signal transmission was ablated. Incorporating signals in multiple modalities not only enables R. rabida males to maintain copulation success across variable signaling environments, but it also enables males to adjust their composite courtship display to current signaling conditions.
Modeling urban expansion by using variable weights logistic cellular automata
Shu, Bangrong; Bakker, Martha M.; Zhang, Honghui; Li, Yongle; Qin, Wei; Carsjens, Gerrit J.
2017-01-01
Simulation models based on cellular automata (CA) are widely used for understanding and simulating complex urban expansion process. Among these models, logistic CA (LCA) is commonly adopted. However, the performance of LCA models is often limited because the fixed coefficients obtained from binary
Financial applications of a Tabu search variable selection model
Zvi Drezner
2001-01-01
We illustrate how a comparatively new technique, a Tabu search variable selection model [Drezner, Marcoulides and Salhi (1999)], can be applied efficiently within finance when the researcher must select a subset of variables from among the whole set of explanatory variables under consideration. Several types of problems in finance, including corporate and personal bankruptcy prediction, mortgage and credit scoring, and the selection of variables for the Arbitrage Pricing Model, require the researcher to select a subset of variables from a larger set. In order to demonstrate the usefulness of the Tabu search variable selection model, we: (1) illustrate its efficiency in comparison to the main alternative search procedures, such as stepwise regression and the Maximum R2 procedure, and (2) show how a version of the Tabu search procedure may be implemented when attempting to predict corporate bankruptcy. We accomplish (2) by indicating that a Tabu search procedure increases the predictability of corporate bankruptcy by up to 10 percentage points in comparison to Altman's (1968) Z-Score model.
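Tabu search for variable selection can be sketched as a local search over fixed-size subsets: each move swaps one variable in and one out, the best available move is always taken (even if non-improving), and recently removed variables are temporarily forbidden to prevent cycling. This is a toy version on synthetic data, not the Drezner et al. implementation; the move rule, tabu tenure, and data are all illustrative.

```python
import numpy as np

def r_squared(X, y, subset):
    """R^2 of an OLS fit of y on the chosen columns plus an intercept."""
    Xs = np.column_stack([np.ones(len(y))] + [X[:, j] for j in subset])
    beta, *_ = np.linalg.lstsq(Xs, y, rcond=None)
    resid = y - Xs @ beta
    return 1.0 - resid @ resid / ((y - y.mean()) @ (y - y.mean()))

def tabu_select(X, y, k, iters=100, tabu_len=5, seed=0):
    """Search k-variable subsets; a swapped-out variable is tabu
    (barred from re-entry) for tabu_len iterations."""
    rng = np.random.default_rng(seed)
    p = X.shape[1]
    current = list(rng.choice(p, size=k, replace=False))
    best, best_r2 = current[:], r_squared(X, y, current)
    tabu = {}
    for it in range(iters):
        moves = [(out, inn) for out in current for inn in range(p)
                 if inn not in current and tabu.get(inn, -1) < it]
        if not moves:
            break
        r2, out, inn = max(
            (r_squared(X, y, [v for v in current if v != out] + [inn]), out, inn)
            for out, inn in moves)
        current = [v for v in current if v != out] + [inn]
        tabu[out] = it + tabu_len
        if r2 > best_r2:                 # keep the best subset ever seen
            best, best_r2 = current[:], r2
    return sorted(best), best_r2

rng = np.random.default_rng(42)
X = rng.normal(size=(120, 10))
y = 2 * X[:, 1] - 3 * X[:, 4] + 0.1 * rng.normal(size=120)  # true vars: 1, 4
subset, r2 = tabu_select(X, y, k=2)
```

Accepting the best non-improving move while remembering the global best is what lets tabu search escape the local optima where stepwise regression stalls.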
Variability in a Community-Structured SIS Epidemiological Model.
Hiebeler, David E; Rier, Rachel M; Audibert, Josh; LeClair, Phillip J; Webber, Anna
2015-04-01
We study an SIS epidemiological model of a population partitioned into groups referred to as communities, households, or patches. The system is studied using stochastic spatial simulations, as well as a system of ordinary differential equations describing moments of the distribution of infectious individuals. The ODE model explicitly includes the population size, as well as the variability in infection levels among communities and the variability among stochastic realizations of the process. Results are compared with an earlier moment-based model which assumed infinite population size and no variance among realizations of the process. We find that although the amount of localized (as opposed to global) contact in the model has little effect on the equilibrium infection level, it does affect both the timing and magnitude of both types of variability in infection level.
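The community-structured SIS dynamics with mixed local and global contact can be sketched with a stochastic simulation that tracks the between-community variability the abstract studies. The synchronous update rule, parameters, and function name here are illustrative assumptions, not the authors' moment-equation model.

```python
import random

def sis_patch_step(I, N, beta, gamma, phi):
    """One synchronous step of a stochastic SIS model on patches.

    I[c]: infected count in community c, each of size N.
    beta, gamma: per-step infection and recovery probabilities.
    phi: fraction of contacts that are local (within the community).
    """
    total_prev = sum(I) / (N * len(I))
    new_I = []
    for i in I:
        # force of infection mixes local and global prevalence
        prev = phi * i / N + (1 - phi) * total_prev
        infections = sum(random.random() < beta * prev for _ in range(N - i))
        recoveries = sum(random.random() < gamma for _ in range(i))
        new_I.append(i + infections - recoveries)
    return new_I

random.seed(3)
I = [5] * 20                             # 20 communities of N = 100
for _ in range(300):
    I = sis_patch_step(I, N=100, beta=0.4, gamma=0.1, phi=0.8)
mean_I = sum(I) / len(I)
var_I = sum((i - mean_I) ** 2 for i in I) / len(I)  # between-community variance
```

With beta/gamma = 4, the deterministic endemic level is N(1 - gamma/beta) = 75 per community; `var_I` quantifies the variability among communities that the moment-closure ODEs of the paper model explicitly.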
A 3-mode, Variable Velocity Jet Model for HH 34
Raga, A.; Noriega-Crespo, A.
1998-01-01
Variable ejection velocity jet models can qualitatively explain the appearance of successive working surfaces in Herbig-Haro (HH) jets. This paper presents an attempt to explore which features of the HH 34 jet can indeed be reproduced by such a model.
Manifest Variable Granger Causality Models for Developmental Research: A Taxonomy
von Eye, Alexander; Wiedermann, Wolfgang
2015-01-01
Granger models are popular when it comes to testing hypotheses that relate series of measures causally to each other. In this article, we propose a taxonomy of Granger causality models. The taxonomy results from crossing the four variables Order of Lag, Type of (Contemporaneous) Effect, Direction of Effect, and Segment of Dependent Series…
An Alternative Approach for Nonlinear Latent Variable Models
Mooijaart, Ab; Bentler, Peter M.
2010-01-01
In the last decades there has been an increasing interest in nonlinear latent variable models. Since the seminal paper of Kenny and Judd, several methods have been proposed for dealing with these kinds of models. This article introduces an alternative approach. The methodology involves fitting some third-order moments in addition to the means and…
Modeling, analysis and control of a variable geometry actuator
Evers, W.J.; Knaap, A. van der; Besselink, I.J.M.; Nijmeijer, H.
2008-01-01
A new design of variable geometry force actuator is presented in this paper. Based upon this design, a model is derived which is used for steady-state analysis, as well as controller design in the presence of friction. The controlled actuator model is finally used to evaluate the power consumption u
Modelling avalanche danger and understanding snow depth variability
2010-01-01
This thesis addresses the causes of avalanche danger at a regional scale. Modelled snow stratigraphy variables were linked to [1] forecasted avalanche danger and [2] observed snowpack stability. Spatial variability of snowpack parameters in a region is an additional important factor that influences the avalanche danger. Snow depth and its change during individual snow fall periods are snowpack parameters which can be measured at a high spatial resolution. Hence, the spatial distribution of sn...
Wind and diffusion modeling for complex terrain
Cox, R.M.; Sontowski, J.; Fry, R.N. Jr. [and others]
1996-12-31
Atmospheric transport and dispersion over complex terrain were investigated. Meteorological and sulfur hexafluoride (SF6) concentration data were collected and used to evaluate the performance of a transport and diffusion model coupled with a mass consistency wind field model. Meteorological data were collected throughout April 1995. Both meteorological and concentration data were measured in December 1995. The data included 11 to 15 surface stations, 1 to 3 upper air stations, and 1 mobile profiler. A range of conditions was encountered, including inversion and post-inversion breakup, light to strong winds, and a broad distribution of wind directions. The models used included the SCIPUFF (Second-order Closure Integrated Puff) transport and diffusion model and the MINERVE mass consistency wind model. Evaluation of the models was focused primarily on their effectiveness as a short term (one to four hours) predictive tool. These studies showed how they can be used to help direct emergency response following a hazardous material release. For purposes of the experiments, the models were used to direct the deployment of mobile sensors intended to intercept and measure tracer clouds.
Marcovitz, Alan B., Ed.
Described is the use of an analog/hybrid computer installation to study those physical phenomena that can be described through the evaluation of an algebraic function of a complex variable. This is an alternative way to study such phenomena on an interactive graphics terminal. The typical problem used, involving complex variables, is that of…
Structured analysis and modeling of complex systems
Strome, David R.; Dalrymple, Mathieu A.
1992-01-01
The Aircrew Evaluation Sustained Operations Performance (AESOP) facility at Brooks AFB, Texas, combines the realism of an operational environment with the control of a research laboratory. In recent studies we collected extensive data from the Airborne Warning and Control Systems (AWACS) Weapons Directors subjected to high and low workload Defensive Counter Air Scenarios. A critical and complex task in this environment involves committing a friendly fighter against a hostile fighter. Structured Analysis and Design techniques and computer modeling systems were applied to this task as tools for analyzing subject performance and workload. This technology is being transferred to the Man-Systems Division of NASA Johnson Space Center for application to complex mission related tasks, such as manipulating the Shuttle grappler arm.
Nisha Haridas
2016-04-01
A low-complexity digital hearing aid is designed using a set of subband filters for various audiograms. It is important for the device to be built from simple hardware so that it is less bulky. Hence, a low-complexity design of a reconfigurable filter is proposed in this paper. The tunable filter structure is designed using a Farrow-based variable bandwidth filter. The coefficients of the filter are expressed in canonic signed digit format. The performance can be enhanced using an optimization algorithm. Here, we explore the strength of hybrid evolutionary algorithms and compare their various combinations to select a proper coefficient representation for the Farrow-based filter, which results in a low-complexity implementation.
Superelement Verification in Complex Structural Models
B. Dupont
2008-01-01
The objective of this article is to propose decision indicators to guide the analyst in the optimal definition of an ensemble of superelements in a complex structural assembly. These indicators are constructed based on comparisons between the unreduced physical model and the approximate solution provided by a nominally reduced superelement model. First, the low contribution substructure slave modes are filtered. Then, the minimum dynamical residual expansion is used to localize the superelements which are the most responsible for the response prediction errors. Moreover, it is shown that static residual vectors, which are a natural result of these calculations, can be included to represent the contribution of important truncated slave modes and consequently correct the deficient superelements. The proposed methodology is illustrated on a subassembly of an aeroengine model.
Seyfried, M. S.; Link, T. E.
2013-12-01
Soil temperature (Ts) exerts critical environmental controls on hydrologic and biogeochemical processes. Rates of carbon cycling, mineral weathering, infiltration and snow melt are all influenced by Ts. Although broadly reflective of the climate, Ts is sensitive to local variations in cover (vegetative, litter, snow), topography (slope, aspect, position), and soil properties (texture, water content), resulting in a spatially and temporally complex distribution of Ts across the landscape. Understanding and quantifying the processes controlled by Ts requires an understanding of that distribution. Relatively few spatially distributed field Ts data exist, partly because traditional Ts data are point measurements. A relatively new technology, fiber optic distributed temperature system (FO-DTS), has the potential to provide such data but has not been rigorously evaluated in the context of remote, long term field research. We installed FO-DTS in a small experimental watershed in the Reynolds Creek Experimental Watershed (RCEW) in the Owyhee Mountains of SW Idaho. The watershed is characterized by complex terrain and a seasonal snow cover. Our objectives are to: (i) evaluate the applicability of fiber optic DTS to remote field environments and (ii) describe the spatial and temporal variability of soil temperature in complex terrain influenced by a variable snow cover. We installed fiber optic cable at a depth of 10 cm in contrasting snow accumulation and topographic environments and monitored temperature along 750 m with DTS. We found that the DTS can provide accurate Ts data (±0.4 °C) that resolves Ts changes of about 0.03 °C at a spatial scale of 1 m with occasional calibration under conditions with an ambient temperature range of 50 °C. We note that there are site-specific limitations related to cable installation and destruction by local fauna. The FO-DTS provides unique insight into the spatial and temporal variability of Ts in a landscape. We found strong seasonal
Internal variability of a 3-D ocean model
Bjarne Büchmann
2016-11-01
The Defence Centre for Operational Oceanography runs operational forecasts for the Danish waters. The core setup is a 60-layer baroclinic circulation model based on the General Estuarine Transport Model code. At intervals, the model setup is tuned to improve ‘model skill’ and overall performance. It has been an area of concern that the uncertainty inherent to the stochastic/chaotic nature of the model is unknown. Thus, it is difficult to state with certainty that a particular setup is improved, even if the computed model skill increases. This issue also extends to the cases where the model is tuned during an iterative process, in which model results are fed back to improve model parameters, such as bathymetry. An ensemble of identical model setups with slightly perturbed initial conditions is examined. It is found that the initial perturbation causes the models to deviate from each other exponentially fast, causing differences of several PSUs and several kelvin within a few days of simulation. The ensemble is run for a full year, and the long-term variability of salinity and temperature is found for different regions within the modelled area. Further, the developing time scale is estimated for each region, and great regional differences are found in both variability and time scale. It is observed that periods with very high ensemble variability are typically short-term and spatially limited events. A particular event is examined in detail to shed light on how the ensemble ‘behaves’ in periods with large internal model variability. It is found that the ensemble does not seem to follow any particular stochastic distribution: both the ensemble variability (standard deviation or range) as well as the ensemble distribution within that range seem to vary with time and place. Further, it is observed that a large spatial variability due to mesoscale features does not necessarily correlate to large ensemble variability. These findings bear
Analysis models for variables associated with breastfeeding duration.
dos S Neto, Edson Theodoro; Zandonade, Eliana; Emmerich, Adauto Oliveira
2013-09-01
OBJECTIVE To analyze the factors associated with breastfeeding duration by two statistical models. METHODS A population-based cohort study was conducted with 86 mothers and newborns from two areas primarily covered by the National Health System, with high rates of infant mortality, in Vitória, Espírito Santo, Brazil. During 30 months, 67 (78%) children and mothers were visited seven times at home by trained interviewers, who filled out survey forms. Data on food and sucking habits, socioeconomic and maternal characteristics were collected. Variables were analyzed by Cox regression models, considering duration of breastfeeding as the dependent variable, and by logistic regression (the dependent variable was the presence of breastfeeding at different post-natal ages). RESULTS In the logistic regression model, pacifier sucking (adjusted Odds Ratio: 3.4; 95%CI 1.2-9.55) and bottle feeding (adjusted Odds Ratio: 4.4; 95%CI 1.6-12.1) increased the chance of weaning a child before one year of age. Variables associated with breastfeeding duration in the Cox regression model were: pacifier sucking (adjusted Hazard Ratio 2.0; 95%CI 1.2-3.3) and bottle feeding (adjusted Hazard Ratio 2.0; 95%CI 1.2-3.5). However, protective factors (maternal age and family income) differed between both models. CONCLUSIONS Risk and protective factors associated with cessation of breastfeeding may be analyzed by different models of statistical regression. Cox regression models are adequate to analyze such factors in longitudinal studies.
Modeling the human prothrombinase complex components
Orban, Tivadar
Thrombin generation is the culminating stage of the blood coagulation process. Thrombin is obtained from prothrombin (the substrate) in a reaction catalyzed by the prothrombinase complex (the enzyme). The prothrombinase complex is composed of factor Xa (the enzyme) and factor Va (the cofactor) associated in the presence of calcium ions on a negatively charged cell membrane. Factor Xa, alone, can activate prothrombin to thrombin; however, the rate of conversion is not physiologically relevant for survival. Incorporation of factor Va into prothrombinase accelerates the rate of prothrombinase activity 300,000-fold, and provides the physiological pathway of thrombin generation. The long-term goal of the current proposal is to provide the necessary support for advancing studies to design potential drug candidates that may be used to avoid development of deep venous thrombosis in high-risk patients. The short-term goals of the present proposal are (1) to propose a model of a mixed asymmetric phospholipid bilayer, (2) to expand the incomplete model of human coagulation factor Va and study its interaction with the phospholipid bilayer, (3) to create a homology model of prothrombin, and (4) to study the dynamics of interaction between prothrombin and the phospholipid bilayer.
Modelling of an hydraulic excavator using simplified refined instrumental variable(SRIV)algorithm
[No author listed]
2007-01-01
Mathematical hydraulic system models are usually established from physical laws, a process hampered by complex modelling procedures and by the low reliability and practicality caused by large uncertainties. Instead, a novel modelling method for the highly nonlinear system of a hydraulic excavator is presented. Based on data collected in experiments driving the excavator's arms, a data-based excavator dynamic model using Simplified Refined Instrumental Variable (SRIV) identification and estimation algorithms is established. The validity of the proposed data-based model is indirectly demonstrated by the performance of computer simulations and by real machine motion control experiments.
Mujica-Parodi, L R; Yeragani, Vikram; Malaspina, Dolores
2005-01-01
Heart rate variability (HRV) reflects functioning of the autonomic nervous system and possibly also regulation by the neural limbic system, abnormalities of which have both figured prominently in various etiological models of schizophrenia, particularly those that address patients' vulnerability to stress in connection to psychosis onset and exacerbation. This study provides data on cardiac functioning in a sample of schizophrenia patients who were either medication free or on atypical antipsychotics, as well as cardiac data on matched healthy controls. We included a medication-free group to investigate whether abnormalities in HRV previously reported in the literature and associated with atypical antipsychotics were solely the effect of medications or whether they might be a feature of the illness (or psychosis) itself. We collected 24-hour ECGs on 19 patients and 24 controls. Of the patients, 9 were medication free and 10 were on atypical antipsychotics. All subject groups were matched for age and gender. Patient groups showed equivalent symptom severity and type, as well as duration of illness. We analyzed the data using nonlinear complexity (symbolic dynamic) HRV analyses as well as standard and relative spectral analyses. For the medication-free patients as compared to the healthy controls, our data show decreased R-R intervals during sleep, and abnormal suppression of all frequency ranges, but particularly the low frequency range, which persisted even after adjusting the spectral data for the mean R-R interval. This effect was exacerbated for patients on atypical antipsychotics. Likewise, nonlinear complexity analysis showed significantly impaired HRV for medication-free patients that was exacerbated in the patients on atypical antipsychotics. Altogether, the data suggest a pattern of significantly decreased cardiac vagal function in patients with schizophrenia as compared to healthy controls, apart from and beyond any differences due to medication side effects.
Wei-feng LU; Mi LIN; Ling-ling SUN
2009-01-01
Neurons with complex-valued weights have stronger capability because of their multi-valued threshold logic. Neurons with such features may be suitable for the solution of different kinds of problems, including associative memory, image recognition and digital logical mapping. In this paper, robustness (tolerance) is newly defined for this kind of neuron, according to both their mathematical model and the perceptron neuron's definition of robustness. Also, the most robust design for basic digital logics of multiple variables is proposed based on these robust neurons. Our proof procedure shows that in the robust design each weight only takes the value of i or -i, while the value of the threshold depends on the number of variables. The results demonstrate the validity and simplicity of using robust neurons for realizing arbitrary digital logical functions.
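The extra capability of complex-weighted, sector-based threshold logic can be sketched with a single multi-valued neuron solving XOR, a function no single real-valued perceptron can realize. The weights (1, i), the ±1 input encoding, and the four-sector decision rule below are illustrative choices, not the exact construction proved in the paper:

```python
import cmath

def mvn_xor(x1, x2):
    # Multi-valued neuron sketch: inputs encoded as +1 (False) / -1 (True).
    # Weighted sum lands in one of four 90-degree angular sectors of the
    # complex plane; the parity of the sector index realizes XOR.
    z = 1 * x1 + 1j * x2                      # illustrative weights (1, i)
    angle = cmath.phase(z) % (2 * cmath.pi)   # principal argument in [0, 2*pi)
    sector = int(angle // (cmath.pi / 2))     # sector index 0..3
    return sector % 2 == 1                    # True iff inputs differ

print(mvn_xor(1, 1), mvn_xor(-1, 1), mvn_xor(1, -1), mvn_xor(-1, -1))
```

A single real perceptron separates the plane with one line; the sector rule partitions it into alternating angular regions, which is what makes XOR reachable with one neuron.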
Cosmological Models with Variable Deceleration Parameter in Lyra's Manifold
Pradhan, A; Singh, C B
2006-01-01
FRW models of the universe have been studied in the cosmological theory based on Lyra's manifold. A new class of exact solutions has been obtained by considering a time-dependent displacement field for a variable deceleration parameter, from which three models of the universe are derived: (i) exponential, (ii) polynomial, and (iii) sinusoidal. The behaviour of these models of the universe is also discussed. Finally, some possibilities for further problems and their investigation are pointed out.
Chaos from simple models to complex systems
Cencini, Massimo; Vulpiani, Angelo
2010-01-01
Chaos: from simple models to complex systems aims to guide science and engineering students through chaos and nonlinear dynamics from classical examples to the most recent fields of research. The first part, intended for undergraduate and graduate students, is a gentle and self-contained introduction to the concepts and main tools for the characterization of deterministic chaotic systems, with emphasis on statistical approaches. The second part can be used as a reference by researchers as it focuses on more advanced topics including the characterization of chaos with tools of information theory.
Understanding and forecasting polar stratospheric variability with statistical models
C. Blume
2012-02-01
The variability of the north-polar stratospheric vortex is a prominent aspect of the middle atmosphere. This work investigates a wide class of statistical models with respect to their ability to model geopotential and temperature anomalies, representing variability in the polar stratosphere. Four partly nonstationary, nonlinear models are assessed: linear discriminant analysis (LDA); a cluster method based on finite elements (FEM-VARX); a neural network, namely a multi-layer perceptron (MLP); and support vector regression (SVR). These methods model time series by incorporating all significant external factors simultaneously, including ENSO, QBO, the solar cycle, volcanoes, etc., to then quantify their statistical importance. We show that variability in reanalysis data from 1980 to 2005 is successfully modeled. FEM-VARX and MLP even satisfactorily forecast the period from 2005 to 2011. However, internal variability remains that cannot be statistically forecasted, such as the unexpected major warming in January 2009. Finally, the statistical model with the best generalization performance is used to predict a vortex breakdown in late January, early February 2012.
On Complexity of the Quantum Ising Model
Bravyi, Sergey; Hastings, Matthew
2017-01-01
We study complexity of several problems related to the Transverse field Ising Model (TIM). First, we consider the problem of estimating the ground state energy known as the Local Hamiltonian Problem (LHP). It is shown that the LHP for TIM on degree-3 graphs is equivalent modulo polynomial reductions to the LHP for general k-local `stoquastic' Hamiltonians with any constant {k ≥ 2}. This result implies that estimating the ground state energy of TIM on degree-3 graphs is a complete problem for the complexity class {StoqMA} —an extension of the classical class {MA}. As a corollary, we complete the complexity classification of 2-local Hamiltonians with a fixed set of interactions proposed recently by Cubitt and Montanaro. Secondly, we study quantum annealing algorithms for finding ground states of classical spin Hamiltonians associated with hard optimization problems. We prove that the quantum annealing with TIM Hamiltonians is equivalent modulo polynomial reductions to the quantum annealing with a certain subclass of k-local stoquastic Hamiltonians. This subclass includes all Hamiltonians representable as a sum of a k-local diagonal Hamiltonian and a 2-local stoquastic Hamiltonian.
Estimation of the Heteroskedastic Canonical Contagion Model with Instrumental Variables
2016-01-01
Knowledge of contagion among economies is a relevant issue in economics. The canonical model of contagion is an alternative in this case. Given the existence of endogenous variables in the model, instrumental variables can be used to decrease the bias of the OLS estimator. In the presence of heteroskedastic disturbances this paper proposes the use of conditional volatilities as instruments. Simulation is used to show that the homoscedastic and heteroskedastic estimators which use them as instruments have small bias. These estimators are preferable in comparison with the OLS estimator and their asymptotic distribution can be used to construct confidence intervals. PMID:28030628
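The bias reduction that instrumental variables deliver over OLS can be sketched on simulated data. A generic instrument stands in here for the conditional-volatility instruments the paper proposes, and all coefficients are invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 5000
z = rng.normal(size=n)                       # instrument (stand-in for a volatility proxy)
e = rng.normal(size=n)                       # structural error
x = 0.8 * z + 0.5 * e + rng.normal(size=n)   # endogenous regressor: correlated with e
y = 2.0 * x + e                              # true coefficient beta = 2

beta_ols = (x @ y) / (x @ x)                 # biased upward, since cov(x, e) > 0
beta_iv = (z @ y) / (z @ x)                  # IV estimator: z is uncorrelated with e
print(beta_ols, beta_iv)
```

With cov(x, e) = 0.5 and var(x) ≈ 1.89, OLS converges to roughly 2.26, while the IV estimate concentrates near the true value 2.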
The Properties of Model Selection when Retaining Theory Variables
Hendry, David F.; Johansen, Søren
Economic theories are often fitted directly to data to avoid possible model selection biases. We show that embedding a theory model that specifies the correct set of m relevant exogenous variables, x_t, within the larger set of m+k candidate variables, (x_t, w_t), then selection over the second set by their statistical significance can be undertaken without affecting the estimator distribution of the theory parameters. This strategy returns the theory-parameter estimates when the theory is correct, yet protects against the theory being under-specified because some w_t are relevant.
Wind and Diffusion Modeling for Complex Terrain.
Cox, Robert M.; Sontowski, John; Fry, Richard N., Jr.; Dougherty, Catherine M.; Smith, Thomas J.
1998-10-01
Atmospheric transport and dispersion over complex terrain were investigated. Meteorological and sulfur hexafluoride (SF6) concentration data were collected and used to evaluate the performance of a transport and diffusion model coupled with a mass consistency wind field model. Meteorological data were collected throughout April 1995. Both meteorological and plume location and concentration data were measured in December 1995. The meteorological data included measurements taken at 11-15 surface stations, one to three upper-air stations, and one mobile profiler. A range of conditions was encountered, including inversion and postinversion breakup, light to strong winds, and a broad distribution of wind directions. The models used were the MINERVE mass consistency wind model and the SCIPUFF (Second-Order Closure Integrated Puff) transport and diffusion model. These models were expected to provide and use high-resolution three-dimensional wind fields. An objective of the experiment was to determine if these models could provide emergency personnel with high-resolution hazardous plume information for quick response operations. Evaluation of the models focused primarily on their effectiveness as a short-term (1-4 h) predictive tool. These studies showed how they could be used to help direct emergency response following a hazardous material release. For purposes of the experiments, the models were used to direct the deployment of mobile sensors intended to intercept and measure tracer clouds. The April test was conducted to evaluate the performance of the MINERVE wind field generation model. It was evaluated during the early morning radiation inversion, inversion dissipation, and afternoon mixed atmosphere. The average deviations in wind speed and wind direction as compared to observations were within 0.4 m s⁻¹ and less than 10° for up to 2 h after data time. These deviations increased as time from data time increased. It was also found that deviations were greatest during
Does model performance improve with complexity? A case study with three hydrological models
Orth, Rene; Staudinger, Maria; Seneviratne, Sonia I.; Seibert, Jan; Zappa, Massimiliano
2015-04-01
In recent decades considerable progress has been made in climate model development. Following the massive increase in computational power, models became more sophisticated. At the same time also simple conceptual models have advanced. In this study we validate and compare three hydrological models of different complexity to investigate whether their performance varies accordingly. For this purpose we use runoff and also soil moisture measurements, which allow a truly independent validation, from several sites across Switzerland. The models are calibrated in similar ways with the same runoff data. Our results show that the more complex models HBV and PREVAH outperform the simple water balance model (SWBM) in case of runoff but not for soil moisture. Furthermore the most sophisticated PREVAH model shows an added value compared to the HBV model only in case of soil moisture. Focusing on extreme events we find generally improved performance of the SWBM during drought conditions and degraded agreement with observations during wet extremes. For the more complex models we find the opposite behavior, probably because they were primarily developed for prediction of runoff extremes. As expected given their complexity, HBV and PREVAH have more problems with over-fitting. All models show a tendency towards better performance in lower altitudes as opposed to (pre-) alpine sites. The results vary considerably across the investigated sites. In contrast, the different metrics we consider to estimate the agreement between models and observations lead to similar conclusions, indicating that the performance of the considered models is similar at different time scales as well as for anomalies and long-term means. We conclude that added complexity does not necessarily lead to improved performance of hydrological models, and that performance can vary greatly depending on the considered hydrological variable (e.g. runoff vs. soil moisture) or hydrological conditions (floods vs. droughts).
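For intuition about the simple end of the complexity range, a minimal bucket-type water balance model in the spirit of SWBM can be sketched as follows. The storage-scaled evapotranspiration and linear-reservoir runoff below are illustrative assumptions, not the exact SWBM equations:

```python
def simple_water_balance(precip, pet, capacity=100.0, k=0.05, s0=50.0):
    """Minimal conceptual bucket model (illustrative, not SWBM's exact form).

    The soil store S gains daily precipitation, loses potential
    evapotranspiration scaled by saturation S/capacity, and releases
    runoff Q = k * S as a linear reservoir.
    """
    s, runoff = s0, []
    for p, e in zip(precip, pet):
        s += p - e * (s / capacity)       # ET limited by soil wetness
        q = k * s                         # linear-reservoir runoff
        s -= q
        s = min(max(s, 0.0), capacity)    # keep the store within physical bounds
        runoff.append(q)
    return runoff, s

runoff, s_end = simple_water_balance([10.0] * 30, [3.0] * 30)
```

Despite having only three parameters, such a model produces both a runoff and a soil moisture series, which is exactly why the study can compare models of very different complexity on both variables.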
Efficient family-based model checking via variability abstractions
Dimovski, Aleksandar; Al-Sibahi, Ahmad Salim; Brabrand, Claus
2016-01-01
Many software systems are variational: they can be configured to meet diverse sets of requirements. They can produce a (potentially huge) number of related systems, known as products or variants, by systematically reusing common parts. For variational models (variational systems or families of related systems), specialized family-based model checking algorithms allow efficient verification of multiple variants, simultaneously, in a single run. These algorithms, implemented in the tool Snip, scale much better than the "brute force" approach, where all individual systems are verified using the standard version of (single-system) Spin. The variability abstractions are first defined as Galois connections on semantic domains. We then show how to use them for defining abstract family-based model checking, where a variability model is replaced with an abstract version of it.
Stratified flows with variable density: mathematical modelling and numerical challenges.
Murillo, Javier; Navas-Montilla, Adrian
2017-04-01
Stratified flows appear in a wide variety of fundamental problems in hydrological and geophysical sciences. They range from hyperconcentrated floods carrying sediment, causing collapse, landslides and debris flows, to suspended material in turbidity currents where turbulence is a key process. In stratified flows, variable horizontal density is also present. Depending on the case, density varies according to the volumetric concentration of different components or species that can represent transported or suspended materials or soluble substances. Multilayer approaches based on the shallow water equations provide suitable models but are not free from difficulties when moving to the numerical resolution of the governing equations. Considering the variety of temporal and spatial scales, transfer of mass and energy among layers may strongly differ from one case to another. As a consequence, in order to provide accurate solutions, very high order methods of proved quality are demanded. Under these complex scenarios it is necessary to observe that the numerical solution provides the expected order of accuracy but also converges to the physically based solution, which is not an easy task. To this purpose, this work focuses on the use of energy-balanced augmented solvers, in particular the Augmented Roe Flux ADER scheme. References: J. Murillo, P. García-Navarro, Wave Riemann description of friction terms in unsteady shallow flows: Application to water and mud/debris floods. J. Comput. Phys. 231 (2012) 1963-2001. J. Murillo, B. Latorre, P. García-Navarro, A Riemann solver for unsteady computation of 2D shallow flows with variable density. J. Comput. Phys. 231 (2012) 4775-4807. A. Navas-Montilla, J. Murillo, Energy balanced numerical schemes with very high order. The Augmented Roe Flux ADER scheme. Application to the shallow water equations, J. Comput. Phys. 290 (2015) 188-218. A. Navas-Montilla, J. Murillo, Asymptotically and exactly energy balanced augmented flux
Extension of association models to complex chemicals
Avlund, Ane Søgaard
Summary of “Extension of association models to complex chemicals”, Ph.D. thesis by Ane Søgaard Avlund. The subject of this thesis is the application of SAFT-type equations of state (EoS). Accurate and predictive thermodynamic models are important in many industries, including the petroleum industry. The SAFT EoS was developed 20 years ago, and a large number of papers on the subject have been published since, but many issues, both theoretical and practical, still remain unsolved. The SAFT theory does not account for intramolecular association, it can only treat flexible chains, and it does not account for steric self-hindrance for tree-like structures. An important practical problem is how to obtain optimal and consistent parameters. Moreover, multifunctional associating molecules represent a special challenge. In this work two equations of state using the SAFT theory for association are used.
Precalibrating an intermediate complexity climate model
Edwards, Neil R. [The Open University, Earth and Environmental Sciences, Milton Keynes (United Kingdom); Cameron, David [Centre for Ecology and Hydrology, Edinburgh (United Kingdom); Rougier, Jonathan [University of Bristol, Department of Mathematics, Bristol (United Kingdom)
2011-10-15
Credible climate predictions require a rational quantification of uncertainty, but full Bayesian calibration requires detailed estimates of prior probability distributions and covariances, which are difficult to obtain in practice. We describe a simplified procedure, termed precalibration, which provides an approximate quantification of uncertainty in climate prediction, and requires only that uncontroversially implausible values of certain inputs and outputs are identified. The method is applied to intermediate-complexity model simulations of the Atlantic meridional overturning circulation (AMOC) and confirms the existence of a cliff-edge catastrophe in freshwater-forcing input space. When uncertainty in 14 further parameters is taken into account, an implausible, AMOC-off, region remains as a robust feature of the model dynamics, but its location is found to depend strongly on values of the other parameters. (orig.)
Gerretzen, Jan; Szymańska, Ewa; Bart, Jacob; Davies, Antony N; van Manen, Henk-Jan; van den Heuvel, Edwin R; Jansen, Jeroen J; Buydens, Lutgarde M C
2016-09-28
the user. In this work, the approach is illustrated by using PLS as the model and PPRV-FCAM (Predictive Property Ranked Variable using Final Complexity Adapted Models) for variable selection.
Shahrour, M A; Staretz-Chacham, O; Dayan, D; Stephen, J; Weech, A; Damseh, N; Pri Chen, H; Edvardson, S; Mazaheri, S; Saada, A; Hershkovitz, E; Shaag, A; Huizing, M; Abu-Libdeh, B; Gahl, W A; Azem, A; Anikster, Y; Vilboux, T; Elpeleg, O; Malicdan, M C
2017-05-01
Mitochondrial encephalopathies are a heterogeneous group of disorders that usually carry a grave prognosis. Recently a homozygous mutation, Gly372Ser, in the TIMM50 gene was reported, in abstract form, in three sibs who suffered from intractable epilepsy and developmental delay accompanied by 3-methylglutaconic aciduria. We now report on four patients from two unrelated families who presented with severe intellectual disability and seizure disorder, accompanied by slightly elevated lactate level, 3-methylglutaconic aciduria and variable deficiency of mitochondrial complex V. Using exome analysis we identified two homozygous missense mutations, Arg217Trp and Thr252Met, in the TIMM50 gene. The TIMM50 protein is a subunit of the TIM23 complex, the mitochondrial import machinery. It serves as the major receptor in the intermembrane space, binding to proteins which cross the mitochondrial inner membrane on their way to the matrix. The mutations, which affected evolutionarily conserved residues and segregated with the disease in the families, were present neither in large cohorts of control exome analyses nor in our ethnic-specific exome cohort. Given the phenotypic similarity, we conclude that missense mutations in TIMM50 likely manifest as severe intellectual disability and epilepsy accompanied by 3-methylglutaconic aciduria and variable mitochondrial complex V deficiency. 3-Methylglutaconic aciduria is emerging as an important biomarker for mitochondrial dysfunction, in particular for mitochondrial membrane defects.
Chen, Duan; Leon, Arturo S.; Gibson, Nathan L.; Hosseini, Parnian
2016-01-01
Optimizing the operation of a multireservoir system is challenging due to the high dimension of the decision variables that lead to a large and complex search space. A spectral optimization model (SOM), which transforms the decision variables from time domain to frequency domain, is proposed to reduce the dimensionality. The SOM couples a spectral dimensionality-reduction method called Karhunen-Loeve (KL) expansion within the routine of Nondominated Sorting Genetic Algorithm (NSGA-II). The KL expansion is used to represent the decision variables as a series of terms that are deterministic orthogonal functions with undetermined coefficients. The KL expansion can be truncated into fewer significant terms, and consequently, fewer coefficients by a predetermined number. During optimization, operators of the NSGA-II (e.g., crossover) are conducted only on the coefficients of the KL expansion rather than the large number of decision variables, significantly reducing the search space. The SOM is applied to the short-term operation of a 10-reservoir system in the Columbia River of the United States. Two scenarios are considered herein, the first with 140 decision variables and the second with 3360 decision variables. The hypervolume index is used to evaluate the optimization performance in terms of convergence and diversity. The evaluation of optimization performance is conducted for both conventional optimization model (i.e., NSGA-II without KL) and the SOM with different number of KL terms. The results show that the number of decision variables can be greatly reduced in the SOM to achieve a similar or better performance compared to the conventional optimization model. For the scenario with 140 decision variables, the optimal performance of the SOM model is found with six KL terms. For the scenario with 3360 decision variables, the optimal performance of the SOM model is obtained with 11 KL terms.
Dondeynaz, C.; Lopez-Puga, J.; Carmona-Moreno, C.
2012-04-01
Improving Water and Sanitation Services (WSS), being a complex and interdisciplinary issue, passes through collaboration and coordination of different sectors (environment, health, economic activities, governance, and international cooperation). This inter-dependency has been recognised with the adoption of the "Integrated Water Resources Management" principles, which push for the integration of the various dimensions involved in WSS delivery to ensure efficient and sustainable management. Understanding these interrelations appears crucial for decision makers in the water sector, in particular in developing countries where WSS still represent an important leverage for livelihood improvement. In this framework, the Joint Research Centre of the European Commission has developed a coherent database (WatSan4Dev database) containing 29 indicators from environmental, socio-economic, governance and financial aid flows data focusing on developing countries (Celine et al., 2011, under publication). The aim of this work is to model the WatSan4Dev dataset using probabilistic models to identify the key variables influencing, or influenced by, water supply and sanitation access levels. Bayesian network models are suitable to map the conditional dependencies between variables and also allow ordering variables by level of influence on the dependent variable. Separate models have been built for water supply and for sanitation because of their different behaviour. The models are validated against statistical criteria as well as against scientific knowledge and the literature. A two-step approach has been adopted to build the structure of the model: a Bayesian network is first built for each thematic cluster of variables (e.g. governance, agricultural pressure, or human development), keeping a detailed level for later interpretation. A global model is then built based on significant indicators of each cluster modelled previously. The structure of the
Mathematical modeling of variables involved in dissolution testing.
Gao, Zongming
2011-11-01
Dissolution testing is an important technique used for development and quality control of solid oral dosage forms of pharmaceutical products. However, the variability associated with this technique, especially with USP apparatuses 1 and 2, is a concern for both the US Food and Drug Administration and pharmaceutical companies. Dissolution testing involves a number of variables, which can be divided into four main categories: (1) analyst, (2) dissolution apparatus, (3) testing environment, and (4) sample. Both linear and nonlinear models have been used to study dissolution profiles, and various mathematical functions have been used to model the observed data. In this study, several variables, including dissolved gases in the dissolution medium, off-center placement of the test tablet, environmental vibration, and various agitation speeds, were modeled. Mathematical models including Higuchi, Korsmeyer-Peppas, Weibull, and the Noyes-Whitney equation were employed to study the dissolution profile of 10 mg prednisone tablets (NCDA #2) using the USP paddle method. The results showed that the nonlinear models (Korsmeyer-Peppas and Weibull) accurately described the entire dissolution profile. The results also showed that dissolution variables affected dissolution rate constants differently, depending on whether the tablets disintegrated or dissolved.
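As a sketch of the Weibull modeling step, the profile M(t) = M_inf * (1 - exp(-(t/a)^b)) can be fitted by linearization and ordinary least squares. The parameter values and time points below are illustrative, not those measured for the prednisone tablets in the study:

```python
import numpy as np

# Simulated percent-dissolved data following a Weibull profile
# M(t) = M_inf * (1 - exp(-(t/a)**b)); a, b, M_inf are illustrative values.
a_true, b_true, m_inf = 12.0, 1.4, 100.0
t = np.array([5.0, 10.0, 15.0, 20.0, 30.0, 45.0, 60.0])   # minutes
m = m_inf * (1.0 - np.exp(-(t / a_true) ** b_true))

# Linearize: ln(-ln(1 - M/M_inf)) = b*ln(t) - b*ln(a), then fit a line.
y = np.log(-np.log(1.0 - m / m_inf))
b_fit, intercept = np.polyfit(np.log(t), y, 1)            # slope = b
a_fit = np.exp(-intercept / b_fit)                        # recover scale a
print(a_fit, b_fit)
```

On noiseless data the linearized fit recovers a and b essentially exactly; with real measurements, nonlinear least squares on the original profile is usually preferred because the log-log transform amplifies noise near complete dissolution.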
Dirk Temme
2008-12-01
Integrated choice and latent variable (ICLV) models represent a promising new class of models which merge classic choice models with the structural equation approach (SEM) for latent variables. Despite their conceptual appeal, applications of ICLV models in marketing remain rare. We extend previous ICLV applications by first estimating a multinomial choice model and, second, by estimating hierarchical relations between latent variables. An empirical study on travel mode choice clearly demonstrates the value of ICLV models to enhance the understanding of choice processes. In addition to the usually studied directly observable variables such as travel time, we show how abstract motivations such as power and hedonism as well as attitudes such as a desire for flexibility impact on travel mode choice. Furthermore, we show that it is possible to estimate such a complex ICLV model with the widely available structural equation modeling package Mplus. This finding is likely to encourage more widespread application of this appealing model class in the marketing field.
Delineating parameter unidentifiabilities in complex models
Raman, Dhruva V.; Anderson, James; Papachristodoulou, Antonis
2017-03-01
Scientists use mathematical modeling as a tool for understanding and predicting the properties of complex physical systems. In highly parametrized models there often exist relationships between parameters over which model predictions are identical, or nearly identical. These are known as structural or practical unidentifiabilities, respectively. They are hard to diagnose and make reliable parameter estimation from data impossible. They furthermore imply the existence of an underlying model simplification. We describe a scalable method for detecting unidentifiabilities, as well as the functional relations defining them, for generic models. This allows for model simplification, and appreciation of which parameters (or functions thereof) cannot be estimated from data. Our algorithm can identify features such as redundant mechanisms and fast time-scale subsystems, as well as the regimes in parameter space over which such approximations are valid. We base our algorithm on a quantification of regional parametric sensitivity that we call `multiscale sloppiness'. Traditionally, the link between parametric sensitivity and the conditioning of the parameter estimation problem is made locally, through the Fisher information matrix. This is valid in the regime of infinitesimal measurement uncertainty. We demonstrate the duality between multiscale sloppiness and the geometry of confidence regions surrounding parameter estimates made where measurement uncertainty is non-negligible. Further theoretical relationships are provided linking multiscale sloppiness to the likelihood-ratio test. From this, we show that a local sensitivity analysis (as typically done) is insufficient for determining the reliability of parameter estimation, even with simple (non)linear systems. Our algorithm can provide a tractable alternative. We finally apply our methods to a large-scale, benchmark systems biology model of nuclear factor (NF)-κB, uncovering unidentifiabilities.
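The classical local analysis that the paper argues is insufficient in general can still build intuition on a toy model y = a·b·x, where only the product a·b is identifiable. Rank deficiency of the sensitivity matrix S (equivalently, a zero eigenvalue of the Fisher information S^T S) flags a direction in parameter space that the data cannot constrain. The model and parameter values are invented for illustration:

```python
import numpy as np

# Toy model y(x; a, b) = a * b * x: only the product a*b is identifiable.
def predict(theta, x):
    a, b = theta
    return a * b * x

x = np.linspace(0.0, 1.0, 20)
theta0 = np.array([2.0, 3.0])
eps = 1e-6

# Finite-difference sensitivity matrix S with columns dy/da and dy/db.
S = np.column_stack([
    (predict(theta0 + eps * np.eye(2)[i], x) - predict(theta0, x)) / eps
    for i in range(2)
])
# dy/da = b*x and dy/db = a*x are proportional, so S has rank 1:
# one combination of (a, b) moves the predictions, the other does not.
rank = np.linalg.matrix_rank(S, tol=1e-4)
print(rank)
```

The null direction of S^T S here is proportional to (a, -b): scaling a up and b down by matching factors leaves every prediction unchanged, which is precisely a structural unidentifiability.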
Variable-complexity aerodynamic optimization of an HSCT wing using structural wing-weight equations
Hutchison, M. G.; Unger, E. R.; Mason, W. H.; Grossman, B.; Haftka, R. T.
1992-01-01
A new approach for combining conceptual and preliminary design techniques for wing optimization is presented for the high-speed civil transport (HSCT). A wing-shape parametrization procedure is developed which allows the linking of planform and airfoil design variables. Variable-complexity design strategies are used to combine conceptual and preliminary-design approaches, both to preserve interdisciplinary design influences and to reduce computational expense. In the study, conceptual-design-level algebraic equations are used to estimate aircraft weight, supersonic wave drag, friction drag and drag due to lift. The drag due to lift and wave drag are also evaluated using more detailed, preliminary-design-level techniques. The methodology is applied to the minimization of the gross weight of an HSCT that flies at Mach 3.0 with a range of 6500 miles.
Modeling variability and trends in pesticide concentrations in streams
Vecchia, A.V.; Martin, J.D.; Gilliom, R.J.
2008-01-01
A parametric regression model was developed for assessing the variability and long-term trends in pesticide concentrations in streams. The dependent variable is the logarithm of pesticide concentration and the explanatory variables are a seasonal wave, which represents the seasonal variability of concentration in response to seasonal application rates; a streamflow anomaly, which is the deviation of concurrent daily streamflow from average conditions for the previous 30 days; and a trend, which represents long-term (inter-annual) changes in concentration. Application of the model to selected herbicides and insecticides in four diverse streams indicated the model is robust with respect to pesticide type, stream location, and the degree of censoring (proportion of nondetections). An automatic model fitting and selection procedure for the seasonal wave and trend components was found to perform well for the datasets analyzed. Artificial censoring scenarios were used in a Monte Carlo simulation analysis to show that the fitted trends were unbiased and the approximate p-values were accurate for as few as 10 uncensored concentrations during a three-year period, assuming a sampling frequency of 15 samples per year. Trend estimates for the full model were compared with a model without the streamflow anomaly and a model in which the seasonality was modeled using standard trigonometric functions, rather than seasonal application rates. Exclusion of the streamflow anomaly resulted in substantial increases in the mean-squared error and decreases in power for detecting trends. Incorrectly modeling the seasonal structure of the concentration data resulted in substantial estimation bias and moderate increases in mean-squared error and decreases in power. © 2008 American Water Resources Association.
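The regression structure can be sketched on synthetic data. Trigonometric terms stand in here for the paper's application-rate-based seasonal wave (the very simplification the paper cautions against for real data), and all coefficients are invented:

```python
import numpy as np

rng = np.random.default_rng(2)
n = 45                                    # ~15 samples/year for 3 years
time = np.sort(rng.uniform(0.0, 3.0, n))  # sampling times in years
flow_anom = rng.normal(size=n)            # deviation of flow from its 30-day mean
true_trend = -0.2                         # log-concentration decline per year

# Synthetic log concentration: trend + seasonal cycle + flow effect + noise
logc = (true_trend * time
        + 0.8 * np.sin(2 * np.pi * time) + 0.3 * np.cos(2 * np.pi * time)
        + 0.5 * flow_anom + 0.1 * rng.normal(size=n))

# Design matrix: intercept, linear trend, annual trig terms, streamflow anomaly
X = np.column_stack([np.ones(n), time,
                     np.sin(2 * np.pi * time), np.cos(2 * np.pi * time),
                     flow_anom])
beta, *_ = np.linalg.lstsq(X, logc, rcond=None)
trend_hat = beta[1]
print(trend_hat)
```

Dropping the flow_anom column from X and refitting shows the paper's point directly: the trend estimate becomes noisier because streamflow-driven variability is left in the residuals.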
Wind Power Curve Modeling in Simple and Complex Terrain
Bulaevskaya, V. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Wharton, S. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Irons, Z. [Enel Green Power North America, Andover, MA (United States); Qualley, G. [Pentalum, Colleyville, TX (United States)]
2015-02-09
Our previous work on wind power curve modeling using statistical models focused on a location with a moderately complex terrain in the Altamont Pass region in northern California (CA). The work described here is the follow-up to that work, but at a location with a simple terrain in northern Oklahoma (OK). The goal of the present analysis was to determine the gain in predictive ability afforded by adding information beyond the hub-height wind speed, such as wind speeds at other heights, as well as other atmospheric variables, to the power prediction model at this new location and compare the results to those obtained at the CA site in the previous study. While we reach some of the same conclusions at both sites, many results reported for the CA site do not hold at the OK site. In particular, using the entire vertical profile of wind speeds improves the accuracy of wind power prediction relative to using the hub-height wind speed alone at both sites. However, in contrast to the CA site, the rotor equivalent wind speed (REWS) performs almost as well as the entire profile at the OK site. Another difference is that at the CA site, adding wind veer as a predictor significantly improved the power prediction accuracy. The same was true for that site when air density was added to the model separately instead of using the standard air density adjustment. At the OK site, these additional variables result in no significant benefit for the prediction accuracy.
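The hub-height-only versus full-profile comparison can be illustrated with a toy least-squares fit on synthetic data (the data-generating model, weights, and coefficients here are invented for illustration):

```python
import numpy as np

rng = np.random.default_rng(1)
n = 2000

# Synthetic wind speeds at three heights; hub height is the middle level.
base = rng.gamma(4.0, 2.0, n)            # hub-height speed (m/s)
shear = rng.normal(0.0, 0.5, n)          # vertical-shear perturbation
v_lo, v_hub, v_hi = base - shear, base, base + shear

# "True" power responds to a rotor-averaged speed, so shear carries
# information that the hub-height speed alone cannot see.
v_rotor = 0.2 * v_lo + 0.5 * v_hub + 0.3 * v_hi
power = np.clip(0.5 * v_rotor**3, 0.0, 2000.0) + rng.normal(0.0, 20.0, n)

def fit_rmse(features, y):
    # In-sample RMSE of an ordinary least-squares fit with an intercept
    X = np.column_stack([np.ones(len(y))] + features)
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return float(np.sqrt(np.mean((y - X @ beta) ** 2)))

rmse_hub = fit_rmse([v_hub, v_hub**3], power)
rmse_profile = fit_rmse([v_lo, v_hub, v_hi, v_lo**3, v_hub**3, v_hi**3], power)
print(rmse_hub, rmse_profile)
```

Because the profile model nests the hub-height model, its in-sample error can only be lower; the interesting question studied in the paper is how large that gain is at a given site.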
Cambaliza, M. O. L.; Bogner, J. E.; Green, R. B.; Shepson, P. B.; Thoma, E. D.; Foster-wittig, T. A.; Spokas, K.
2014-12-01
Atmospheric methane is a powerful greenhouse gas that is responsible for about 17% of the total direct radiative forcing from long-lived greenhouse gases (IPCC 2013). While the global emission of methane is relatively well quantified, the temporal and spatial variability of methane emissions from individual area or point sources are still poorly understood. Using 4 field methods (aircraft-based mass balance, tracer correlation, vertical radial plume mapping, and static chambers) and a new field-validated process-based model (California Landfill Methane Inventory Model, CALMIM 5.4), we investigated both the total emissions from a central Indiana landfill as well as the partitioned emissions inclusive of methanotrophic oxidation for the various cover soils. This landfill is an upwind source for the city of Indianapolis, so the resolution of m2 to km2 scale emissions, as well as understanding the temporal variability for this complex area source, contributes to improved regional inventory calculations. Emissions for the site as a whole were measured using both an aircraft-based mass balance approach as well as a ground-based tracer correlation method, permitting direct comparison of the strengths, limitations, and uncertainties of these two approaches. Because US landfills are highly-engineered and composed of daily, intermediate, and final cover areas with differing thicknesses, composition, and implementation of gas recovery, we also expected different emission signatures and strengths from the various cover areas. Thus we also deployed static chambers and vertical radial plume mapping to quantify the spatial variability of emissions from the thinner daily and intermediate cover areas. Understanding the daily, seasonal and annual emission rates from a landfill is not trivial, and usually requires a combination of measurement and modeling approaches. Thus, our unique data set provides an opportunity to gain an improved understanding of the emissions from a complex
Modeling heart rate variability including the effect of sleep stages
Soliński, Mateusz; Gierałtowski, Jan; Żebrowski, Jan
2016-02-01
We propose a model for heart rate variability (HRV) of a healthy individual during sleep, with the assumption that heart rate variability is predominantly a random process. Autonomic nervous system activity has different properties during different sleep stages, and this affects many physiological systems, including the cardiovascular system. Different properties of HRV can be observed during each particular sleep stage. We believe that taking the sleep architecture into account is crucial for modeling human nighttime HRV. The stochastic model of HRV introduced by Kantelhardt et al. was used as the starting point. We studied the statistical properties of sleep in healthy adults, analyzing 30 polysomnographic recordings, which provided realistic information about sleep architecture. Next, we generated synthetic hypnograms and included them in the modeling of nighttime RR interval series. The results of standard HRV linear analysis and of nonlinear analysis (Shannon entropy, Poincaré plots, and multiscale multifractal analysis) show that, in comparison with real data, the HRV signals obtained from our model have very similar properties, in particular including the multifractal characteristics at different time scales. The model described in this paper is discussed in the context of normal sleep. However, its construction is such that it should make it possible to model heart rate variability in sleep disorders as well. This possibility is briefly discussed.
Using Perspective to Model Complex Processes
Kelsey, R.L.; Bisset, K.R.
1999-04-04
The notion of perspective, when supported in an object-based knowledge representation, can facilitate better abstractions of reality for modeling and simulation. The object modeling of complex physical and chemical processes is made more difficult in part due to the poor abstractions of state and phase changes available in these models. The notion of perspective can be used to create different views to represent the different states of matter in a process. These techniques can lead to a more understandable model. Additionally, the ability to record the progress of a process from start to finish is problematic. It is desirable to have a historic record of the entire process, not just the end result of the process. A historic record should facilitate backtracking and re-start of a process at different points in time. The same representation structures and techniques can be used to create a sequence of process markers to represent a historic record. By using perspective, the sequence of markers can have multiple and varying views tailored for a particular user's context of interest.
Rössler, Ole; Bosshard, Thomas; Weingartner, Rolf
2016-04-01
A key issue for adaptation planning is information from projections about changes in extremes. Climate projections of meteorological extremes and their downscaling are a challenge in their own right. Yet, at least in hydrology, meteorological extremes are not necessarily hydrological extremes: these can also result from a sequence of days with only moderate meteorological conditions. Such sequences are called "storylines". In climate change impact assessment studies it is relevant to know whether these meteorological storylines are represented in regional climate models, and how well bias correction can preserve or improve their representation. One storyline leading to hydrological extremes is rain-on-snow events, and more specifically rain-on-snowfall events. These events challenge the regional climate model and the bias correction in terms of representing absolute values and inter-variable dependences. This study makes use of rain-on-snow storylines to evaluate the performance of regional climate models and a bias correction method in reproducing complex inter-variable dependencies. First, we applied a hydrological model to a mesoscale catchment in Switzerland that is known to be affected by rain-on-snow events. Second, the ERA-Interim-driven regional climate model RCA4.5, developed at SMHI, with a spatial resolution of 0.11 * 0.11 degrees, was used to drive the hydrological model. Third, bias correction of the RCM was carried out by applying the distribution-based scaling (DBS) bias-correction method (Yang et al., 2010) developed at SMHI. The bias-corrected data then also served as driving input to the hydrological model. Based on the simulated runoff, as well as simulated precipitation, temperature, and snow pack data, an algorithm to detect rain-on-snow events was applied. Finally, the presence or absence of rain-on-snow events for the three different climate input data, ERA.RCA4.5, DBS-corrected ERA.RC4, and observed climate, are evaluated within
Selected topics in the classical theory of functions of a complex variable
Heins, Maurice
2014-01-01
Elegant and concise, this text is geared toward advanced undergraduate students acquainted with the theory of functions of a complex variable. The treatment presents such students with a number of important topics from the theory of analytic functions that may be addressed without erecting an elaborate superstructure. These include some of the theory's most celebrated results, which seldom find their way into a first course. After a series of preliminaries, the text discusses properties of meromorphic functions, the Picard theorem, and harmonic and subharmonic functions. Subsequent topics incl
SPACES OF ANALYTIC FUNCTIONS REPRESENTED BY DIRICHLET SERIES OF TWO COMPLEX VARIABLES
Hazem Shaba Behnam; G. S. Srivastava
2002-01-01
We consider the space X of all analytic functions f(s1, s2) = ∑_{m,n=1}^∞ a_mn exp(s1 λ_m + s2 μ_n) of two complex variables s1 and s2, equipping it with the natural locally convex topology and using the growth parameter, the order of f, as defined recently by the authors. Under this topology X becomes a Fréchet space. Apart from finding the characterization of continuous linear functionals and linear transformations on X, we have obtained necessary and sufficient conditions for a double sequence in X to be a proper basis.
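The series in the abstract can be set out cleanly as follows (the monotonicity conditions on the exponent sequences are the standard assumptions for such spaces, not stated explicitly in the abstract):

```latex
f(s_1, s_2) \;=\; \sum_{m,n=1}^{\infty} a_{mn}\, e^{\,s_1 \lambda_m + s_2 \mu_n},
\qquad 0 < \lambda_1 < \lambda_2 < \cdots \to \infty,
\quad 0 < \mu_1 < \mu_2 < \cdots \to \infty .
```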
The several transformation formula in several complex variables and its applications
Anonymous
2010-01-01
In this paper, by the method of global analysis, the authors give a new global integral transformation formula and obtain the Plemelj formula with Hadamard principal value of higher-order partial derivatives for the integral of Bochner-Martinelli type on a closed piecewise smooth orientable manifold in Cn. Moreover, the authors obtain the composition formula and the Poincaré-Bertrand extended formula of the corresponding singular integral. As an application of these results, the authors also study a higher-order Cauchy boundary problem and a regularization problem of a higher-order linear complex differential singular integral equation with variable coefficients.
On hydrological model complexity, its geometrical interpretations and prediction uncertainty
Arkesteijn, E.C.M.M.; Pande, S.
2013-01-01
Knowledge of hydrological model complexity can aid selection of an optimal prediction model out of a set of available models. Optimal model selection is formalized as selection of the least complex model out of a subset of models that have lower empirical risk. This may be considered equivalent to
Robust Structural Equation Modeling with Missing Data and Auxiliary Variables
Yuan, Ke-Hai; Zhang, Zhiyong
2012-01-01
The paper develops a two-stage robust procedure for structural equation modeling (SEM) and an R package "rsem" to facilitate the use of the procedure by applied researchers. In the first stage, M-estimates of the saturated mean vector and covariance matrix of all variables are obtained. Those corresponding to the substantive variables…
Optical Test of Local Hidden-Variable Model
WU XiaoHua; ZONG HongShi; PANG HouRong
2001-01-01
An inequality is deduced from local realism and a supplementary assumption. This inequality defines an experiment that can actually be performed with present technology to test local hidden-variable models, and it is violated by quantum mechanics by a factor of 1.92, while it can be simplified into a form in which just two measurements are required.
Environmental Concern and Sociodemographic Variables: A Study of Statistical Models
Xiao, Chenyang; McCright, Aaron M.
2007-01-01
Studies of the social bases of environmental concern over the past 30 years have produced somewhat inconsistent results regarding the effects of sociodemographic variables, such as gender, income, and place of residence. The authors argue that model specification errors resulting from violation of two statistical assumptions (interval-level…
Multiple Imputation of Predictor Variables Using Generalized Additive Models
de Jong, Roel; van Buuren, Stef; Spiess, Martin
2016-01-01
The sensitivity of multiple imputation methods to deviations from their distributional assumptions is investigated using simulations, where the parameters of scientific interest are the coefficients of a linear regression model, and values in predictor variables are missing at random. The performanc
Modeling quasi-static magnetohydrodynamic turbulence with variable energy flux
Verma, Mahendra K
2014-01-01
In quasi-static MHD, experiments and numerical simulations reveal that the energy spectrum is steeper than Kolmogorov's $k^{-5/3}$ spectrum. To explain this observation, we construct turbulence models based on variable energy flux, which is caused by the Joule dissipation. In the first model, which is applicable to small interaction parameters, the energy spectrum is a power law, but with a spectral exponent steeper than -5/3. In the other limit of large interaction parameters, the second model predicts an exponential energy spectrum and flux. The model predictions are in good agreement with the numerical results.
Five-Dimensional Cosmological Model with Variable G and Λ
H. Baysal; İ. Yilmaz
2007-01-01
Einstein's field equations with G and Λ both varying with time are considered in the presence of a perfect fluid for a five-dimensional cosmological model in a way which conserves the energy momentum tensor of the matter content. Several sets of explicit solutions in the five-dimensional Kaluza-Klein type cosmological models with variable G and Λ are obtained. The diminishment of the extra dimension with the evolution of the universe for the five-dimensional model is exhibited. The physical properties of the models are examined.
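Variable-G, variable-Λ models of this type are typically built from the following equations (a standard setup consistent with the abstract, not copied from the paper):

```latex
R_{ij} - \tfrac{1}{2} R\, g_{ij} \;=\; -8\pi G(t)\, T_{ij} + \Lambda(t)\, g_{ij},
\qquad T^{ij}{}_{;j} = 0 ,
```

where imposing conservation of the matter energy-momentum tensor separately forces G and Λ to vary jointly, in the simplest perfect-fluid case through the constraint \(8\pi \dot{G}\,\rho + \dot{\Lambda} = 0\).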
Hidden variable models for quantum mechanics can have local parts
Larsson, Jan-Ake
2009-01-01
We present an explicit nonlocal nonsignaling model which has a nontrivial local part and is compatible with quantum mechanics. This model constitutes a counterexample to Colbeck and Renner's statement [Phys. Rev. Lett. 101, 050403 (2008)] that "any hidden variable model can only be compatible with quantum mechanics if its local part is trivial". Furthermore, we examine Colbeck and Renner's definition of "local part" and find that, in the case of models reproducing the quantum predictions for the singlet state, it is a restriction equivalent to the conjunction of nonsignaling and trivial local part.
Stability Analysis of a Variable Meme Transmission Model
Reem Al-Amoudi; Salma Al-Tuwairqi; Sarah Al-Sheikh
2014-01-01
Meme propagation is a common form of social interaction. Understanding the dynamics of meme transmission enables one to find the conditions that lead to persistence or disappearance of memes. In this paper we qualitatively analyze a mathematical model of variable meme transmission. Two equilibrium points of the model are examined: the meme-free equilibrium and the meme-existence equilibrium. The reproduction number R₀ governing the generation of new memes is found. Local and global stability of the equilibrium ...
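The two-equilibrium structure described above can be illustrated with a generic SIR-style transmission model (these equations and parameter values are an illustrative stand-in, not the paper's actual model):

```python
# Illustrative SIR-style meme model:
#   S' = mu*N - beta*S*I/N - mu*S      (people unaware of the meme)
#   I' = beta*S*I/N - (gamma + mu)*I   (people spreading the meme)
# Basic reproduction number: R0 = beta / (gamma + mu).
beta, gamma, mu, N = 0.6, 0.2, 0.05, 1000.0
R0 = beta / (gamma + mu)

S, I = N - 1.0, 1.0
dt = 0.01
for _ in range(200_000):                 # forward-Euler integration to t = 2000
    dS = mu * N - beta * S * I / N - mu * S
    dI = beta * S * I / N - (gamma + mu) * I
    S, I = S + dt * dS, I + dt * dI

# With R0 > 1 the meme persists near the meme-existence equilibrium
# I* = mu * N * (R0 - 1) / beta; with R0 < 1 it converges to the
# meme-free equilibrium I = 0.
i_star = mu * N * (R0 - 1.0) / beta
print(round(R0, 2), round(I, 1), round(i_star, 1))
```

Here R0 = 2.4 > 1, so the trajectory settles at the meme-existence equilibrium rather than dying out.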
A Model Study of Complex Behavior in the Belousov-Zhabotinskii Reaction.
Lindberg, David Mark
1988-12-01
We have studied the complex oscillatory behavior in a model of the Belousov-Zhabotinskii (BZ) reaction in a continuously-fed stirred tank reactor (CSTR). The model consisted of a set of nonlinear ordinary differential equations derived from a reduced mechanism of the chemical system. These equations were integrated numerically on a computer, which yielded the concentrations of the constituent chemicals as functions of time. In addition, solutions were tracked as functions of a single parameter, the stability of the solutions was determined, and bifurcations of the solutions were located and studied. The intent of this study was to use this BZ model to explore further a region of complex oscillatory behavior found in experimental investigations, the most thorough of which revealed an alternating periodic-chaotic (P-C) sequence of states. A P-C sequence was discovered in the model which showed the same qualitative features as the experimental sequence. In order to better understand the P-C sequence, a detailed study was conducted in the vicinity of the P-C sequence, with two experimentally accessible parameters as control variables. This study mapped out the bifurcation sets, and included examination of the dynamics of the stable periodic, unstable periodic, and chaotic oscillatory motion. Observations made from the model results revealed a rough symmetry which suggests a new way of looking at the P-C sequence. Other nonlinear phenomena uncovered in the model were boundary and interior crises, several codimension-two bifurcations, and similarities in the shapes of areas of stability for periodic orbits in two-parameter space. Each earlier model study of this complex region involved only a limited one-parameter scan and had limited success in producing agreement with experiments. In contrast, for those regions of complex behavior that have been studied experimentally, the observations agree qualitatively with our model results. Several new predictions of the model
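A widely used reduced BZ mechanism of the kind described is the scaled three-variable Oregonator, shown here as a representative example (the study's exact reduced scheme may differ):

```latex
\epsilon \frac{dx}{d\tau} = q y - x y + x(1 - x), \qquad
\delta \frac{dy}{d\tau} = -q y - x y + 2 f z, \qquad
\frac{dz}{d\tau} = x - z ,
```

where x, y, z are scaled concentrations of HBrO\(_2\), Br\(^-\), and the oxidized catalyst, and f, q, ε, δ are kinetic parameters; in a CSTR, additional flow terms proportional to the inverse residence time appear in each equation.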
Variable bit rate video traffic modeling by multiplicative multifractal model
Huang Xiaodong; Zhou Yuanhua; Zhang Rongfu
2006-01-01
Multiplicative multifractal processes can model video traffic well. The multiplier distributions in the multiplicative multifractal model for video traffic are investigated, and it is found that the Gaussian distribution is not suitable for describing the multipliers on small time scales. A new statistical distribution, the symmetric Pareto distribution, is introduced and applied instead of the Gaussian for the multipliers on those scales. Based on this, the algorithm is updated so that the symmetric Pareto distribution and the Gaussian distribution are used to model video traffic on different time scales. The simulation results demonstrate that the algorithm models video traffic more accurately.
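The cascade construction underlying such models can be sketched as follows; the "symmetric Pareto" split below is my own Pareto-tailed stand-in with a random sign, not the paper's exact distribution:

```python
import numpy as np

rng = np.random.default_rng(2)

def pareto_split(size, alpha=3.0):
    # Pareto-tailed perturbation of a 0.5/0.5 split with a random sign --
    # a simple stand-in for the paper's "symmetric Pareto" multipliers.
    mag = rng.pareto(alpha, size)
    sign = rng.choice([-1.0, 1.0], size)
    return np.clip(0.5 + 0.1 * sign * mag, 0.05, 0.95)

# Conservative multiplicative cascade: start with the total traffic on one
# interval and repeatedly split each interval's mass between its two halves.
traffic = np.array([1.0])
for _ in range(12):                      # 2**12 = 4096 leaf intervals
    r = pareto_split(traffic.size)
    children = np.empty(2 * traffic.size)
    children[0::2] = traffic * r
    children[1::2] = traffic * (1.0 - r)
    traffic = children

print(traffic.size, round(float(traffic.sum()), 6))
```

Because each split conserves mass, the total traffic is preserved exactly while the leaf-level series develops the bursty, heavy-tailed structure characteristic of multifractal traffic.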
Pinnola, Francesco Paolo
2016-10-01
The statistical characterization of the response of an oscillator with non-integer order damping under Gaussian noise represents an important challenge in modern stochastic mechanics. Problems of this kind appear in several different settings (wave propagation in viscoelastic media, Brownian motion, fluid dynamics, RLC circuits, etc.). The aim of this paper is to provide a stochastic characterization of the stationary response of a linear fractional oscillator forced by normal white noise. In particular, this paper shows a new method to obtain the correlation function by exact complex spectral moments. These complex quantities contain all the information needed to describe the random processes, but in the considered case their analytical evaluation requires some mathematical manipulation. For this reason the complex spectral moment characterization is used in conjunction with a fractional-order state variable analysis. This kind of analysis makes it possible to find the exact expression of the complex spectral moments, and the correlation function, by using the Mellin transform. Moreover, the proposed approach provides an analytical expression for the response variance of the fractional oscillator. The capability and efficiency of the present method are shown in numerical examples in which the correlation and variance of the fractional oscillator response are found and compared with those obtained by Monte Carlo simulations.
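The linear fractional oscillator described can be written in one common form (the paper's exact normalization may differ):

```latex
m\,\ddot{x}(t) + c\,\big(D^{\alpha} x\big)(t) + k\,x(t) = w(t),
\qquad 0 < \alpha < 1 ,
```

where \(D^{\alpha}\) is a fractional derivative (Caputo or Riemann-Liouville) and \(w(t)\) is Gaussian white noise; the complex spectral moments are then tied to the Mellin transform of the response correlation function.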
Complex Variable Methods for 3D Applied Mathematics: 3D Twistors and the biharmonic equation
Shaw, William T
2010-01-01
In applied mathematics generally and fluid dynamics in particular, the role of complex variable methods is normally confined to two-dimensional motion and the association of points with complex numbers via the assignment w = x+i y. In this framework 2D potential flow can be treated through the use of holomorphic functions and biharmonic flow through a simple, but superficially non-holomorphic extension. This paper explains how to elevate the use of complex methods to three dimensions, using Penrose's theory of twistors as adapted to intrinsically 3D and non-relativistic problems by Hitchin. We first summarize the equations of 3D steady viscous fluid flow in their basic geometric form. We then explain the theory of twistors for 3D, resulting in complex holomorphic representations of solutions to harmonic and biharmonic problems. It is shown how this intrinsically holomorphic 3D approach reduces naturally to the well-known 2D situations when there is translational or rotational symmetry, and an example is given...
A metric for attributing variability in modelled streamflows
Shoaib, Syed Abu; Marshall, Lucy; Sharma, Ashish
2016-10-01
Significant gaps in our present understanding of hydrological systems lead to enhanced uncertainty in key modelling decisions. This study proposes a method, the "Quantile Flow Deviation (QFD)", for attributing forecast variability to different sources across different streamflow regimes. By using a quantile-based metric, we can assess the change in uncertainty across individual percentiles, thereby allowing uncertainty to be expressed as a function of magnitude and time. As a result, one can address selective sources of uncertainty depending on whether low or high flows (say) are of interest. By way of a case study, we demonstrate the usefulness of the approach for estimating the relative importance of model parameter identification, objective functions, and model structures as sources of streamflow forecast uncertainty. We use FUSE (Framework for Understanding Structural Errors) to implement our methods, allowing selection among multiple different model structures. A cross-catchment comparison is done for two different catchments: the Leaf River in Mississippi, USA and the Bass River in Victoria, Australia. Two different approaches to parameter estimation are presented that demonstrate the statistic: one based on GLUE, the other based on optimization. The results presented in this study suggest that the determination of the model structure for the design catchment should be given priority, but that objective function selection together with parameter identifiability can lead to significant variability in results. By examining the QFD across multiple flow quantiles, the ability of certain models and optimization routines to constrain variability for different flow conditions is demonstrated.
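One simple reading of a quantile-based deviation metric is sketched below on invented data (the exact QFD formula is the paper's; this only illustrates the idea of measuring ensemble spread quantile by quantile):

```python
import numpy as np

rng = np.random.default_rng(3)
days, members = 2000, 8

# Hypothetical reference flow series and an ensemble of simulations whose
# errors grow with flow magnitude (all data invented for illustration).
reference = rng.lognormal(1.0, 0.8, days)
eps = rng.normal(0.0, 0.05, members)                 # per-member distortion
ensemble = reference[None, :] ** (1.0 + eps[:, None])

# At each flow quantile, measure the relative spread of that quantile
# across ensemble members.
qs = np.linspace(0.05, 0.95, 19)
member_q = np.quantile(ensemble, qs, axis=1)         # shape (19, members)
qfd = member_q.std(axis=1) / member_q.mean(axis=1)

print(round(float(qfd[-1]), 3), round(float(qfd[9]), 3))
```

Because the synthetic members diverge more at large flows, the deviation is largest at the high quantiles, which is exactly the kind of regime-dependent attribution the metric is designed to expose.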
Sparse modeling of spatial environmental variables associated with asthma.
Chang, Timothy S; Gangnon, Ronald E; David Page, C; Buckingham, William R; Tandias, Aman; Cowan, Kelly J; Tomasallo, Carrie D; Arndt, Brian G; Hanrahan, Lawrence P; Guilbert, Theresa W
2015-02-01
Geographically distributed environmental factors influence the burden of diseases such as asthma. Our objective was to identify sparse environmental variables associated with asthma diagnosis gathered from a large electronic health record (EHR) dataset while controlling for spatial variation. An EHR dataset from the University of Wisconsin's Family Medicine, Internal Medicine and Pediatrics Departments was obtained for 199,220 patients aged 5-50 years over a three-year period. Each patient's home address was geocoded to one of 3456 geographic census block groups. Over one thousand block group variables were obtained from a commercial database. We developed a Sparse Spatial Environmental Analysis (SASEA). Using this method, the environmental variables were first dimensionally reduced with sparse principal component analysis. Logistic thin plate regression spline modeling was then used to identify block group variables associated with asthma from the sparse principal components. The addresses of patients from the EHR dataset were distributed throughout the majority of Wisconsin's geography. Logistic thin plate regression spline modeling captured the spatial variation of asthma. Four sparse principal components identified via model selection consisted of food-at-home, dog ownership, household size, and disposable income variables. In rural areas, dog ownership and renter-occupied housing units from significant sparse principal components were associated with asthma. Our main contribution is the incorporation of sparsity in spatial modeling. SASEA sequentially added sparse principal components to logistic thin plate regression spline modeling. This method allowed the association of geographically distributed environmental factors with asthma using EHR and environmental datasets. SASEA can be applied to other diseases with environmental risk factors.
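The dimension-reduction step can be sketched with a crude stand-in for sparse PCA: ordinary PCA followed by soft-thresholding of the loadings (the paper uses a real sparse-PCA algorithm; the data and threshold here are invented):

```python
import numpy as np

rng = np.random.default_rng(4)
n, p = 500, 40

# Hypothetical block-group variables: the first five share a common factor
# (e.g. related socioeconomic measures); the rest are independent noise.
factor = rng.normal(0.0, 1.0, n)
X = rng.normal(0.0, 1.0, (n, p))
X[:, :5] += 1.5 * factor[:, None]

# Stand-in for sparse PCA: ordinary PCA loadings, then soft-thresholding.
Xc = X - X.mean(axis=0)
_, _, Vt = np.linalg.svd(Xc, full_matrices=False)
loadings = Vt[0]                                    # first principal direction
sparse = np.sign(loadings) * np.maximum(np.abs(loadings) - 0.1, 0.0)

support = np.nonzero(sparse)[0]                     # variables kept by the component
print(sorted(support.tolist()))
```

The thresholded component keeps only the correlated block of variables, which is the property that makes the downstream spatial regression interpretable.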
Alternative cokriging model for variable-fidelity surrogate modeling
Han, Zhong Hua; Zimmermann, Ralf; Goertz, Stefan
2012-01-01
to construct global approximation models of the aerodynamic coefficients as well as the drag polar of an RAE 2822 airfoil. The kriging and cokriging models for the moment coefficient show that the poor space-filling properties of the quasi Monte Carlo sampling of the RANS simulations leaves a noticeable gap...
Complex principal component and correlation structure of 16 yeast genomic variables.
Theis, Fabian J; Latif, Nadia; Wong, Philip; Frishman, Dmitrij
2011-09-01
A quickly growing number of characteristics reflecting various aspects of gene function and evolution can be either measured experimentally or computed from DNA and protein sequences. The study of pairwise correlations between such quantitative genomic variables as well as collective analysis of their interrelations by multidimensional methods have delivered crucial insights into the processes of molecular evolution. Here, we present a principal component analysis (PCA) of 16 genomic variables from Saccharomyces cerevisiae, the largest data set analyzed so far. Because many missing values and potential outliers hinder the direct calculation of principal components, we introduce the application of Bayesian PCA. We confirm some of the previously established correlations, such as evolutionary rate versus protein expression, and reveal new correlations such as those between translational efficiency, phosphorylation density, and protein age. Although the first principal component primarily contrasts genomic change and protein expression, the second component separates variables related to gene existence and expressed protein functions. Enrichment analysis on genes affecting variable correlations unveils classes of influential genes. For example, although ribosomal and nuclear transport genes make important contributions to the correlation between protein isoelectric point and molecular weight, protein synthesis and amino acid metabolism genes help cause the lack of significant correlation between propensity for gene loss and protein age. We present the novel Quagmire database (Quantitative Genomics Resource) which allows exploring relationships between more genomic variables in three model organisms-Escherichia coli, S. cerevisiae, and Homo sapiens (http://webclu.bio.wzw.tum.de:18080/quagmire).
Ants (Formicidae): models for social complexity.
Smith, Chris R; Dolezal, Adam; Eliyahu, Dorit; Holbrook, C Tate; Gadau, Jürgen
2009-07-01
The family Formicidae (ants) is composed of more than 12,000 described species that vary greatly in size, morphology, behavior, life history, ecology, and social organization. Ants occur in most terrestrial habitats and are the dominant animals in many of them. They have been used as models to address fundamental questions in ecology, evolution, behavior, and development. The literature on ants is extensive, and the natural history of many species is known in detail. Phylogenetic relationships for the family, as well as within many subfamilies, are known, enabling comparative studies. Their ease of sampling and ecological variation makes them attractive for studying populations and questions relating to communities. Their sociality and variation in social organization have contributed greatly to an understanding of complex systems, division of labor, and chemical communication. Ants occur in colonies composed of tens to millions of individuals that vary greatly in morphology, physiology, and behavior; this variation has been used to address proximate and ultimate mechanisms generating phenotypic plasticity. Relatedness asymmetries within colonies have been fundamental to the formulation and empirical testing of kin and group selection theories. Genomic resources have been developed for some species, and a whole-genome sequence for several species is likely to follow in the near future; comparative genomics in ants should provide new insights into the evolution of complexity and sociogenomics. Future studies using ants should help establish a more comprehensive understanding of social life, from molecules to colonies.
Decomposition method of complex optimization model based on global sensitivity analysis
Qiu, Qingying; Li, Bing; Feng, Peien; Gao, Yu
2014-07-01
Current research on decomposition methods for complex optimization models is mostly based on the principle of disciplines, problems, or components. However, numerous coupling variables appear among the decomposed sub-models, making the decomposed optimization inefficient and its results poor. Although some collaborative optimization methods have been proposed to handle the coupling variables, there is no original strategy for reducing the coupling degree among the decomposed sub-models when one starts decomposing a complex optimization model. Therefore, this paper proposes a decomposition method based on global sensitivity information. In this method, the complex optimization model is decomposed based on the principle of minimizing the sum of sensitivities between design functions and design variables belonging to different sub-models. Design functions and design variables that are sensitive to each other are assigned to the same sub-model as much as possible, to reduce the impact on other sub-models caused by changes of coupling variables in one sub-model. Two different collaborative optimization models of a gear reducer were built separately in the multidisciplinary design optimization software iSIGHT; the optimization results showed that the decomposition method proposed in this paper requires fewer analyses and increases computational efficiency by 29.6%. The new decomposition method is also successfully applied to the complex optimization problem of hydraulic excavator working devices, which shows that the proposed approach can reduce the mutual coupling degree between sub-models. This research proposes a decomposition method based on global sensitivity information, which minimizes the linkages among sub-models after decomposition, provides a reference for decomposing complex optimization models, and has practical engineering significance.
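The assignment principle can be sketched with a small greedy grouping on a made-up sensitivity matrix (the matrix values, group structure, and greedy rule are all illustrative assumptions, not the paper's algorithm):

```python
import numpy as np

# Hypothetical global-sensitivity matrix |df_i/dx_j|: rows are design
# functions, columns are design variables (values invented for illustration).
S = np.array([
    [0.9, 0.8, 0.1, 0.0, 0.1],
    [0.7, 0.9, 0.2, 0.1, 0.0],
    [0.1, 0.0, 0.8, 0.9, 0.1],
    [0.0, 0.2, 0.7, 0.8, 0.2],
])

# Two sub-models, each owning a cluster of design functions.
groups = {0: [0, 1], 1: [2, 3]}

# Greedy assignment: give each variable to the sub-model whose functions are
# most sensitive to it, so that cross-sub-model coupling stays small.
assign = {g: [] for g in groups}
for j in range(S.shape[1]):
    best = max(groups, key=lambda g: S[groups[g], j].sum())
    assign[best].append(j)

# Residual coupling: sensitivity of each group's functions to the variables
# owned by the other group.
coupling = sum(float(S[np.ix_(groups[g], assign[h])].sum())
               for g in groups for h in groups if g != h)
print(assign, round(coupling, 2))
```

Variables 0-1 land with the first function cluster and 2-4 with the second, leaving only the small off-block sensitivities as residual coupling between sub-models.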
Panduro, Toke Emil; Thorsen, Bo Jellesmark
2014-01-01
Hedonic models in environmental valuation studies have grown in terms of number of transactions and number of explanatory variables. We focus on the practical challenge of model reduction, when aiming for reliable parsimonious models, sensitive to omitted variable bias and multicollinearity. We...
Analysis models for variables associated with breastfeeding duration
Edson Theodoro dos S. Neto
2013-09-01
OBJECTIVE To analyze the factors associated with breastfeeding duration using two statistical models. METHODS A population-based cohort study was conducted with 86 mothers and newborns from two areas primarily covered by the National Health System, with high rates of infant mortality, in Vitória, Espírito Santo, Brazil. During 30 months, 67 (78%) children and mothers were visited seven times at home by trained interviewers, who filled out survey forms. Data on food and sucking habits and on socioeconomic and maternal characteristics were collected. Variables were analyzed by Cox regression models, with duration of breastfeeding as the dependent variable, and by logistic regression models, in which the dependent variable was the presence of a breastfeeding child at different post-natal ages. RESULTS In the logistic regression model, pacifier sucking (adjusted Odds Ratio: 3.4; 95%CI 1.2-9.55) and bottle feeding (adjusted Odds Ratio: 4.4; 95%CI 1.6-12.1) increased the chance of weaning a child before one year of age. Variables associated with breastfeeding duration in the Cox regression model were pacifier sucking (adjusted Hazard Ratio 2.0; 95%CI 1.2-3.3) and bottle feeding (adjusted Hazard Ratio 2.0; 95%CI 1.2-3.5). However, protective factors (maternal age and family income) differed between the models. CONCLUSIONS Risk and protective factors associated with cessation of breastfeeding may be analyzed by different statistical regression models. Cox regression models are adequate to analyze such factors in longitudinal studies.
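The adjusted odds ratios above come with 95% confidence intervals. For a single unadjusted 2x2 exposure table, the same quantities can be sketched in a few lines; the counts below are hypothetical, chosen only to illustrate the Wald computation, and are not the study's data:

```python
import math

def odds_ratio_ci(a, b, c, d, z=1.96):
    """Odds ratio and Wald 95% CI for a 2x2 table:
    a = exposed & weaned,   b = exposed & still breastfed,
    c = unexposed & weaned, d = unexposed & still breastfed."""
    or_ = (a * d) / (b * c)
    se = math.sqrt(1/a + 1/b + 1/c + 1/d)          # SE of log(OR)
    lo = math.exp(math.log(or_) - z * se)
    hi = math.exp(math.log(or_) + z * se)
    return or_, lo, hi

# Hypothetical counts for illustration only
or_, lo, hi = odds_ratio_ci(30, 10, 15, 25)
```

An interval that excludes 1 (as in the pacifier and bottle-feeding results quoted above) is what flags the exposure as a statistically significant risk factor.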
Sensitivity Analysis in a Complex Marine Ecological Model
Marcos D. Mateus
2015-05-01
Sensitivity analysis (SA) has long been recognized as part of best practices to assess whether a particular model is suitable to inform decisions, despite its uncertainties. SA is a commonly used approach for identifying the important parameters that dominate model behavior. As such, SA addresses two elementary questions in the modeling exercise: how sensitive is the model to changes in individual parameter values, and which parameters or associated processes have the most influence on the results. In this paper we report on a local SA performed on a complex marine biogeochemical model that simulates oxygen, organic matter and nutrient cycles (N, P and Si) in the water column, as well as the dynamics of biological groups such as producers, consumers and decomposers. SA was performed using a "one at a time" parameter perturbation method, and a color-code matrix was developed for result visualization. The outcome of this study was the identification of the key parameters influencing model performance, a particularly helpful insight for the subsequent calibration exercise. The color-code matrix methodology also proved effective for a clear identification of the parameters with the most impact on selected model variables.
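The "one at a time" perturbation idea is simple enough to sketch generically: each parameter is moved up and down by a fixed relative amount while the others are held at their baseline values, and the normalized response is recorded. This toy (with a made-up two-parameter model, not the biogeochemical model of the paper) assumes a scalar, nonzero baseline output:

```python
def one_at_a_time_sa(model, params, delta=0.1):
    """Perturb each parameter by +/-delta (relative) and return the
    relative change in model output, i.e. a crude local elasticity.
    Assumes the baseline output is nonzero."""
    base = model(params)
    sens = {}
    for name, value in params.items():
        up = dict(params); up[name] = value * (1 + delta)
        dn = dict(params); dn[name] = value * (1 - delta)
        # central difference, normalized by the baseline output
        sens[name] = (model(up) - model(dn)) / (2 * delta * base)
    return sens
```

Ranking the absolute values of these indices across output variables is what a color-code matrix like the one in the paper visualizes.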
Realistic MHD Modelling of Cataclysmic Variable Spin-Down
Lascelles, Alex; Garraffo, Cecilia; Drake, Jeremy J.; Cohen, Ofer
2017-01-01
The orbital evolution of cataclysmic variables with periods above the "period gap" (>3 hrs) is governed by angular momentum loss via the magnetized wind of the unevolved secondary star. The usual prescription to study such systems takes into account only the magnetic field of the secondary and assumes its field is dipolar. It has been shown that introduction of the white dwarf and its magnetic field can significantly impact the wind’s structure, leading to a change in angular momentum loss rate and evolutionary timescale by an order of magnitude. Furthermore, the complexity of the magnetic field can drastically alter stellar spin-down rates. We explore the effects of orbital separation and magnetic field configuration on mass and angular momentum loss rates through 3-D magnetohydrodynamic simulations. We present the results of a study of cataclysmic variable orbital evolution including these new ingredients.
Solutions of two-factor models with variable interest rates
Li, Jinglu; Clemons, C. B.; Young, G. W.; Zhu, J.
2008-12-01
The focus of this work is on numerical solutions to two-factor option pricing partial differential equations with variable interest rates. Two interest rate models, the Vasicek model and the Cox-Ingersoll-Ross model (CIR), are considered. Emphasis is placed on the definition and implementation of boundary conditions for different portfolio models, and on appropriate truncation of the computational domain. An exact solution to the Vasicek model and an exact solution for the price of bonds convertible to stock at expiration under a stochastic interest rate are derived. The exact solutions are used to evaluate the accuracy of the numerical simulation schemes. For the numerical simulations the pricing solution is analyzed as the market completeness decreases from the ideal complete level to one with higher volatility of the interest rate and a slower mean-reverting environment. Simulations indicate that the CIR model yields more reasonable results than the Vasicek model in a less complete market.
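Both short-rate models named above differ only in their diffusion term: Vasicek has additive noise, while CIR scales the noise by the square root of the rate. A minimal Euler-Maruyama sketch (an illustrative discretization, not the paper's PDE scheme) makes the comparison concrete:

```python
import math
import random

def simulate_short_rate(model, r0, kappa, theta, sigma, dt, steps, rng=None):
    """Euler-Maruyama path of a mean-reverting short rate.
    model = "vasicek": dr = kappa*(theta - r)*dt + sigma*dW
    model = "cir":     dr = kappa*(theta - r)*dt + sigma*sqrt(r)*dW"""
    rng = rng or random.Random(0)
    r, path = r0, [r0]
    for _ in range(steps):
        dw = rng.gauss(0.0, math.sqrt(dt))
        if model == "vasicek":
            r += kappa * (theta - r) * dt + sigma * dw
        elif model == "cir":
            # sqrt(max(r,0)) keeps the diffusion defined if r dips below 0
            r += kappa * (theta - r) * dt + sigma * math.sqrt(max(r, 0.0)) * dw
        else:
            raise ValueError(model)
        path.append(r)
    return path
```

The square-root diffusion is why CIR keeps rates nonnegative (under the Feller condition) while Vasicek can produce negative rates, one reason the paper finds CIR more reasonable in a less complete market.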
Two-step variable selection in quantile regression models
FAN Yali
2015-06-01
We propose a two-step variable selection procedure for high dimensional quantile regression, in which the dimension of the covariates, p_n, is much larger than the sample size n. In the first step, we apply an l1 penalty, and we demonstrate that the first-step penalized estimator with the LASSO penalty can reduce the model from ultra-high dimensional to one whose size has the same order as that of the true model, with the selected model covering the true model. The second step excludes the remaining irrelevant covariates by applying the adaptive LASSO penalty to the reduced model obtained from the first step. Under some regularity conditions, we show that our procedure enjoys model selection consistency. We conduct a simulation study and a real data analysis to evaluate the finite sample performance of the proposed approach.
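The first-step screening can be sketched with a naive coordinate-descent LASSO (soft-thresholding updates for the squared-error objective; the paper works with the quantile loss, so this is an illustrative stand-in, not the authors' estimator). The adaptive second step would reuse the same solver with per-coefficient penalties lam/|beta_j| taken from the first fit:

```python
def lasso_cd(X, y, lam, n_iter=200):
    """Coordinate-descent LASSO for 0.5*||y - X b||^2 + lam*||b||_1.
    X: list of rows, y: list of targets. Naive O(n*p^2) per sweep."""
    n, p = len(X), len(X[0])
    beta = [0.0] * p
    for _ in range(n_iter):
        for j in range(p):
            rho, z = 0.0, 0.0
            for i in range(n):
                # partial residual excluding feature j
                pred = sum(X[i][k] * beta[k] for k in range(p) if k != j)
                rho += X[i][j] * (y[i] - pred)
                z += X[i][j] ** 2
            # soft-threshold update
            if rho > lam:
                beta[j] = (rho - lam) / z
            elif rho < -lam:
                beta[j] = (rho + lam) / z
            else:
                beta[j] = 0.0
    return beta
```

Coefficients driven exactly to zero are the covariates discarded by the screen; only the survivors enter the second, adaptive step.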
ASYMPTOTICS OF MEAN TRANSFORMATION ESTIMATORS WITH ERRORS IN VARIABLES MODEL
CUI Hengjian
2005-01-01
This paper addresses the estimation and asymptotics of the mean transformation θ = E[h(X)] of a random variable X based on n i.i.d. observations from the errors-in-variables model Y = X + v, where v is a measurement error with a known distribution and h(·) is a known smooth function. The asymptotics of the deconvolution kernel estimator are given for ordinary smooth error distributions, and those of the expectation extrapolation estimator for normal error distributions. Under some mild regularity conditions, consistency and asymptotic normality are obtained for both types of estimator. Simulations show that they perform well.
Multiple Discrete Endogenous Variables in Weakly-Separable Triangular Models
Sung Jae Jun
2016-02-01
We consider a model in which an outcome depends on two discrete treatment variables, where one treatment is given before the other. We formulate a three-equation triangular system with weak separability conditions. Without assuming that assignment is random, we establish the identification of an average structural function using two-step matching. We also consider decomposing the effect of the first treatment into direct and indirect effects, which are shown to be identified by the proposed methodology. We allow both treatment variables to be non-binary and do not appeal to an identification-at-infinity argument.
A Review of Variable Slicing in Fused Deposition Modeling
Nadiyapara, Hitesh Hirjibhai; Pande, Sarang
2016-06-01
The paper presents a literature survey in the field of fused deposition of plastic wires, especially slicing and deposition using extrusion of thermoplastic wires. Various researchers working on the computation of deposition paths have applied their algorithms to variable slicing. A flowchart is also proposed for the slicing and deposition process. An algorithm developed by a previous researcher is implemented on the fused deposition modelling machine. To demonstrate the capabilities of the machine, a case study has been taken, using a manipulated G-code fed to the machine. Two slicing strategies, uniform slicing and variable slicing, have been evaluated. In uniform slicing, the slice thickness used for deposition varies from 0.1 to 0.4 mm. In variable slicing, the thickness varies from 0.1 mm in the polar region to 0.4 mm in the equatorial region. The time and number of slices required to deposit a hemisphere of 20 mm diameter under each strategy have been compared.
Recurrence-plot-based measures of complexity and their application to heart-rate-variability data.
Marwan, Norbert; Wessel, Niels; Meyerfeldt, Udo; Schirdewan, Alexander; Kurths, Jürgen
2002-08-01
The knowledge of transitions between regular, laminar or chaotic behaviors is essential to understand the underlying mechanisms behind complex systems. While several linear approaches are often insufficient to describe such processes, there are several nonlinear methods that, however, require rather long time observations. To overcome these difficulties, we propose measures of complexity based on vertical structures in recurrence plots and apply them to the logistic map as well as to heart-rate-variability data. For the logistic map these measures enable us not only to detect transitions between chaotic and periodic states, but also to identify laminar states, i.e., chaos-chaos transitions. The traditional recurrence quantification analysis fails to detect the latter transitions. Applying our measures to the heart-rate-variability data, we are able to detect and quantify the laminar phases before a life-threatening cardiac arrhythmia occurs thereby facilitating a prediction of such an event. Our findings could be of importance for the therapy of malignant cardiac arrhythmias.
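The vertical-structure measure the authors propose, laminarity, is the fraction of recurrence points that sit inside vertical lines of at least a minimum length in the recurrence plot. A minimal scalar-series sketch (fixed threshold, no embedding; a simplification of the full RQA machinery in the paper):

```python
def recurrence_matrix(x, eps):
    """Binary recurrence plot: R[i][j] = 1 iff |x_i - x_j| <= eps."""
    n = len(x)
    return [[1 if abs(x[i] - x[j]) <= eps else 0 for j in range(n)]
            for i in range(n)]

def laminarity(R, vmin=2):
    """Fraction of recurrence points lying in vertical lines of
    length >= vmin (the LAM measure based on vertical structures)."""
    n = len(R)
    total = sum(sum(row) for row in R)
    in_lines = 0
    for j in range(n):           # scan each column for vertical runs
        run = 0
        for i in range(n):
            if R[i][j]:
                run += 1
            else:
                if run >= vmin:
                    in_lines += run
                run = 0
        if run >= vmin:
            in_lines += run
    return in_lines / total if total else 0.0
```

High laminarity flags the laminar (intermittent) phases that, per the abstract, precede life-threatening arrhythmias in the heart-rate data.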
THREE-DIMENSIONAL VARIABLES ALLOCATION IN MESOSCALE MODELS
刘宇迪; 陆汉城
2004-01-01
Forecasts and simulations vary with the allocation of 3-dimensional variables in mesoscale models, yet little attention has been paid to which allocation optimizes the simulation. On the basis of linear nonhydrostatic anelastic equations, this paper compares, mainly graphically, the computational dispersion with analytical solutions for four kinds of 3-dimensional meshes commonly found in mesoscale models, in terms of frequency and horizontal and vertical group velocities. The result indicates that the 3-D mesh C/CP has the best computational dispersion, followed by Z/LZ and Z/LY, with C/L performing worst. The C/CP mesh is therefore the most desirable allocation in the design of nonhydrostatic baroclinic models. The mesh has, however, larger errors for shorter horizontal wavelengths, so for the simulation of smaller horizontal scales the horizontal grid intervals have to be shortened to reduce the errors. Additionally, in view of the dominant use of the C/CP mesh in finite-difference models, it should be used in conjunction with the Z/LZ or Z/LY mesh if variables are allocated in spectral models.
Quantum Computing and Hidden Variables II: The Complexity of Sampling Histories
Aaronson, S
2004-01-01
This paper shows that, if we could examine the entire history of a hidden variable, then we could efficiently solve problems that are believed to be intractable even for quantum computers. In particular, under any hidden-variable theory satisfying a reasonable axiom called "indifference to the identity," we could solve the Graph Isomorphism and Approximate Shortest Vector problems in polynomial time, as well as an oracle problem that is known to require quantum exponential time. We could also search an N-item database using O(N^{1/3}) queries, as opposed to O(N^{1/2}) queries with Grover's search algorithm. On the other hand, the N^{1/3} bound is optimal, meaning that we could probably not solve NP-complete problems in polynomial time. We thus obtain the first good example of a model of computation that appears slightly more powerful than the quantum computing model.
Nonlinear Dynamical Modeling and Forecast of ENSO Variability
Feigin, Alexander; Mukhin, Dmitry; Gavrilov, Andrey; Seleznev, Aleksey; Loskutov, Evgeny
2017-04-01
New methodology of empirical modeling and forecast of nonlinear dynamical system variability [1] is applied to study of ENSO climate system. The methodology is based on two approaches: (i) nonlinear decomposition of data [2], that provides low-dimensional embedding for further modeling, and (ii) construction of empirical model in the form of low dimensional random dynamical ("stochastic") system [3]. Three monthly data sets are used for ENSO modeling and forecast: global sea surface temperature anomalies, troposphere zonal wind speed, and thermocline depth; all data sets are limited by 30 S, 30 N and have horizontal resolution 10x10 . We compare results of optimal data decomposition as well as prognostic skill of the constructed models for different combinations of involved data sets. We also present comparative analysis of ENSO indices forecasts fulfilled by our models and by IRI/CPC ENSO Predictions Plume. [1] A. Gavrilov, D. Mukhin, E. Loskutov, A. Feigin, 2016: Construction of Optimally Reduced Empirical Model by Spatially Distributed Climate Data. 2016 AGU Fall Meeting, Abstract NG31A-1824. [2] D. Mukhin, A. Gavrilov, E. Loskutov , A.Feigin, J.Kurths, 2015: Principal nonlinear dynamical modes of climate variability, Scientific Reports, rep. 5, 15510; doi: 10.1038/srep15510. [3] Ya. Molkov, D. Mukhin, E. Loskutov, A. Feigin, 2012: Random dynamical models from time series. Phys. Rev. E, Vol. 85, n.3.
Model atmospheres with periodic shocks. [pulsations and mass loss in variable stars
Bowen, G. H.
1989-01-01
The pulsation of a long-period variable star generates shock waves which dramatically affect the structure of the star's atmosphere and produce conditions that lead to rapid mass loss. Numerical modeling of atmospheres with periodic shocks is being pursued to study the processes involved and the evolutionary consequences for the stars. It is characteristic of these complex dynamical systems that most effects result from the interaction of various time-dependent processes.
Phase-field modeling of fracture in variably saturated porous media
Cajuhi, T.; Sanavia, L.; De Lorenzis, L.
2017-08-01
We propose a mechanical and computational model to describe the coupled problem of poromechanics and cracking in variably saturated porous media. A classical poromechanical formulation is adopted and coupled with a phase-field formulation for the fracture problem. The latter has the advantage of being able to reproduce arbitrarily complex crack paths without introducing discontinuities on a fixed mesh. The obtained simulation results show good qualitative agreement with desiccation experiments on soils from the literature.
Multi-scale variability of winds in the complex topography of southwestern Norway
Marius O. Jonassen
2012-01-01
Multi-scale variability of winds in the complex terrain of southwestern Norway is investigated using up to 20 yr of observations from nine automatic weather stations and reanalysis data. Significant differences between the large- and local-scale winds are found. These differences are mainly governed by the large-scale topography of Southern Norway. Winds from the southeast and statically stable flow from the northwest are found to be significantly reduced at the ground level due to large-scale wake and blocking effects. Southwesterly and northeasterly winds are orographically enhanced. At a local scale, there are differences in the wind speed distributions between the surface stations, both in space and time. These differences can to a large extent be quantified in terms of the Weibull distribution function and associated with the respective geographical locations as discretised in four characteristic surface categories: offshore, inland, coast and mountain. The inland category is found to be associated with relatively low but variable wind speeds, whereas the coastal and offshore locations are dominated by more steady and stronger winds. The mountain wind speed distribution is fundamentally different from the others; it shares the variability with the inland locations but the higher average wind speed with the other categories.
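Fitting a Weibull distribution to a station's wind-speed record can be done with a simple method-of-moments sketch: the coefficient of variation depends only on the shape parameter k, so k can be recovered by a one-dimensional search and the scale c follows from the mean. This is a generic illustration, not the estimation procedure used in the paper:

```python
import math

def weibull_cv(k):
    """Coefficient of variation of a Weibull(k, c) distribution
    (independent of the scale c); strictly decreasing in k."""
    g1 = math.gamma(1 + 1 / k)
    g2 = math.gamma(1 + 2 / k)
    return math.sqrt(g2 / g1 ** 2 - 1)

def fit_weibull_moments(mean, std):
    """Recover (shape k, scale c) from sample mean and std by
    bisection on the CV, then c = mean / Gamma(1 + 1/k)."""
    cv = std / mean
    lo, hi = 0.1, 20.0
    for _ in range(100):
        mid = 0.5 * (lo + hi)
        if weibull_cv(mid) > cv:   # CV too large -> need larger k
            lo = mid
        else:
            hi = mid
    k = 0.5 * (lo + hi)
    c = mean / math.gamma(1 + 1 / k)
    return k, c
```

A low fitted k (high CV) corresponds to the variable inland category described above; steadier coastal and offshore records yield larger k.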
Modelling internal boundary-layer development in a region with a complex coastline
Batchvarova, E.; Cai, X.; Gryning, Sven-Erik
1999-01-01
The purpose of this paper is to test the ability of two quite different models to simulate the combined spatial and temporal variability of the internal boundary layer in an area of complex terrain and coastline during one day. The simple applied slab model of Gryning and Batchvarova, and the Col...
Decadal Variability of Clouds and Comparison with Climate Model Simulations
Su, H.; Shen, T. J.; Jiang, J. H.; Yung, Y. L.
2014-12-01
An apparent climate regime shift occurred around 1998/1999, when the steady increase of global-mean surface temperature appeared to hit a hiatus. Coherent decadal variations are found in atmospheric circulation and hydrological cycles. Using 30-year cloud observations from the International Satellite Cloud Climatology Project, we examine the decadal variability of clouds and associated cloud radiative effects on surface warming. Empirical Orthogonal Function analysis is performed. After removing the seasonal cycle and ENSO signal in the 30-year data, we find that the leading EOF modes clearly represent a decadal variability in cloud fraction, well correlated with the indices of Pacific Decadal Oscillation (PDO) and Atlantic Multidecadal Oscillation (AMO). The cloud radiative effects associated with decadal variations of clouds suggest a positive cloud feedback, which would reinforce the global warming hiatus by a net cloud cooling after 1998/1999. Climate model simulations driven by observed sea surface temperature are compared with satellite observed cloud decadal variability.
Variable Star Signature Classification using Slotted Symbolic Markov Modeling
Johnston, Kyle B
2016-01-01
With the advent of digital astronomy, new benefits and new challenges have been presented to the modern day astronomer. No longer can the astronomer rely on manual processing; instead, the profession as a whole has begun to adopt more advanced computational means. This paper focuses on the construction and application of a novel time-domain signature extraction methodology and the development of a supporting supervised pattern classification algorithm for the identification of variable stars. A methodology for the reduction of stellar variable observations (time-domain data) into a novel feature space representation is introduced. The methodology, referred to as Slotted Symbolic Markov Modeling (SSMM), has a number of advantages which are demonstrated to be beneficial, specifically for the supervised classification of stellar variables. It is shown that the methodology outperformed a baseline standard methodology on a standardized set of stellar light curve data. The performance on ...
Wang, Lijuan; Yan, Yong; Wang, Xue; Wang, Tao
2017-03-01
Input variable selection is an essential step in the development of data-driven models for environmental, biological and industrial applications. Through input variable selection to eliminate the irrelevant or redundant variables, a suitable subset of variables is identified as the input of a model. Meanwhile, through input variable selection the complexity of the model structure is simplified and the computational efficiency is improved. This paper describes the procedures of the input variable selection for the data-driven models for the measurement of liquid mass flowrate and gas volume fraction under two-phase flow conditions using Coriolis flowmeters. Three advanced input variable selection methods, including partial mutual information (PMI), genetic algorithm-artificial neural network (GA-ANN) and tree-based iterative input selection (IIS) are applied in this study. Typical data-driven models incorporating support vector machine (SVM) are established individually based on the input candidates resulting from the selection methods. The validity of the selection outcomes is assessed through an output performance comparison of the SVM based data-driven models and sensitivity analysis. The validation and analysis results suggest that the input variables selected from the PMI algorithm provide more effective information for the models to measure liquid mass flowrate while the IIS algorithm provides a fewer but more effective variables for the models to predict gas volume fraction.
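The mutual-information idea behind the PMI selection method above can be sketched with a crude histogram estimator: discretize each candidate input and the output, then score how much knowing one reduces uncertainty about the other. This toy uses plain equal-width binning and omits the "partial" conditioning on already-selected inputs that PMI adds:

```python
import math
from collections import Counter

def mutual_information(x, y, bins=4):
    """Histogram estimate of MI (in nats) between two numeric series,
    using equal-width binning. Crude, but enough to rank candidates."""
    def disc(v):
        lo, hi = min(v), max(v)
        w = (hi - lo) / bins or 1.0           # guard all-equal series
        return [min(int((t - lo) / w), bins - 1) for t in v]
    dx, dy = disc(x), disc(y)
    n = len(x)
    px, py, pxy = Counter(dx), Counter(dy), Counter(zip(dx, dy))
    mi = 0.0
    for (a, b), c in pxy.items():
        pj = c / n
        mi += pj * math.log(pj / ((px[a] / n) * (py[b] / n)))
    return mi
```

Ranking candidate inputs by this score (highest first) and stopping when the gain flattens is the simplest form of information-based input selection.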
Analytical models for complex swirling flows
Borissov, A.; Hussain, V.
1996-11-01
We develop a new class of analytical solutions of the Navier-Stokes equations for swirling flows, and suggest ways to predict and control such flows occurring in various technological applications. We view momentum accumulation on the axis as a key feature of swirling flows and consider vortex-sink flows on curved axisymmetric surfaces with an axial flow. We show that these solutions model swirling flows in a cylindrical can, whirlpools, tornadoes, and cosmic swirling jets. The singularity of these solutions on the flow axis is removed by matching them with near-axis Schlichting and Long's swirling jets. The matched solutions model flows with very complex patterns, consisting of up to seven separation regions with recirculatory 'bubbles' and vortex rings. We apply the matched solutions for computing flows in the Ranque-Hilsch tube, in the meniscus of electrosprays, in vortex breakdown, and in an industrial vortex burner. The simple analytical solutions allow a clear understanding of how different control parameters affect the flow and guide selection of optimal parameter values for desired flow features. These solutions permit extension to other problems (such as heat transfer and chemical reaction) and have the potential of being significantly useful for further detailed investigation by direct or large-eddy numerical simulations as well as laboratory experimentation.
Discrete Element Modeling of Complex Granular Flows
Movshovitz, N.; Asphaug, E. I.
2010-12-01
Granular materials occur almost everywhere in nature, and are actively studied in many fields of research, from food industry to planetary science. One approach to the study of granular media, the continuum approach, attempts to find a constitutive law that determines the material's flow, or strain, under applied stress. The main difficulty with this approach is that granular systems exhibit different behavior under different conditions, behaving at times as an elastic solid (e.g. pile of sand), at times as a viscous fluid (e.g. when poured), or even as a gas (e.g. when shaken). Even if all these physics are accounted for, numerical implementation is made difficult by the wide and often discontinuous ranges in continuum density and sound speed. A different approach is Discrete Element Modeling (DEM). Here the goal is to directly model every grain in the system as a rigid body subject to various body and surface forces. The advantage of this method is that it treats all of the above regimes in the same way, and can easily deal with a system moving back and forth between regimes. But as a granular system typically contains a multitude of individual grains, the direct integration of the system can be very computationally expensive. For this reason most DEM codes are limited to spherical grains of uniform size. However, spherical grains often cannot replicate the behavior of real world granular systems. A simple pile of spherical grains, for example, relies on static friction alone to keep its shape, while in reality a pile of irregular grains can maintain a much steeper angle by interlocking force chains. In the present study we employ a commercial DEM, nVidia's PhysX Engine, originally designed for the game and animation industry, to simulate complex granular flows with irregular, non-spherical grains. This engine runs as a multi threaded process and can be GPU accelerated. We demonstrate the code's ability to physically model granular materials in the three regimes
Variable Density Effects in Stochastic Lagrangian Models for Turbulent Combustion
2016-07-20
PDF methods have proven useful in modelling turbulent combustion, primarily because convection and complex reactions can be treated without the need...modelled transport equation for the joint PDF of velocity, turbulent frequency and composition (species mass fractions and enthalpy). The advantages of...PDF methods in dealing with chemical reaction and convection are preserved irrespective of density variation. Since the density variation in a typical
A new approach for modelling variability in residential construction projects
Mehrdad Arashpour
2013-06-01
The construction industry is plagued by long cycle times caused by variability in the supply chain. Variations or undesirable situations are the result of factors such as non-standard practices, work site accidents, inclement weather conditions and faults in design. This paper uses a new approach for modelling variability in construction by linking relative variability indicators to processes. Mass homebuilding sector was chosen as the scope of the analysis because data is readily available. Numerous simulation experiments were designed by varying size of capacity buffers in front of trade contractors, availability of trade contractors, and level of variability in homebuilding processes. The measurements were shown to lead to an accurate determination of relationships between these factors and production parameters. The variability indicator was found to dramatically affect the tangible performance measures such as home completion rates. This study provides for future analysis of the production homebuilding sector, which may lead to improvements in performance and a faster product delivery to homebuyers.
Zhilkin, A G; Mason, P A; 10.1134/S1063772912040087
2012-01-01
We performed 3D MHD calculations of stream accretion in cataclysmic variable stars for which the white dwarf primary star possesses a strong and complex magnetic field. These calculations are motivated by observations of polars; cataclysmic variables containing white dwarfs with magnetic fields sufficiently strong to prevent the formation of an accretion disk. So an accretion stream flows from the L1 point and impacts directly onto one or more spots on the surface of the white dwarf. Observations indicate that the white dwarf, in some binaries, possesses a complex (non-dipolar) magnetic field. We perform simulations of 10 polars or equivalently one asynchronous polar at 10 different beat phases. Our models have an aligned dipole plus quadrupole magnetic field centered on the white dwarf primary. We find that for a sufficiently strong quadrupole component an accretion spot occurs near the magnetic equator for slightly less than half of our simulations while a polar accretion zone is active for most of the rest...
Modeling temporal and spatial variability of crop yield
Bonetti, S.; Manoli, G.; Scudiero, E.; Morari, F.; Putti, M.; Teatini, P.
2014-12-01
In a world of increasing food insecurity the development of modeling tools capable of supporting on-farm decision making processes is highly needed to formulate sustainable irrigation practices in order to preserve water resources while maintaining adequate crop yield. The design of these practices starts from the accurate modeling of soil-plant-atmosphere interaction. We present an innovative 3D Soil-Plant model that couples 3D hydrological soil dynamics with a mechanistic description of plant transpiration and photosynthesis, including a crop growth module. Because of its intrinsically three dimensional nature, the model is able to capture spatial and temporal patterns of crop yield over large scales and under various climate and environmental factors. The model is applied to a 25 ha corn field in the Venice coastland, Italy, that has been continuously monitored over the years 2010 and 2012 in terms of both hydrological dynamics and yield mapping. The model results satisfactorily reproduce the large variability observed in maize yield (from 2 to 15 ton/ha). This variability is shown to be connected to the spatial heterogeneities of the farmland, which is characterized by several sandy paleo-channels crossing organic-rich silty soils. Salt contamination of soils and groundwater in a large portion of the area strongly affects the crop yield, especially outside the paleo-channels, where measured salt concentrations are lower than the surroundings. The developed model includes a simplified description of the effects of salt concentration in soil water on transpiration. The results seem to capture accurately the effects of salt concentration and the variability of the climatic conditions occurred during the three years of measurements. This innovative modeling framework paves the way to future large scale simulations of farmland dynamics.
Tu Zhenhan; Cao Hongzhe
2009-01-01
This article gives a normality criterion for families of holomorphic mappings of several complex variables into PN(C) for moving hypersurfaces in pointwise general position, related to a theorem of Eremenko.
Design of Model Following Variable Structure Controller for Three-axle Vehicle
管西强; 张建武; 屈求真
2003-01-01
An optimal control procedure is developed for the front and rear wheels of a three-axle vehicle moving on a complex typical road, based on a model-following variable structure control strategy. The actual vehicle may be considered an uncertain system: the cornering stiffnesses of the front and rear wheels and the external disturbances vary within a limited range. The model-following variable structure control method is used to control the steering operations of both the front and rear wheels of the vehicle, so that the steering responses of the vehicle follow those of the reference model. Numerical results obtained from computer simulation demonstrate that the control system can cope with the effects of parameter perturbations and outside disturbances.
Ellis, Jules L
2014-04-01
It is shown that a unidimensional monotone latent variable model for binary items implies a restriction on the relative sizes of item correlations: The negative logarithm of the correlations satisfies the triangle inequality. This inequality is not implied by the condition that the correlations are nonnegative, the criterion that coefficient H exceeds 0.30, or manifest monotonicity. The inequality implies both a lower bound and an upper bound for each correlation between two items, based on the correlations of those two items with every possible third item. It is discussed how this can be used in Mokken's (A theory and procedure of scale-analysis, Mouton, The Hague, 1971) scale analysis.
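The restriction stated in this abstract is directly checkable on data: requiring the negative logarithms of positive inter-item correlations to satisfy the triangle inequality, -log r_ik <= -log r_ij + (-log r_jk), is equivalent to r_ik >= r_ij * r_jk for every ordered triple of items. A minimal sketch (the correlation matrices are hypothetical, not from the paper):

```python
from itertools import permutations

def triangle_violations(R):
    """Return the ordered triples (i, j, k) for which the negative-log
    correlations violate the triangle inequality. R is a symmetric
    matrix (list of lists) of positive inter-item correlations."""
    n = len(R)
    bad = []
    for i, j, k in permutations(range(n), 3):
        # -log r_ik <= -log r_ij - log r_jk  <=>  r_ik >= r_ij * r_jk
        if R[i][k] < R[i][j] * R[j][k] - 1e-12:
            bad.append((i, j, k))
    return bad

# Hypothetical correlation matrices for illustration only.
ok = [[1.0, 0.4, 0.3],
      [0.4, 1.0, 0.5],
      [0.3, 0.5, 1.0]]        # 0.3 >= 0.4 * 0.5: consistent
bad = [[1.0, 0.6, 0.1],
       [0.6, 1.0, 0.6],
       [0.1, 0.6, 1.0]]       # 0.1 < 0.6 * 0.6: violates the bound

print(triangle_violations(ok))                 # -> []
print(len(triangle_violations(bad)) > 0)       # -> True
```

A violation for any triple rules out the unidimensional monotone latent variable model, which is the diagnostic use suggested for Mokken scale analysis.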
Intraseasonal Variability in an Aquaplanet General Circulation Model
Adam H Sobel
2010-04-01
An aquaplanet atmospheric general circulation model simulation with a robust intraseasonal oscillation is analyzed. The SST boundary condition resembles the observed December-April average with continents omitted, although with the meridional SST gradient reduced to one-quarter of that observed poleward of 10° latitude. Slow, regular eastward propagation at 5 m s^-1 in winds and precipitation, with amplitude greater than that of the observed MJO, is clearly identified in unfiltered fields. Local precipitation rate is a strongly nonlinear, increasing function of column precipitable water, as in observations. The model intraseasonal oscillation resembles a moisture mode that is destabilized by wind-evaporation feedback and that propagates eastward through advection of anomalous humidity by the sum of perturbation winds and mean westerly flow. A series of sensitivity experiments is conducted to test hypothesized mechanisms. A mechanism-denial experiment in which intraseasonal latent heat flux variability is removed largely eliminates intraseasonal wind and precipitation variability. Reducing the lower-troposphere westerly flow in the warm pool by reducing the zonal SST gradient slows eastward propagation, supporting the importance of horizontal advection by the low-level wind to eastward propagation. A zonally symmetric SST basic state produces weak and unrealistic intraseasonal variability on 30-90 day timescales, indicating the importance of mean low-level westerly winds, and hence a realistic phase relationship between precipitation and surface flux anomalies, for producing realistic tropical intraseasonal variability.
Classification criteria of syndromes by latent variable models
Petersen, Janne
2010-01-01
The thesis has two parts: a clinical part, studying the dimensions of human immunodeficiency virus-associated lipodystrophy syndrome (HALS) by latent class models, and a more statistical part, investigating how to predict scores of latent variables so these can be used in subsequent regression analyses. Part 1: HALS comprises different phenotypic changes of peripheral lipoatrophy and central lipohypertrophy. There are several different definitions of HALS and no consensus on the number of phenotypes. Many of the definitions consist of counting fulfilled criteria on markers and do not include patient characteristics. These methods may erroneously reduce multiplicity either by combining markers of different phenotypes or by mixing HALS with other processes such as aging. Latent class models identify homogeneous groups of patients based on sets of variables, for example symptoms. As no gold...
Explicit estimating equations for semiparametric generalized linear latent variable models
Ma, Yanyuan
2010-07-05
We study generalized linear latent variable models without requiring a distributional assumption on the latent variables. Using a geometric approach, we derive consistent semiparametric estimators. We demonstrate that these models have a property similar to that of a sufficient complete statistic, which enables us to simplify the estimating procedure and to formulate the semiparametric estimating equations explicitly. We further show that the explicit estimators have the usual root-n consistency and asymptotic normality. We explain the computational implementation of our method and illustrate the numerical performance of the estimators in finite sample situations via extensive simulation studies. The advantage of our estimators over the existing likelihood approach is also shown via numerical comparison. We employ the method to analyse a real data example from economics. © 2010 Royal Statistical Society.
Warren, Dan L; Seifert, Stephanie N
2011-03-01
Maxent, one of the most commonly used methods for inferring species distributions and environmental tolerances from occurrence data, allows users to fit models of arbitrary complexity. Model complexity is typically constrained via a process known as L1 regularization, but at present little guidance is available for setting the appropriate level of regularization, and the effects of inappropriately complex or simple models are largely unknown. In this study, we demonstrate the use of information criterion approaches to setting regularization in Maxent, and we compare models selected using information criteria to models selected using other criteria that are common in the literature. We evaluate model performance using occurrence data generated from a known "true" initial Maxent model, using several different metrics for model quality and transferability. We demonstrate that models that are inappropriately complex or inappropriately simple show reduced ability to infer habitat quality, reduced ability to infer the relative importance of variables in constraining species' distributions, and reduced transferability to other time periods. We also demonstrate that information criteria may offer significant advantages over the methods commonly used in the literature.
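The model selection by information criteria advocated in this abstract can be illustrated with the corrected AIC (AICc), which penalizes parameter count more heavily at small sample sizes. The log-likelihoods, parameter counts, and sample size below are invented for illustration; in practice they would come from fitted Maxent models (tools such as ENMTools compute AICc for Maxent output):

```python
def aicc(loglik, k, n):
    """Corrected Akaike Information Criterion; k = number of model
    parameters, n = number of occurrence records."""
    aic = 2 * k - 2 * loglik
    return aic + (2 * k * (k + 1)) / (n - k - 1)

# Hypothetical fits of Maxent-like models of increasing complexity:
candidates = [  # (name, log-likelihood, parameter count)
    ("linear",           -310.0,  5),
    ("linear+quadratic", -295.0, 12),
    ("all features",     -290.0, 35),
]
n = 60  # assumed number of occurrence records
scores = {name: aicc(ll, k, n) for name, ll, k in candidates}
best = min(scores, key=scores.get)
print(best)  # the mid-complexity model wins despite a lower likelihood
```

Note how the most complex model fits best in raw likelihood yet is rejected: its 35 parameters are too many for 60 records, which is exactly the over-complexity penalty the study exploits.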
Family system dynamics and type 1 diabetic glycemic variability: a vector-auto-regressive model.
Günther, Moritz Philipp; Winker, Peter; Böttcher, Claudia; Brosig, Burkhard
2013-06-01
Statistical approaches rooted in econometric methodology, so far foreign to the psychiatric and psychological realms, have provided exciting and substantial new insights into complex mind-body interactions over time and individuals. Over 120 days, this structured diary study explored the mutual interactions of emotions within a classic 3-person family system with its Type 1 diabetic adolescent's daily blood glucose variability. Glycemic variability was measured through daily standard deviations of blood glucose determinations (at least 3 per day). Emotions were captured individually using the self-assessment manikin on affective valence (negative-positive), activation (calm-excited), and control (dominated-dominant). Auto- and cross-correlating the stationary absolute (level) values of the mutually interacting parallel time series through vector autoregression (VAR, grounded in econometric theory) allowed for the formulation of 2 concordant models. Applying Cholesky impulse response analysis at a 95% confidence interval, we provided evidence that an adolescent who is happy, calm, and in control exhibits less glycemic variability and hence less diabetic derailment. A nondominating mother and a happy father also seemed to reduce glycemic variability. Random shocks increasing glycemic variability affected only the adolescent and her father: in 1 model, the male parent felt in charge; in the other, he calmed down while his daughter turned sad. All reactions to external shocks lasted for less than 4 full days. Extant literature on affect and glycemic variability in Type 1 diabetic adolescents, as well as challenges arising from introducing econometric theory to the field, are discussed.
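The econometric machinery involved is a standard vector autoregression: each series is regressed on lagged values of all series. A minimal sketch of a two-variable VAR(1) fitted by least squares (simulated data and pure-Python normal equations, not the authors' 3-person model):

```python
import random

def fit_var1(series):
    """Least-squares fit of a VAR(1) model x_t = c + A x_{t-1} + e_t.
    `series` is a list of k-dimensional observations; returns (c, A)."""
    k = len(series[0])
    # regressors: [1, x_{t-1}]; one OLS regression per equation
    X = [[1.0] + series[t - 1] for t in range(1, len(series))]
    params = []
    for eq in range(k):
        y = [series[t][eq] for t in range(1, len(series))]
        params.append(_lstsq(X, y))
    return [p[0] for p in params], [p[1:] for p in params]

def _lstsq(X, y):
    # solve the normal equations (X'X) b = X'y by Gaussian elimination
    m = len(X[0])
    G = [[sum(r[i] * r[j] for r in X) for j in range(m)] for i in range(m)]
    h = [sum(r[i] * yi for r, yi in zip(X, y)) for i in range(m)]
    for i in range(m):
        for j in range(i + 1, m):
            f = G[j][i] / G[i][i]
            for l in range(m):
                G[j][l] -= f * G[i][l]
            h[j] -= f * h[i]
    b = [0.0] * m
    for i in reversed(range(m)):
        b[i] = (h[i] - sum(G[i][j] * b[j] for j in range(i + 1, m))) / G[i][i]
    return b

# Simulate a toy 2-variable system (say, glycemic variability vs. mood):
random.seed(1)
true_A = [[0.5, 0.2], [0.1, 0.4]]
x = [[0.0, 0.0]]
for _ in range(5000):
    prev = x[-1]
    x.append([sum(a * p for a, p in zip(row, prev)) + random.gauss(0, 0.1)
              for row in true_A])
c, A = fit_var1(x)
print([[round(v, 1) for v in row] for row in A])  # close to true_A
```

Impulse response analysis, as used in the study, then traces how a one-off shock to one equation propagates through A over subsequent periods.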
Complex Modeling - SAHG | LSDB Archive [Life Science Database Archive metadata]
Data name: Complex Modeling. DOI: 10.18908/lsdba.nbdc01193-003. Description of data contents: protein-protein complex modeling prediction. Data file: sahg_complex.zip (147 KB), ftp://ftp.biosciencedbc.jp/archive/sahg/LATEST/sahg_complex.zip. Simple search URL: http://togodb.biosciencedbc.jp/togodb/view/sahg_complex#en. Data acquisition method: if a target sequence was related to a given subunit of a template complex in the PQS database with >=80%
Predictive modeling and reducing cyclic variability in autoignition engines
Hellstrom, Erik; Stefanopoulou, Anna; Jiang, Li; Larimore, Jacob
2016-08-30
Methods and systems are provided for controlling a vehicle engine to reduce cycle-to-cycle combustion variation. A predictive model is applied to predict cycle-to-cycle combustion behavior of an engine based on observed engine performance variables. Conditions are identified, based on the predicted cycle-to-cycle combustion behavior, that indicate high cycle-to-cycle combustion variation. Corrective measures are then applied to prevent the predicted high cycle-to-cycle combustion variation.
Random spatial processes and geostatistical models for soil variables
Lark, R. M.
2009-04-01
Geostatistical models of soil variation have been used to considerable effect to facilitate efficient and powerful prediction of soil properties at unsampled sites or over partially sampled regions. Geostatistical models can also be used to investigate the scaling behaviour of soil process models, to design sampling strategies and to account for spatial dependence in the random effects of linear mixed models for spatial variables. However, most geostatistical models (variograms) are selected for reasons of mathematical convenience (in particular, to ensure positive definiteness of the corresponding variables). They assume some underlying spatial mathematical operator which may give a good description of observed variation of the soil, but which may not relate in any clear way to the processes that we know give rise to that observed variation in the real world. In this paper I shall argue that soil scientists should pay closer attention to the underlying operators in geostatistical models, with a view to identifying, wherever possible, operators that reflect our knowledge of processes in the soil. I shall illustrate how this can be done in the case of two problems. The first exemplar problem is the definition of operators to represent statistically processes in which the soil landscape is divided into discrete domains. This may occur at disparate scales, from the landscape (outcrops, catchments, fields with different land use) to the soil core (aggregates, rhizospheres). The operators that underlie standard geostatistical models of soil variation typically describe continuous variation, and so do not offer any way to incorporate information on processes which occur in discrete domains. I shall present the Poisson Voronoi Tessellation as an alternative spatial operator, examine its corresponding variogram, and apply these to some real data. The second exemplar problem arises from different operators that are equifinal with respect to the variograms of the
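The Poisson Voronoi idea can be sketched numerically: scatter random seed points, assign each Voronoi cell an independent value (a "mosaic" random field), and estimate the variogram by sampling point pairs at a given separation. All parameters below are illustrative, not from the paper:

```python
import random, math

def mosaic_value(seeds, values, p):
    """Value of the mosaic field at point p: the value attached to the
    Voronoi cell of the nearest seed point."""
    d2 = [(p[0] - s[0])**2 + (p[1] - s[1])**2 for s in seeds]
    return values[d2.index(min(d2))]

random.seed(0)
n = 50  # Poisson process approximated by n uniform seeds on the unit square
seeds = [(random.random(), random.random()) for _ in range(n)]
values = [random.gauss(0, 1) for _ in range(n)]

def variogram(h, m=2000):
    """Monte Carlo estimate of gamma(h) = 0.5 E[(Z(x) - Z(x+h))^2]."""
    acc = 0.0
    for _ in range(m):
        x, y = random.random(), random.random()
        ang = random.uniform(0, 2 * math.pi)
        z1 = mosaic_value(seeds, values, (x, y))
        z2 = mosaic_value(seeds, values,
                          (x + h * math.cos(ang), y + h * math.sin(ang)))
        acc += 0.5 * (z1 - z2)**2
    return acc / m

for h in (0.01, 0.05, 0.2):
    print(round(variogram(h), 2))  # rises toward the sill as h grows
```

For a mosaic field the variogram is the sill times the probability that the two points fall in different cells, which is the process-based interpretation the paper argues for.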
Marwaha, Puneeta; Sunkaria, Ramesh Kumar
2016-09-01
The sample entropy (SampEn) has been widely used to quantify the complexity of RR-interval time series; higher complexity, and hence entropy, is associated with the RR-interval time series of healthy subjects. However, SampEn suffers from the disadvantage that it assigns higher entropy to randomized surrogate time series, as well as to certain pathological time series, which is a misleading observation. This wrong estimation of the complexity of a time series may be due to the fact that the existing SampEn technique updates the threshold value as a function of the long-term standard deviation (SD) of a time series. Time series of certain pathologies, however, exhibit substantial variability in beat-to-beat fluctuations, so the SD of the first-order differences (short-term SD) of the time series should be considered when updating the threshold value, to account for the period-to-period variations inherent in a time series. In the present work, improved sample entropy (I-SampEn), a new methodology, is proposed in which the threshold value is updated by considering the period-to-period variations of a time series. The I-SampEn technique assigns higher entropy values to age-matched healthy subjects than to patients suffering from atrial fibrillation (AF) or diabetes mellitus (DM). Our results are in agreement with the theory of reduced complexity of the RR-interval time series in patients suffering from chronic cardiovascular and non-cardiovascular diseases.
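The proposed modification can be sketched as follows: the tolerance r is derived from the SD of the first-order differences rather than the long-term SD. The 0.2 multiplier and the slightly simplified template counting below are conventional assumptions, not necessarily the paper's exact parameterization:

```python
import math, random

def sampen(x, m=2, r=None, short_term=True):
    """Sample entropy of series x with template length m. If r is not
    given, it is set to 0.2 * SD, where SD is taken over the first-order
    differences (short_term=True, the I-SampEn-style choice) or over the
    raw series (short_term=False, classic SampEn)."""
    n = len(x)
    if r is None:
        d = [x[i + 1] - x[i] for i in range(n - 1)] if short_term else x
        mu = sum(d) / len(d)
        r = 0.2 * math.sqrt(sum((v - mu)**2 for v in d) / len(d))
    def matches(mm):
        # count template pairs whose Chebyshev distance is within r
        c = 0
        for i in range(n - mm):
            for j in range(i + 1, n - mm):
                if max(abs(x[i + k] - x[j + k]) for k in range(mm)) <= r:
                    c += 1
        return c
    b, a = matches(m), matches(m + 1)
    return -math.log(a / b) if a and b else float("inf")

# A regular (deterministic) series should score lower than white noise:
random.seed(0)
regular = [math.sin(0.5 * i) for i in range(200)]
noisy = [random.gauss(0, 1) for _ in range(200)]
print(sampen(regular) < sampen(noisy))  # -> True
```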
Zapata-Fonseca, Leonardo; Dotov, Dobromir; Fossion, Ruben; Froese, Tom
2016-01-01
There is a growing consensus that a fuller understanding of social cognition depends on more systematic studies of real-time social interaction. Such studies require methods that can deal with the complex dynamics taking place at multiple interdependent temporal and spatial scales, spanning sub-personal, personal, and dyadic levels of analysis. We demonstrate the value of adopting an extended multi-scale approach by re-analyzing movement time-series generated in a study of embodied dyadic interaction in a minimal virtual reality environment (a perceptual crossing experiment). Reduced movement variability revealed an interdependence between social awareness and social coordination that cannot be accounted for by either subjective or objective factors alone: it picks out interactions in which subjective and objective conditions are convergent (i.e., elevated coordination is perceived as clearly social, and impaired coordination is perceived as socially ambiguous). This finding is consistent with the claim that interpersonal interaction can be partially constitutive of direct social perception. Clustering statistics (Allan Factor) of salient events revealed fractal scaling. Complexity matching defined as the similarity between these scaling laws was significantly more pronounced in pairs of participants as compared to surrogate dyads. This further highlights the multi-scale and distributed character of social interaction and extends previous complexity matching results from dyadic conversation to non-verbal social interaction dynamics. Trials with successful joint interaction were also associated with an increase in local coordination. Consequently, a local coordination pattern emerges on the background of complex dyadic interactions in the PCE task and makes joint successful performance possible. PMID:28018274
Leonardo Zapata-Fonseca
2016-12-01
There is a growing consensus that a fuller understanding of social cognition depends on more systematic studies of real-time social interaction. Such studies require methods that can deal with the complex dynamics taking place at multiple interdependent temporal and spatial scales, spanning sub-personal, personal, and dyadic levels of analysis. We demonstrate the value of adopting an extended multi-scale approach by re-analyzing movement time series generated in a study of embodied dyadic interaction in a minimal virtual reality environment (a perceptual crossing experiment). Reduced movement variability revealed an interdependence between social awareness and social coordination that cannot be accounted for by either subjective or objective factors alone: it picks out interactions in which subjective and objective conditions are convergent (i.e., elevated coordination is perceived as clearly social, and impaired coordination is perceived as socially ambiguous). This finding is consistent with the claim that interpersonal interaction can be partially constitutive of direct social perception. Clustering statistics (Allan Factor) of salient events revealed fractal scaling. Complexity matching, defined as the similarity between these scaling laws, was significantly more pronounced in pairs of participants as compared to surrogate dyads. This further highlights the multi-scale and distributed character of social interaction and extends previous complexity matching results from dyadic conversation to nonverbal social interaction dynamics. Trials with successful joint interaction were also associated with an increase in local coordination. Consequently, a local coordination pattern emerges on the background of complex dyadic interactions in the PCE task and makes successful joint performance possible.
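The Allan Factor statistic used in these two studies compares event counts in adjacent windows: for a fractal point process it grows as a power law with window size, whereas for a Poisson process it stays near one at all scales. A minimal sketch (synthetic events, not the study's data):

```python
import random

def allan_factor(events, T, duration):
    """Allan Factor of a point process: count events N_i in contiguous
    windows of length T, then AF(T) = E[(N_{i+1} - N_i)^2] / (2 E[N])."""
    nwin = int(duration // T)
    counts = [0] * nwin
    for t in events:
        i = int(t // T)
        if i < nwin:
            counts[i] += 1
    mean = sum(counts) / nwin
    diff2 = [(counts[i + 1] - counts[i])**2 for i in range(nwin - 1)]
    return sum(diff2) / (len(diff2) * 2 * mean)

# A homogeneous Poisson-like event train: AF(T) stays near 1 at all scales;
# clustered (fractal) event trains would show AF growing with T.
random.seed(2)
events = sorted(random.uniform(0, 1000) for _ in range(5000))
for T in (1.0, 10.0):
    print(round(allan_factor(events, T, 1000), 2))  # both near 1
```

Fractal scaling is then diagnosed by fitting a power law AF(T) ~ (T/T0)^alpha over a range of T, and "complexity matching" compares the fitted exponents between the two members of a dyad.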
Connolly, Joseph W.; Friedlander, David; Kopasakis, George
2015-01-01
This paper covers the development of an integrated nonlinear dynamic simulation for a variable cycle turbofan engine and nozzle that can be integrated with an overall vehicle Aero-Propulso-Servo-Elastic (APSE) model. A previously developed variable cycle turbofan engine model is used for this study and is enhanced here to include variable guide vanes allowing for operation across the supersonic flight regime. The primary focus of this study is to improve the fidelity of the model's thrust response by replacing the simple choked flow equation convergent-divergent nozzle model with a MacCormack method based quasi-1D model. The dynamic response of the nozzle model using the MacCormack method is verified by comparing it against a model of the nozzle using the conservation element/solution element method. A methodology is also presented for the integration of the MacCormack nozzle model with the variable cycle engine.
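The MacCormack method referred to above is a standard explicit predictor-corrector scheme. As a self-contained illustration of the scheme itself, applied to linear advection with periodic boundaries rather than the paper's quasi-1D nozzle Euler equations:

```python
import math

def maccormack_advection(u, a, dx, dt, steps):
    """MacCormack predictor-corrector for u_t + a u_x = 0 (periodic grid).
    The nozzle model applies the same two-step scheme to the quasi-1D
    Euler equations; linear advection keeps this illustration short."""
    n = len(u)
    c = a * dt / dx  # CFL number, must be <= 1 for stability
    for _ in range(steps):
        # predictor: forward difference
        up = [u[i] - c * (u[(i + 1) % n] - u[i]) for i in range(n)]
        # corrector: backward difference on the predicted values
        u = [0.5 * (u[i] + up[i] - c * (up[i] - up[i - 1])) for i in range(n)]
    return u

# Advect a sine wave once around a periodic unit domain at CFL = 0.5:
n, a = 100, 1.0
dx = 1.0 / n
dt = 0.5 * dx / a
u0 = [math.sin(2 * math.pi * i * dx) for i in range(n)]
u1 = maccormack_advection(u0, a, dx, dt, int(1.0 / dt))
err = max(abs(x - y) for x, y in zip(u0, u1))
print(err < 0.05)  # second-order accurate: small phase/amplitude error
```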
A MAD model for gamma-ray burst variability
Lloyd-Ronning, Nicole M.; Dolence, Joshua C.; Fryer, Christopher L.
2016-09-01
We present a model for the temporal variability of long gamma-ray bursts (GRBs) during the prompt phase (the highly variable first 100 s or so), in the context of a magnetically arrested disc (MAD) around a black hole. In this state, sufficient magnetic flux is held on to the black hole such that it stalls the accretion near the inner region of the disc. The system transitions in and out of the MAD state, which we relate to the variable luminosity of the GRB during the prompt phase, with a characteristic time-scale defined by the free-fall time in the region over which the accretion is arrested. We present simple analytic estimates of the relevant energetics and time-scales, and compare them to GRB observations. In particular, we show how this model can reproduce the characteristic one second time-scale that emerges from various analyses of the prompt emission light curve. We also discuss how our model can accommodate the potentially physically important correlation between a burst quiescent time and the duration of its subsequent pulse.
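The characteristic time-scale in this picture is the free-fall time, t_ff ~ sqrt(R^3 / GM). A back-of-envelope sketch (the black hole mass and the radii are assumed for illustration, not taken from the paper) shows that second-scale variability corresponds to arresting accretion at radii of order 10^9 cm:

```python
import math

G = 6.674e-8      # gravitational constant, cgs
MSUN = 1.989e33   # solar mass, g

def t_ff(r_cm, m_g):
    """Free-fall time t_ff ~ sqrt(r^3 / (G M)) at radius r around mass M."""
    return math.sqrt(r_cm**3 / (G * m_g))

M = 3 * MSUN  # assumed few-solar-mass GRB central engine
for r in (1e7, 1e8, 1e9):
    print("r = %.0e cm: t_ff = %.2e s" % (r, t_ff(r, M)))
```

Near the horizon (~10^7 cm) the free-fall time is milliseconds; the ~1 s timescale of prompt-emission pulses emerges only if the accretion is arrested over a region extending to ~10^9 cm.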
A MAD Model for Gamma-Ray Burst Variability
,
2016-01-01
We present a model for the temporal variability of long gamma-ray bursts during the prompt phase (the highly variable first 100 seconds or so), in the context of a magnetically arrested disk (MAD) around a black hole. In this state, sufficient magnetic flux is held on to the black hole such that it stalls the accretion near the inner region of the disk. The system transitions in and out of the MAD state, which we relate to the variable luminosity of the GRB during the prompt phase, with a characteristic timescale defined by the free fall time in the region over which the accretion is arrested. We present simple analytic estimates of the relevant energetics and timescales, and compare them to gamma-ray burst observations. In particular, we show how this model can reproduce the characteristic one second time scale that emerges from various analyses of the prompt emission light curve. We also discuss how our model can accommodate the potentially physically important correlation between a burst quiescent time and...
Attributing Sources of Variability in Regional Climate Model Experiments
Kaufman, C. G.; Sain, S. R.
2008-12-01
Variability in regional climate model (RCM) projections may be due to a number of factors, including the choice of RCM itself, the boundary conditions provided by a driving general circulation model (GCM), and the choice of emission scenario. We describe a new statistical methodology, Gaussian Process ANOVA, which allows us to decompose these sources of variability while also taking account of correlations in the output across space. Our hierarchical Bayesian framework easily allows joint inference about high probability envelopes for the functions, as well as decompositions of total variance that vary over the domain of the functions. These may be used to create maps illustrating the magnitude of each source of variability across the domain of the regional model. We use this method to analyze temperature and precipitation data from the Prudence Project, an RCM intercomparison project in which RCMs were crossed with GCM forcings and scenarios in a designed experiment. This work was funded by the North American Regional Climate Change Assessment Program (NARCCAP).
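The variance decomposition at the heart of this method can be illustrated, in a drastically simplified non-spatial form, by a classical two-way ANOVA at a single grid point (the table of values is invented for illustration; the paper's Gaussian Process ANOVA additionally models spatial correlation):

```python
def anova2(table):
    """Two-way ANOVA decomposition for a complete RCM x GCM design with
    one value per cell: returns sums of squares for the RCM factor, the
    GCM factor, and the interaction/residual."""
    R, C = len(table), len(table[0])
    grand = sum(sum(row) for row in table) / (R * C)
    rmean = [sum(row) / C for row in table]
    cmean = [sum(table[i][j] for i in range(R)) / R for j in range(C)]
    ss_rcm = C * sum((m - grand)**2 for m in rmean)
    ss_gcm = R * sum((m - grand)**2 for m in cmean)
    ss_tot = sum((table[i][j] - grand)**2
                 for i in range(R) for j in range(C))
    return ss_rcm, ss_gcm, ss_tot - ss_rcm - ss_gcm

# Hypothetical temperature change (K) from 3 RCMs crossed with 2 GCM forcings:
table = [[2.1, 2.9],
         [2.3, 3.0],
         [1.6, 2.5]]
rcm, gcm, inter = anova2(table)
print(round(rcm, 2), round(gcm, 2), round(inter, 2))  # -> 0.39 0.96 0.01
```

Here most of the spread is attributable to the driving GCM; repeating the decomposition at every grid cell yields the maps of source-of-variability magnitude described in the abstract.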
Liang, Liyin L.; Riveros-Iregui, Diego A.; Risk, David A.
2016-09-01
Biogeochemical processes driving the spatial variability of soil CO2 production and flux are well studied, but little is known about the variability in the spatial distribution of the stable carbon isotopes that make up soil CO2, particularly in complex terrain. Spatial differences in stable isotopes of soil CO2 could indicate fundamental differences in isotopic fractionation at the landscape level and may be useful to inform modeling of carbon cycling over large areas. We measured the spatial and seasonal variabilities of the δ13C of soil CO2 (δS) and the δ13C of soil CO2 flux (δP) in a subalpine forest ecosystem located in the Rocky Mountains of Montana. We found consistently more isotopically depleted values of δS and δP in low and wet areas of the landscape relative to steep and dry areas. Our results suggest that the spatial patterns of δS and δP are strongly mediated by soil water and soil respiration rate. More interestingly, our analysis revealed different temporal trends in δP across the landscape; in high landscape positions δP became more positive, whereas in low landscape positions δP became more negative with time. These trends might be the result of differential dynamics in the seasonality of soil moisture and its effects on soil CO2 production and flux. Our results suggest concomitant yet independent effects of water on physical (soil gas diffusivity) and biological (photosynthetic discrimination) processes that mediate δS and δP and are important when evaluating the δ13C of CO2 exchanged between soils and the atmosphere in complex terrain.
Modeling competitive substitution in a polyelectrolyte complex
Peng, B.; Muthukumar, M., E-mail: muthu@polysci.umass.edu [Department of Polymer Science and Engineering, University of Massachusetts Amherst, Amherst, Massachusetts 01003 (United States)
2015-12-28
We have simulated the invasion of a polyelectrolyte complex made of a polycation chain and a polyanion chain by another, longer polyanion chain, using the coarse-grained united atom model for the chains and the Langevin dynamics methodology. Our simulations reveal many intricate details of the substitution reaction in terms of conformational changes of the chains and competition between the invading chain and the chain being displaced for the common complementary chain. We show that the invading chain must be sufficiently longer than the chain being displaced to effect the substitution. Yet, making the invading chain longer than a certain threshold value does not reduce the substitution time much further. While most of the simulations were carried out in salt-free conditions, we show that the presence of salt facilitates the substitution reaction and reduces the substitution time. Analysis of our data shows that the dominant driving force for the substitution process involving polyelectrolytes lies in the release of counterions during the substitution.
Complex networks repair strategies: Dynamic models
Fu, Chaoqi; Wang, Ying; Gao, Yangjun; Wang, Xiaoyang
2017-09-01
Network repair strategies are tactical methods that restore the efficiency of damaged networks; however, poorly chosen repair strategies not only waste resources but are also ineffective for network recovery. Most extant research on network repair focuses on static networks, but results and findings for static networks cannot be applied to evolutionary dynamic networks because, in dynamic models, complex network repair has completely different characteristics. For instance, repaired nodes face more severe challenges and require strategic repair methods in order to have a significant effect. In this study, we propose the Shell Repair Strategy (SRS) to minimize the risk of secondary node failures due to the cascading effect. Our proposed method includes the identification of a set of vital nodes that have a significant impact on network repair and defense. Identifying these vital nodes reduces the number of switching nodes that face the risk of secondary failures during the dynamic repair process. This effect is positively correlated with the average degree of the network and enhances network invulnerability.
Modeling KIC10684673 and KIC12216817 as Single Pulsating Variables
Turner, Garrison
2016-01-01
The raw light curves of both KIC 10684673 and KIC 12216817 show variability. Both are listed in the Kepler Eclipsing Binary Catalog (hereafter KEBC); however, both are flagged as uncertain in nature. In the present study we show that their light curves can be modeled by considering each target as a single, multi-modal delta Scuti pulsator. While this does not exclude the possibility that they are eclipsing systems, we argue that, as long as spectroscopy of these systems is lacking, the delta Scuti model is the simpler explanation and therefore the more probable one.
Viscous Dark Energy Models with Variable G and Λ
Arbab I. Arbab
2008-01-01
We consider a cosmological model with bulk viscosity η, a variable cosmological constant Λ ∝ ρ^(-α), α = const, and a variable gravitational constant G. The model exhibits many interesting cosmological features. Inflation proceeds due to the presence of bulk viscosity and dark energy without requiring the equation of state p = -ρ. During the inflationary era the energy density ρ does not remain constant, as it does in the de Sitter case. Moreover, the cosmological and gravitational constants increase exponentially with time, whereas the energy density and viscosity decrease exponentially with time. The rate of mass creation during inflation is found to be enormous, suggesting that all matter in the universe is created during inflation.
Comparison of multiaxial fatigue damage models under variable amplitude loading
Chen, Hong; Shang, De Guang; Tian, Yu Jie [Beijing Univ. of Technology, Beijing (China); Liu, Jian Zhong [Beijing Institute of Aeronautical Materials, Beijing (China)
2012-11-15
Based on the Wang-Brown cycle counting method and on Miner's linear damage accumulation rule, four multiaxial fatigue damage models without any weight factors, proposed by Pan et al., Varvani-Farahani, Shang and Wang, and Shang et al., are used to compute fatigue damage. The procedure is evaluated using low-cycle fatigue experimental data for 7050-T7451 aluminum alloy and En15R steel under tension/torsion variable amplitude loading. The results reveal that the procedure is convenient for engineering design and application, and that the four multiaxial fatigue damage models provide good life estimates.
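Miner's linear accumulation rule, the common backbone of the four models, is simple to state: damage is the sum of applied cycles over cycles-to-failure across the counted blocks, with failure predicted when the sum reaches one. The cycle blocks below are hypothetical, standing in for the output of a counting method such as Wang-Brown:

```python
def miner_damage(blocks):
    """Linear (Miner) damage accumulation: D = sum(n_i / N_i) over counted
    cycle blocks (n_i applied cycles, N_i cycles to failure at that
    amplitude); failure is predicted when D reaches 1."""
    return sum(n / N for n, N in blocks)

# Hypothetical counted blocks: (applied cycles, cycles-to-failure)
blocks = [(2.0e4, 1.0e5), (5.0e3, 2.0e4), (1.0e3, 8.0e3)]
D = miner_damage(blocks)
print(round(D, 3))            # -> 0.575 damage per load-history repeat
print(round(1.0 / D, 1))      # -> 1.7 repeats predicted to failure
```

Each multiaxial model differs only in how it maps a counted cycle to an equivalent damage parameter (and hence to N_i); the accumulation step above is shared.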
Modeling Surgery: A New Way Toward Understanding Earth Climate Variability
WU Lixin; LIU Zhengyu; Robert Gallimore; Michael Notaro; Robert Jacob
2005-01-01
A new modeling concept, referred to as Modeling Surgery, has been recently developed at University of Wisconsin-Madison. It is specifically designed to diagnose coupled feedbacks between different climate components as well as climatic teleconnections within a specific component through systematically modifying the coupling configurations and teleconnective pathways. It thus provides a powerful means for identifying the causes and mechanisms of low-frequency variability in the Earth's climate system. In this paper, we will give a short review of our recent progress in this new area.
A model for Faraday pilot-waves over variable topography
Faria, Luiz
2016-11-01
In 2005 Yves Couder and co-workers discovered that droplets walking on a vibrating bath possess certain features previously thought to be exclusive to quantum systems. These millimetric droplets synchronize with their Faraday wavefield, creating a macroscopic pilot-wave system. In this talk we exploit the fact that the waves generated are nearly monochromatic and propose a hydrodynamic model capable of capturing the interaction between bouncing drops and variable topography. We show that our model is able to reproduce some important experiments involving the drop-topography interaction, such as non-specular reflection and single-slit diffraction.
A model for Faraday pilot waves over variable topography
Faria, Luiz M.
2017-01-01
Couder and Fort discovered that droplets walking on a vibrating bath possess certain features previously thought to be exclusive to quantum systems. These millimetric droplets synchronize with their Faraday wavefield, creating a macroscopic pilot-wave system. In this paper we exploit the fact that the waves generated are nearly monochromatic and propose a hydrodynamic model capable of quantitatively capturing the interaction between bouncing drops and a variable topography. We show that our reduced model is able to reproduce some important experiments involving the drop-topography interaction, such as non-specular reflection and single-slit diffraction.
The Key Variables for the Development of a Care Model for Stroke
Stavrianopoulos T.
2011-10-01
Introduction: Stroke is a major cause of death, of threatened and reduced health, and of patient dependence on support after the acute phase. The increase in knowledge of neurological recovery after a stroke has led to new treatment strategies, in which the physical environment and rehabilitation are as important as the medical treatment. It is crucial that the whole stroke team is involved in assessing, planning, and evaluating the care provided. Aim: To present the variables needed for the development of a general model of care for stroke. Material and Methods: Electronic databases (MEDLINE, CINAHL) were searched for a review of the international literature to 2009, and a selection of books, articles, and studies was made from libraries. The search was conducted in December 2010. Results: The key variables for developing a model of care are: care planning, team culture, care culture, professional knowledge, quality of space, observation and assessment, patient participation, and inter-professional teamwork. Conclusions: The model presents stroke care as a complex system, with many feedback relationships between the key variables for care. The development of the model, building on the existing literature, enables further testing in practice, improvements in stroke care, and further refinement of the variables included in the model of care.
Sensitivity Analysis of the ALMANAC Model's Input Variables
XIE Yun; James R. Kiniry; Jimmy R. Williams; CHEN You-min; LIN Er-da
2002-01-01
Crop models often require extensive input data sets to realistically simulate crop growth. Development of such input data sets can be difficult for some model users. The objective of this study was to evaluate the importance of variables in input data sets for crop modeling. Based on published hybrid performance trials in eight Texas counties, we developed standard data sets of 10-year simulations of maize and sorghum for these eight counties with the ALMANAC (Agricultural Land Management Alternatives with Numerical Assessment Criteria) model. The simulation results were close to the measured county yields, with relative errors of only 2.6% for maize and -0.6% for sorghum. We then analyzed the sensitivity of grain yield to solar radiation, rainfall, soil depth, soil plant available water, and runoff curve number, comparing simulated yields to those with the original, standard data sets. Runoff curve number changes had the greatest impact on simulated maize and sorghum yields for all the counties. The next most critical input was rainfall, followed by solar radiation, for both maize and sorghum, especially under the dryland condition. For irrigated sorghum, solar radiation was the second most critical input instead of rainfall. The degree of sensitivity of yield to all variables was larger for maize than for sorghum, except for solar radiation. Many models use a USDA curve number approach to represent soil water redistribution, so it will be important to have accurate curve numbers, rainfall, and soil depth to realistically simulate yields.
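A one-at-a-time sensitivity screening of the kind described can be sketched as follows. Note that `simulate_yield` and its baseline inputs are hypothetical stand-ins for a crop model such as ALMANAC, not its actual interface, so the resulting ranking is purely illustrative and need not match the paper's.

```python
# One-at-a-time sensitivity screening sketch.
# `simulate_yield` is a hypothetical stand-in for a crop model such as ALMANAC.

def simulate_yield(inputs):
    # Toy response surface: yield rises with radiation/rainfall/soil water and
    # falls as the runoff curve number diverts more water away from the soil.
    return (0.004 * inputs["solar_radiation"]
            + 0.8 * inputs["rainfall"]
            + 0.5 * inputs["plant_available_water"]
            - 0.03 * inputs["curve_number"])

baseline = {"solar_radiation": 5500.0,       # MJ/m^2 per season (hypothetical)
            "rainfall": 600.0,               # mm
            "plant_available_water": 150.0,  # mm
            "curve_number": 80.0}            # dimensionless

def sensitivity(variable, perturbation=0.10):
    """Relative yield change for a +10% change in one input."""
    base_yield = simulate_yield(baseline)
    perturbed = dict(baseline)
    perturbed[variable] *= 1.0 + perturbation
    return (simulate_yield(perturbed) - base_yield) / base_yield

# Rank inputs by the magnitude of their effect on simulated yield:
for v in sorted(baseline, key=lambda v: abs(sensitivity(v)), reverse=True):
    print(f"{v}: {sensitivity(v):+.4f}")
```

The same loop generalizes to any number of inputs; comparing perturbed runs against a fixed baseline is exactly the yield-comparison scheme the study describes.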
Thiosemicarbazone complexes of the platinum metals. A story of variable coordination modes
Indrani Pal; Falguni Basuli; Samaresh Bhattacharya
2002-08-01
Salicylaldehyde thiosemicarbazone (H2saltsc) reacts with [M(PPh3)3X2] (M = Ru, Os; X = Cl, Br) to afford complexes of type [M(PPh3)2(Hsaltsc)2], in which the salicylaldehyde thiosemicarbazone ligand is coordinated to the metal as a bidentate N,S-donor forming a four-membered chelate ring. Reaction of benzaldehyde thiosemicarbazones (Hbztsc-R) with [M(PPh3)3X2] also affords complexes of similar type, viz. [M(PPh3)2(bztsc-R)2], in which the benzaldehyde thiosemicarbazones likewise coordinate the metal as a bidentate N,S-donor forming a four-membered chelate ring. Reaction of the Hbztsc-R ligands has also been carried out with [M(bpy)2X2] (M = Ru, Os; X = Cl, Br), affording complexes of type [M(bpy)2(bztsc-R)]+, which have been isolated as perchlorate salts. The coordination mode of bztsc-R has been found to be the same as before. The structure of the Hbztsc-OMe ligand has been determined, and some molecular modelling studies have been carried out to determine the reason for the observed mode of coordination. Reaction of acetone thiosemicarbazone (Hactsc) has then been carried out with [M(bpy)2X2] to afford the [M(bpy)2(actsc)]ClO4 complexes, in which the actsc ligand coordinates the metal as a bidentate N,S-donor forming a five-membered chelate ring. Reaction of H2saltsc has been carried out with [Ru(bpy)2Cl2] to prepare the [Ru(bpy)2(Hsaltsc)]ClO4 complex, which has then been reacted with one equivalent of nickel perchlorate to afford an octanuclear complex of type [{Ru(bpy)2(saltsc-H)}4Ni4](ClO4)4.
无
2008-01-01
The variation of exon 2 of the major histocompatibility complex (MHC) class II gene DRB locus in three feline species was examined in clouded leopard (Neofelis nebulosa), leopard (Panthera pardus) and Amur tiger (Panthera tigris altaica). A pair of degenerate primers was used to amplify the DRB locus covering almost the whole of exon 2. Exon 2 encodes the β1 domain, which is the most variable fragment of the MHC class II molecule. Single-strand conformational polymorphism (SSCP) analysis was applied to detect different MHC class II DRB haplotypes. Fifteen recombinant plasmids for each individual were screened out, isolated, purified and finally sequenced. In total, eight distinct haplotypes of exon 2 were obtained in four individuals. Within 237 bp of nucleotide sequence from the four samples, 30 variable positions were found, and 21 putative peptide-binding positions were disclosed in 79 amino acid residues. The ratio of nonsynonymous substitutions (dN) was much higher than that of synonymous substitutions (dS), indicating that balancing selection probably maintains the variation of exon 2. MEGA neighbor-joining (NJ) and PAUP maximum parsimony (MP) methods were used to reconstruct phylogenetic trees among the species. The results displayed a closer relationship between leopard and tiger, whereas clouded leopard has a comparatively distant relationship from the other two.
Ahmad, Syed Mudasir; Bhat, Farooz Ahmad; Balkhi, Masood-Ul Hassan; Bhat, Bilal Ahmad
2014-12-01
Despite numerous studies on the taxonomy of a highly complex group of schizothoracine fishes (snow trouts), with five recognized species from Kashmir, India (Schizothorax niger, Schizothorax esocinus, Schizothorax plagiostomus, Schizothorax curvifrons and Schizothorax labiatus) based on traditional morphological data, the relationships between these species are poorly understood and their taxonomic validity is still under debate. To resolve the evolutionary relationships among these species, we sequenced mitochondrial fragments, including 16S rRNA, Cytb and the D-loop. Separate analyses of 16S and Cytb showed intermixing of the species, and 16S was found to be more conserved than Cytb. The D-loop was found to be highly variable and showed length variation between and within species. Length variation was observed in di-nucleotide (TA)n microsatellite repeats with a variable number of repeat units (n = 7-14) that did not show heteroplasmy. Central conserved sequence blocks (CSBs) in the D-loop sequences were found to be comparable to those of other vertebrate species. All phylogenetic reconstructions recovered the focal taxa as a monophyletic clade within the schizothoracines. Analyses with combined mitochondrial data sets showed close genetic relationships among all five species. In addition to a close relationship between S. niger and S. curvifrons, two distinct groupings of S. esocinus and S. plagiostomus were supported by all the analyses. This study gives an insight into the molecular phylogeny of the species and improves our understanding of historical and taxonomic relationships derived from morphological and ecological studies.
Time-Variable Complex Metal Absorption Lines in the Quasar HS1603+3820
Misawa, Toru; Eracleous, Michael; Charlton, Jane C.; Tajitsu, Akito
2005-01-01
We present a new spectrum of the quasar HS1603+3820 taken 1.28 years (0.36 years in the quasar rest frame) after a previous observation with Subaru+HDS. The new spectrum enables us to search for time variability as an identifier of intrinsic narrow absorption lines (NALs). This quasar shows a rich complex of C IV NALs within 60,000 km/s of the emission redshift. Based on covering factor analysis, Misawa et al. found that the C IV NAL system at z_abs= 2.42--2.45 (System A, at a shift velocity of v_sh = 8,300--10,600 km/s relative to the quasar) was intrinsic to the quasar. With our new spectrum, we perform time variability analysis as well as covering factor analysis to separate intrinsic NALs from intervening NALs for 8 C IV systems. Only System A, which was identified as an intrinsic system in the earlier paper by Misawa et al., shows a strong variation in line strength (W_obs ~ 10.4A -> 19.1A). We speculate that a broad absorption line (BAL) could be forming in this quasar. We illustrate the plausibility of...
Disambiguating Seesaw Models using Invariant Mass Variables at Hadron Colliders
Dev, P S Bhupal; Mohapatra, Rabindra N
2015-01-01
We propose ways to distinguish between different mechanisms behind the collider signals of TeV-scale seesaw models for neutrino masses using kinematic endpoints of invariant mass variables. We particularly focus on two classes of such models widely discussed in literature: (i) Standard Model extended by the addition of singlet neutrinos and (ii) Left-Right Symmetric Models. Relevant scenarios involving the same "smoking-gun" collider signature of dilepton plus dijet with no missing transverse energy differ from one another by their event topology, resulting in distinctive relationships among the kinematic endpoints to be used for discerning them at hadron colliders. These kinematic endpoints are readily translated to the mass parameters of the on-shell particles through simple analytic expressions which can be used for measuring the masses of the new particles. A Monte Carlo simulation with detector effects is conducted to test the viability of the proposed strategy in a realistic environment. Finally, we dis...
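The endpoint analysis above rests on invariant masses built from reconstructed lepton and jet momenta. A minimal sketch, treating all final-state objects as massless and using made-up momenta (the values below are not from the paper):

```python
import math

def four_vector(pt, eta, phi):
    """(E, px, py, pz) for a massless object given collider coordinates."""
    return (pt * math.cosh(eta),   # E = |p| for a massless particle
            pt * math.cos(phi),
            pt * math.sin(phi),
            pt * math.sinh(eta))

def invariant_mass(*vectors):
    """Invariant mass of the system formed by summing the four-vectors."""
    e, px, py, pz = (sum(c) for c in zip(*vectors))
    m2 = e * e - px * px - py * py - pz * pz
    return math.sqrt(max(m2, 0.0))

# Hypothetical reconstructed momenta (GeV) for a dilepton + dijet event:
leptons = [four_vector(120.0, 0.5, 0.1), four_vector(80.0, -0.3, 2.5)]
jets = [four_vector(200.0, 1.0, -1.2), four_vector(150.0, -0.8, 1.9)]
print(f"m(ll)   = {invariant_mass(*leptons):.1f} GeV")
print(f"m(lljj) = {invariant_mass(*leptons, *jets):.1f} GeV")
```

Histogramming such masses over many events and locating the upper kinematic edges is what ties the observed endpoints to the on-shell particle masses.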
Modelling of W UMa-type variable stars
P. L. Skelton
2010-01-01
W Ursae Majoris (W UMa)-type variable stars are over-contact eclipsing binary stars. Understanding how these systems form and evolve requires observations spanning many years, followed by detailed models of as many of them as possible. The All Sky Automated Survey (ASAS) has an extensive database of these stars. Using the ASAS V band photometric data, models of W UMa-type stars are being created to determine the parameters of these stars. This paper discusses the classification of eclipsing binary stars and the methods used to model them, as well as the results of the modelling of ASAS 120036–3915.6, an over-contact eclipsing binary star that appears to be changing its period.
Variable structure control of nonlinear systems through simplified uncertain models
Sira-Ramirez, Hebertt
1986-01-01
A variable structure control approach is presented for the robust stabilization of feedback equivalent nonlinear systems whose proposed model lies in the same structural orbit of a linear system in Brunovsky's canonical form. An attempt to linearize exactly the nonlinear plant on the basis of the feedback control law derived for the available model results in a nonlinearly perturbed canonical system for the expanded class of possible equivalent control functions. Conservatism tends to grow as modeling errors become larger. In order to preserve the internal controllability structure of the plant, it is proposed that model simplification be carried out on the open-loop-transformed system. As an example, a controller is developed for a single link manipulator with an elastic joint.
Genuer, Robin; Toussile, Wilson
2011-01-01
Malaria control strategies aiming at reducing disease transmission intensity may impact both oocyst intensity and infection prevalence in the mosquito vector. Thus far, mathematical models have failed to identify a clear relationship between Plasmodium falciparum gametocytes and their infectiousness to mosquitoes. Natural isolates of gametocytes are genetically diverse and biologically complex. Infectiousness to mosquitoes relies on multiple parameters such as density, sex ratio, maturity, parasite genotypes and host immune factors. In this article, we investigated how the density and genetic diversity of gametocytes impact the success of transmission in the mosquito vector. We analyzed data for which the number of covariates plus attendant interactions is at least of the order of the sample size, precluding the use of classical models such as general linear models. We then considered variable importance from random forests to address the problem of selecting the most influential variables. The selected covariates were ...
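Variable importance of the random-forest kind can be illustrated with a generic permutation importance computed against any fitted predictor: shuffle one covariate at a time and measure how much the prediction error grows. The synthetic data and the least-squares stand-in model below are assumptions for illustration, not the authors' pipeline.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic data: only the first two of five covariates actually matter.
X = rng.normal(size=(500, 5))
y = 3.0 * X[:, 0] - 2.0 * X[:, 1] + rng.normal(scale=0.5, size=500)

# A stand-in "fitted model": least-squares linear predictor.
coef, *_ = np.linalg.lstsq(X, y, rcond=None)
predict = lambda M: M @ coef

def permutation_importance(X, y, predict, n_repeats=20):
    """Mean increase in MSE when each column is shuffled (higher = more important)."""
    base_mse = np.mean((y - predict(X)) ** 2)
    importances = np.zeros(X.shape[1])
    for j in range(X.shape[1]):
        for _ in range(n_repeats):
            Xp = X.copy()
            Xp[:, j] = rng.permutation(Xp[:, j])   # break column j's association with y
            importances[j] += np.mean((y - predict(Xp)) ** 2) - base_mse
    return importances / n_repeats

imp = permutation_importance(X, y, predict)
print("importances:", np.round(imp, 2))
```

Because shuffling works for any black-box predictor, the same scheme applies unchanged when the predictor is a random forest, which is the setting of the article.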
Barnett, Tony; Fournié, Guillaume; Gupta, Sunetra; Seeley, Janet
2015-01-01
Incorporation of 'social' variables into epidemiological models remains a challenge. Too much detail and models cease to be useful; too little and the very notion of infection - a highly social process in human populations - may be considered with little reference to the social. The French sociologist Émile Durkheim proposed that the scientific study of society required identification and study of 'social currents'. Such 'currents' are what we might today describe as 'emergent properties', specifiable variables appertaining to individuals and groups, which represent the perspectives of social actors as they experience the environment in which they live their lives. Here we review the ways in which one particular emergent property, hope, relevant to a range of epidemiological situations, might be used in epidemiological modelling of infectious diseases in human populations. We also indicate how such an approach might be extended to include a range of other potential emergent properties to represent complex social and economic processes bearing on infectious disease transmission.
Ghil, Michael; Thompson, Sylvester
2007-01-01
We consider a delay differential equation (DDE) model for El Niño-Southern Oscillation (ENSO) variability. The model combines two key mechanisms that participate in ENSO dynamics: delayed negative feedback and seasonal forcing. We perform stability analyses of the model in the three-dimensional space of its physically relevant parameters. Our results illustrate the role of these three parameters: strength of seasonal forcing b, atmosphere-ocean coupling κ, and propagation period τ of oceanic waves across the Tropical Pacific. Two regimes of variability, stable and unstable, are separated by a sharp neutral curve in the (b, τ) plane at constant κ. The detailed structure of the neutral curve becomes very irregular and possibly fractal, while individual trajectories within the unstable region become highly complex and possibly chaotic, as the atmosphere-ocean coupling κ increases. In the unstable regime, spontaneous transitions occur in the mean "temperature" (i.e., thermo...
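A delayed-oscillator DDE of this general type can be integrated with a simple Euler scheme and a history buffer for the delayed state. The functional form and parameter values below are illustrative assumptions, not the paper's calibration.

```python
import math

# Schematic ENSO-like delayed oscillator (illustrative form and parameters):
#     dh/dt = -tanh(kappa * h(t - tau)) + b * cos(2*pi*t)
b, kappa, tau = 1.0, 10.0, 0.5    # seasonal forcing, coupling, wave delay (years)
dt, t_end = 0.001, 10.0

n_delay = int(round(tau / dt))
n_steps = int(round(t_end / dt))
history = [0.1] * (n_delay + 1)   # constant initial history: h(t <= 0) = 0.1
h = history[-1]
for step in range(n_steps):
    t = step * dt
    h_delayed = history[-(n_delay + 1)]   # value n_delay steps back = h(t - tau)
    h += dt * (-math.tanh(kappa * h_delayed) + b * math.cos(2 * math.pi * t))
    history.append(h)

print(f"h({t_end:.0f}) = {h:.3f}")
```

Sweeping (b, τ) at fixed κ with such an integrator and classifying the resulting trajectories is how the stable/unstable regimes described above can be mapped numerically.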
Adaptation of endothelial cells to physiologically-modeled, variable shear stress.
Joseph S Uzarski
Endothelial cell (EC) function is mediated by variable hemodynamic shear stress patterns at the vascular wall, where complex shear stress profiles directly correlate with blood flow conditions that vary temporally based on metabolic demand. The interactions of these more complex and variable shear fields with EC have not been represented in hemodynamic flow models. We hypothesized that EC exposed to pulsatile shear stress that changes in magnitude and duration, modeled directly from real-time physiological variations in heart rate, would elicit phenotypic changes as relevant to their critical roles in thrombosis, hemostasis, and inflammation. Here we designed a physiological flow (PF) model based on short-term temporal changes in blood flow observed in vivo and compared it to static culture and steady flow (SF) at a fixed pulse frequency of 1.3 Hz. Results show significant changes in gene regulation as a function of temporally variable flow, indicating a reduced wound phenotype more representative of quiescence. EC cultured under PF exhibited significantly higher endothelial nitric oxide synthase (eNOS) activity (PF: 176.0±11.9 nmol/10^5 EC; SF: 115.0±12.5 nmol/10^5 EC, p = 0.002) and lower TNF-α-induced HL-60 leukocyte adhesion (PF: 37±6 HL-60 cells/mm^2; SF: 111±18 HL-60 cells/mm^2, p = 0.003) than cells cultured under SF, which is consistent with a more quiescent anti-inflammatory and anti-thrombotic phenotype. In vitro models have become increasingly adept at mimicking natural physiology and in doing so have clarified the importance of both chemical and physical cues that drive cell function. These data illustrate that the variability in metabolic demand and subsequent changes in perfusion resulting in constantly variable shear stress plays a key role in EC function that has not previously been described.
ORGANIZING SCENARIO VARIABLES BY APPLYING THE INTERPRETATIVE STRUCTURAL MODELING (ISM)
Daniel Estima de Carvalho
2009-10-01
The scenario building method is a mode of thought, applied in an optimized, strategic manner, based on trends and uncertain events concerning a large variety of potential results that may impact the future of an organization. In this study, the objective is to contribute towards a possible improvement of Godet's and Schoemaker's scenario preparation methods by employing Interpretative Structural Modeling (ISM) as a tool for the analysis of variables. Given that this is an exploratory theme, bibliographical research with tool definition and analysis, extraction of examples from the literature, and a comparison exercise of the referred methods were undertaken. It was verified that ISM may substitute or complement the original tools for the analysis of scenario variables in Godet's and Schoemaker's methods, given that it enables an in-depth analysis of the relations between variables in a shorter period of time, facilitating both the structuring and the construction of possible scenarios. Keywords: Strategy. Future studies. Interpretative Structural Modeling.
Using Models to Inform Policy: Insights from Modeling the Complexities of Global Polio Eradication
Thompson, Kimberly M.
Drawing on over 20 years of experience modeling risks in complex systems, this talk will challenge SBP participants to develop models that provide timely and useful answers to critical policy questions when decision makers need them. The talk will include reflections on the opportunities and challenges associated with developing integrated models for complex problems and communicating their results effectively. Dr. Thompson will focus the talk largely on collaborative modeling related to global polio eradication and the application of system dynamics tools. After successful global eradication of wild polioviruses, live polioviruses will still present risks that could potentially lead to paralytic polio cases. This talk will present the insights of efforts to use integrated dynamic, probabilistic risk, decision, and economic models to address critical policy questions related to managing global polio risks. Using a dynamic disease transmission model combined with probabilistic model inputs that characterize uncertainty for a stratified world to account for variability, we find that global health leaders will face some difficult choices, but that they can take actions that will manage the risks effectively. The talk will emphasize the need for true collaboration between modelers and subject matter experts, and the importance of working with decision makers as partners to ensure the development of useful models that actually get used.
Rind, D.; Suozzo, R.; Balachandran, N. K.
1988-01-01
The variability which arises in the GISS Global Climate-Middle Atmosphere Model on two time scales is reviewed: interannual standard deviations, derived from the five-year control run, and intraseasonal variability as exemplified by stratospheric warmings. The model's extratropical variability for both mean fields and eddy statistics appears reasonable when compared with observations, while the tropical wind variability near the stratopause may be excessive, possibly due to inertial oscillations. Both wave 1 and wave 2 warmings develop, with connections to tropospheric forcing. Variability on both time scales results from a complex set of interactions among planetary waves, the mean circulation, and gravity wave drag. Specific examples of these interactions are presented, which imply that variability in gravity wave forcing and drag may be an important component of the variability of the middle atmosphere.
GEOCHEMICAL MODELING OF F AREA SEEPAGE BASIN COMPOSITION AND VARIABILITY
Millings, M.; Denham, M.; Looney, B.
2012-05-08
From the 1950s through 1989, the F Area Seepage Basins at the Savannah River Site (SRS) received low level radioactive wastes resulting from processing nuclear materials. Discharges of process wastes to the F Area Seepage Basins followed by subsequent mixing processes within the basins and eventual infiltration into the subsurface resulted in contamination of the underlying vadose zone and downgradient groundwater. For simulating contaminant behavior and subsurface transport, a quantitative understanding of the interrelated discharge-mixing-infiltration system along with the resulting chemistry of fluids entering the subsurface is needed. An example of this need emerged as the F Area Seepage Basins was selected as a key case study demonstration site for the Advanced Simulation Capability for Environmental Management (ASCEM) Program. This modeling evaluation explored the importance of the wide variability in bulk wastewater chemistry as it propagated through the basins. The results are intended to generally improve and refine the conceptualization of infiltration of chemical wastes from seepage basins receiving variable waste streams and to specifically support the ASCEM case study model for the F Area Seepage Basins. Specific goals of this work included: (1) develop a technically-based 'charge-balanced' nominal source term chemistry for water infiltrating into the subsurface during basin operations, (2) estimate the nature of short term and long term variability in infiltrating water to support scenario development for uncertainty quantification (i.e., UQ analysis), (3) identify key geochemical factors that control overall basin water chemistry and the projected variability/stability, and (4) link wastewater chemistry to the subsurface based on monitoring well data. Results from this study provide data and understanding that can be used in further modeling efforts of the F Area groundwater plume. As identified in this study, key geochemical factors
Environmental vs. demographic variability in stochastic lattice predator-prey models
Tauber, Uwe C.
2014-03-01
In contrast to the neutral population cycles of the deterministic mean-field Lotka-Volterra rate equations, including spatial structure and stochastic noise in models for predator-prey interactions yields complex spatio-temporal structures associated with long-lived erratic population oscillations. Environmental variability in the form of quenched spatial randomness in the predation rates results in more localized activity patches. Population fluctuations in rare favorable regions in turn cause a remarkable increase in the asymptotic densities of both predators and prey. Very intriguing features are found when variable interaction rates are affixed to individual particles rather than lattice sites. Stochastic dynamics with demographic variability in conjunction with inheritable predation efficiencies generate non-trivial time evolution for the predation rate distributions, yet with overall essentially neutral optimization.
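A stochastic lattice predator-prey simulation of the kind described above can be sketched minimally in one dimension, with quenched environmental randomness entering through site-dependent predation rates. The update rules and rate values here are generic stochastic Lotka-Volterra choices for illustration, not the specific model of the paper.

```python
import random

random.seed(1)

# Minimal 1D stochastic lattice Lotka-Volterra sketch (illustrative rates).
# Site states: 0 = empty, 1 = prey, 2 = predator.
L, steps = 200, 200_000
mu, sigma = 0.1, 0.5                                 # predator death, prey birth
lam = [random.uniform(0.2, 1.0) for _ in range(L)]   # quenched predation rates

lattice = [random.choice([0, 1, 2]) for _ in range(L)]

for _ in range(steps):
    i = random.randrange(L)
    j = (i + random.choice([-1, 1])) % L             # random nearest neighbour
    if lattice[i] == 2:
        if random.random() < mu:
            lattice[i] = 0                           # predator death
        elif lattice[j] == 1 and random.random() < lam[j]:
            lattice[j] = 2                           # predation + reproduction
    elif lattice[i] == 1 and lattice[j] == 0 and random.random() < sigma:
        lattice[j] = 1                               # prey offspring into empty site

prey, predators = lattice.count(1), lattice.count(2)
print(f"prey density {prey / L:.2f}, predator density {predators / L:.2f}")
```

Recording the densities over time, rather than only at the end, exhibits the erratic population oscillations the abstract refers to; making `lam` a property of individual particles instead of sites gives the demographic-variability variant.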
Bayesian nonparametric centered random effects models with variable selection.
Yang, Mingan
2013-03-01
In a linear mixed effects model, it is common practice to assume that the random effects follow a parametric distribution such as a normal distribution with mean zero. However, in the case of variable selection, substantial violation of the normality assumption can potentially impact the subset selection and result in poor interpretation and even incorrect results. In nonparametric random effects models, the random effects generally have a nonzero mean, which causes an identifiability problem for the fixed effects that are paired with the random effects. In this article, we focus on a Bayesian method for variable selection. We characterize the subject-specific random effects nonparametrically with a Dirichlet process and resolve the bias simultaneously. In particular, we propose flexible modeling of the conditional distribution of the random effects with changes across the predictor space. The approach is implemented using a stochastic search Gibbs sampler to identify subsets of fixed effects and random effects to be included in the model. Simulations are provided to evaluate and compare the performance of our approach to the existing ones. We then apply the new approach to a real data example, cross-country and interlaboratory rodent uterotrophic bioassay.
Spatiotemporal Variability of Lake Water Quality in the Context of Remote Sensing Models
Carly Hyatt Hansen
2017-04-01
This study demonstrates a number of methods for using field sampling and observed lake characteristics and patterns to improve techniques for the development of algae remote sensing models and applications. As satellite and airborne sensors improve and their data become more readily available, applications of models to estimate water quality via remote sensing are becoming more practical for local water quality monitoring, particularly of surface algal conditions. Despite the increasing number of applications, there are significant concerns associated with remote sensing model development and application, several of which are addressed in this study. These concerns include: (1) selecting sensors which are suitable for the spatial and temporal variability in the water body; (2) determining appropriate uses of near-coincident data in empirical model calibration; and (3) recognizing potential limitations of remote sensing measurements which are biased toward surface and near-surface conditions. We address these issues in three lakes in the Great Salt Lake surface water system (namely the Great Salt Lake, Farmington Bay, and Utah Lake) through sampling at scales that are representative of commonly used sensors, repeated sampling, and sampling at both near-surface depths and throughout the water column. The variability across distances representative of the spatial resolutions of Landsat, SENTINEL-2 and MODIS sensors suggests that these sensors are appropriate for this lake system. We also use observed temporal variability in the system to evaluate sensors. These relationships proved to be complex, and observed temporal variability indicates the revisit time of Landsat may be problematic for detecting short events in some lakes, while it may be sufficient for other areas of the system with lower short-term variability. Temporal variability patterns in these lakes are also used to assess near-coincident data in empirical model development. Finally, relationships
Shared Variable Oriented Parallel Precompiler for SPMD Model
无
1995-01-01
At present, commercial parallel computer systems with distributed memory architecture are usually provided with parallel FORTRAN or parallel C compilers, which are just traditional sequential FORTRAN or C compilers expanded with communication statements. Programmers suffer from writing parallel programs with explicit communication statements. The Shared Variable Oriented Parallel Precompiler (SVOPP) proposed in this paper can automatically generate appropriate communication statements based on shared variables for the SPMD (Single Program Multiple Data) computation model, greatly easing parallel programming while maintaining high communication efficiency. The core function of the parallel C precompiler has been successfully verified on a transputer-based parallel computer. Its prominent performance shows that SVOPP is probably a breakthrough in parallel programming technique.
Crack simulation models in variable amplitude loading - a review
Luiz Carlos H. Ricardo
2016-02-01
This work presents a review of crack propagation simulation models considering plane stress and plane strain conditions. It also presents, in chronological order, the different methodologies used to perform the crack advance by the finite element method. Some procedures used to edit variable spectrum loading, and the effects during crack propagation processes, such as retardation, on the fatigue life of structures are discussed. Based on this work, there is no consensus in the scientific community on the best way to simulate crack propagation under variable spectrum loading, due to the combination of metallurgical and mechanical factors regarding, for example, how to select and edit the representative spectrum loading to be used in the crack propagation simulation.
Krautkramer, C.; Rend, R. R.
2014-12-01
Menstrual flow, which is a result of shedding of the uterus endometrium, occurs periodically in sync with a woman's hormonal cycle. Management of this flow while allowing women to pursue their normal daily lives is the purpose of many commercial products. Some of these products, e.g. feminine hygiene pads and tampons, utilize porous materials in achieving their goal. In this paper we will demonstrate different phenomena that have been observed in the flow of menstrual fluid through these porous materials, share some of the advances made in experimental and analytical study of these phenomena, and also present some of the unsolved challenges and difficulties encountered while studying this kind of flow. Menstrual fluid is generally composed of four main components: blood plasma, blood cells, cervical mucus, and tissue debris. This non-homogeneous, multiphase fluid displays very complex rheological behavior, e.g., yield stress, thixotropy, and visco-elasticity, that varies throughout and between menstrual cycles and among women due to various factors. Flow rates are also highly variable during menstruation and across the population, and the rheological properties of the fluid change during the flow into and through the product. In addition to these phenomena, changes to the structure of the porous medium within the product can also be seen due to fouling and/or swelling of the material. This paper will also share how the fluid components impact the flow, and the consequences for computer simulation, for the creation of a simulant fluid and testing methods, and for designing products that best meet consumer needs. We hope to bring to light the challenges of managing this complex flow to meet a basic need of women all over the world. An opportunity exists to apply learnings from research in other disciplines to improve the scientific knowledge related to the flow of this complex fluid through the porous medium that is a sanitary product.
Heart rate recovery after exercise: relations to heart rate variability and complexity.
Javorka, M; Zila, I; Balhárek, T; Javorka, K
2002-08-01
Physical exercise is associated with parasympathetic withdrawal and increased sympathetic activity resulting in heart rate increase. The rate of post-exercise cardiodeceleration is used as an index of cardiac vagal reactivation. Analysis of heart rate variability (HRV) and complexity can provide useful information about autonomic control of the cardiovascular system. The aim of the present study was to ascertain the association between heart rate decrease after exercise and HRV parameters. Heart rate was monitored in 17 healthy male subjects (mean age: 20 years) during the pre-exercise phase (25 min supine, 5 min standing), during exercise (8 min of the step test with an ascending frequency corresponding to 70% of individual maximal power output) and during the recovery phase (30 min supine). HRV analysis in the time and frequency domains and evaluation of a newly developed complexity measure - sample entropy - were performed on selected segments of heart rate time series. During recovery, heart rate decreased gradually but did not attain pre-exercise values within 30 min after exercise. On the other hand, HRV gradually increased, but did not regain rest values during the study period. Heart rate complexity was slightly reduced after exercise and attained rest values after 30-min recovery. The rate of cardiodeceleration did not correlate with pre-exercise HRV parameters, but positively correlated with HRV measures and sample entropy obtained from the early phases of recovery. In conclusion, the cardiodeceleration rate is independent of HRV measures during the rest period but it is related to early post-exercise recovery HRV measures, confirming a parasympathetic contribution to this phase.
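The sample entropy measure used in the study can be sketched with a direct, unoptimized implementation. One simplifying assumption: the tolerance r is taken here as an absolute value, whereas applications typically scale it by the standard deviation of the series.

```python
import math
import random

def sample_entropy(series, m=2, r=0.2):
    """SampEn(m, r): negative log of the conditional probability that runs
    matching for m points (within tolerance r, Chebyshev distance) also
    match for m + 1 points. Lower values indicate a more regular series."""
    n = len(series)

    def count_matches(length):
        templates = [series[i:i + length] for i in range(n - m)]
        matches = 0
        for i in range(len(templates)):
            for j in range(i + 1, len(templates)):
                if max(abs(a - b) for a, b in zip(templates[i], templates[j])) <= r:
                    matches += 1
        return matches

    b = count_matches(m)        # template matches of length m
    a = count_matches(m + 1)    # ... that still match at length m + 1
    return -math.log(a / b) if a > 0 and b > 0 else float("inf")

# A perfectly regular series is maximally predictable, while an irregular
# series of the same length should yield a higher sample entropy:
regular = [(-1) ** k for k in range(60)]
random.seed(3)
irregular = [random.uniform(-1, 1) for k in range(60)]
print(sample_entropy(regular), sample_entropy(irregular))
```

Applied to heart rate time series segments as in the study, the same statistic quantifies the post-exercise reduction and recovery of heart rate complexity.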
Modeling SEPs and Their Variability in the Inner Heliosphere
Mays, M. L.; Luhmann, J. G.; Odstrcil, D.; Schwadron, N.; Gorby, M.; Bain, H. M.; Mewaldt, R. A.; Gold, R. E.
2015-12-01
In preparation for Solar Probe Plus and Solar Orbiter we consider a series of SEP modeling experiments based on the global MHD WSA-ENLIL model. The models include the Solar Energetic Particle Model (SEPMOD) (Luhmann et al., 2007; 2010) and the Earth-Moon-Mars Radiation Environment Module (EMMREM) (Schwadron et al., 2010). WSA-ENLIL provides a time-dependent background heliospheric description, including CME-like clouds which can generate shocks during their propagation. SEPMOD makes use of the ENLIL-provided magnetic topologies of observer-connected magnetic field lines and all plasma and shock properties along those field lines. The model injects protons onto a sequence of observer field lines at intensities dependent on the connected shock source strength, which are then integrated at the observer to approximate the proton flux. EMMREM couples with MHD models such as ENLIL and computes energetic particle distributions based on the focused transport equation along a Lagrangian grid of nodes that propagate out with the solar wind. In this presentation we compare SEP modeling results with data and consider SEP variability in longitude and latitude. Additionally we study the relative importance of observer connectivity to the solar source and shock locations, as derived from ENLIL. We evaluate the shock geometry and compare model-derived shock parameters with those observed. Finally, we test the effect of the seed population on the resulting profiles.
Modeling Complex Chemical Systems: Problems and Solutions
van Dijk, Jan
2016-09-01
Non-equilibrium plasmas in complex gas mixtures are at the heart of numerous contemporary technologies. They typically contain dozens to hundreds of species, involved in hundreds to thousands of reactions. Chemists and physicists have always been interested in what are now called chemical reduction techniques (CRTs). The idea of such CRTs is that they reduce the number of species that need to be considered explicitly without compromising the validity of the model. This is usually achieved on the basis of an analysis of the reaction time scales of the system under study, which identifies species that are in partial equilibrium after a given time span. The first such CRT that was widely used in plasma physics was developed in the 1960s and resulted in the concept of effective ionization and recombination rates. It was later generalized to systems in which multiple levels are affected by transport. In recent years there has been a renewed interest in tools for chemical reduction and reaction pathway analysis. An example of the latter is the PumpKin tool. Another trend is that techniques previously developed in other fields of science are adapted to handle the plasma state of matter. Examples are the Intrinsic Low-Dimensional Manifold (ILDM) method and its derivatives, which originate from combustion engineering, and the general-purpose Principal Component Analysis (PCA) technique. In this contribution we will provide an overview of the most common reduction techniques, then critically assess the pros and cons of the methods that have gained the most popularity in recent years. Examples will be provided for plasmas in argon and carbon dioxide.
Testing biomechanical models of human lumbar lordosis variability.
Castillo, Eric R; Hsu, Connie; Mair, Ross W; Lieberman, Daniel E
2017-05-01
Lumbar lordosis (LL) is a key adaptation for bipedalism, but factors underlying curvature variations remain unclear. This study tests three biomechanical models to explain LL variability. Thirty adults (15 male, 15 female) were scanned using magnetic resonance imaging (MRI), a standing posture analysis was conducted, and lumbar range of motion (ROM) was assessed. Three measures of LL were compared. The trunk's center of mass was estimated from external markers to calculate hip moments (Mhip) and lumbar flexion moments. Cross-sectional areas of lumbar vertebral bodies and trunk muscles were measured from scans. Regression models tested associations between LL and the Mhip moment arm, a beam bending model, and an interaction between relative trunk strength (RTS) and ROM. Hip moments were not associated with LL. Beam bending was moderately predictive of standing but not supine LL (R² = 0.25). Stronger backs and increased ROM were associated with greater LL, especially when standing (R² = 0.65). The strength-flexibility model demonstrates the differential influence of RTS depending on ROM: individuals with high ROM exhibited the most LL variation with RTS, while those with low ROM showed reduced LL regardless of RTS. Hip moments appear constrained, suggesting the possibility of selection, and the beam model explains some LL variability due to variations in trunk geometry. The strength-flexibility interaction best predicted LL, suggesting a tradeoff in which ROM limits the effects of back strength on LL. The strength-flexibility model may have clinical relevance for spinal alignment and pathology. This model may also suggest that straight-backed Neanderthals had reduced lumbar mobility. © 2017 Wiley Periodicals, Inc.
Influence of climate model variability on projected Arctic shipping futures
Stephenson, Scott R.; Smith, Laurence C.
2015-11-01
Though climate models exhibit broadly similar agreement on key long-term trends, they have significant temporal and spatial differences due to intermodel variability. Such variability should be considered when using climate models to project the future marine Arctic. Here we present multiple scenarios of 21st-century Arctic marine access as driven by sea ice output from 10 CMIP5 models known to represent well the historical trend and climatology of Arctic sea ice. Optimal vessel transits from North America and Europe to the Bering Strait are estimated for two periods representing early-century (2011-2035) and mid-century (2036-2060) conditions under two forcing scenarios (RCP 4.5/8.5), assuming Polar Class 6 and open-water vessels with medium and no ice-breaking capability, respectively. Results illustrate that projected shipping viability of the Northern Sea Route (NSR) and Northwest Passage (NWP) depends critically on model choice. The eastern Arctic will remain the most reliably accessible marine space for trans-Arctic shipping by mid-century, while outcomes for the NWP are particularly model-dependent. Omitting three models (GFDL-CM3, MIROC-ESM-CHEM, and MPI-ESM-MR), our results would indicate minimal NWP potential even for routes from North America. Furthermore, the relative importance of the NSR will diminish over time as the number of viable central Arctic routes increases gradually toward mid-century. Compared to vessel class, climate forcing plays a minor role. These findings reveal the importance of model choice in devising projections for strategic planning by governments, environmental agencies, and the global maritime industry.
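At its core, the optimal-transit computation behind such projections is a least-cost path search over a gridded ice field. The sketch below runs plain Dijkstra on a toy grid where each cell carries a traversal cost and impassable ice is marked None; the study's actual cost model (vessel class, ice-concentration thresholds, great-circle geometry) is not reproduced here.

```python
import heapq

def shortest_transit(cost_grid):
    """Least-cost route from the left edge to the right edge of a grid whose
    cells hold traversal costs (None = impassable ice). Returns the total
    cost, or None if no route exists."""
    rows, cols = len(cost_grid), len(cost_grid[0])
    best = {}
    # Any navigable cell on the left edge is a valid departure point
    heap = [(cost_grid[r][0], r, 0) for r in range(rows)
            if cost_grid[r][0] is not None]
    heapq.heapify(heap)
    while heap:
        d, r, c = heapq.heappop(heap)
        if (r, c) in best:
            continue
        best[(r, c)] = d
        if c == cols - 1:           # reached the right edge
            return d
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if 0 <= nr < rows and 0 <= nc < cols and cost_grid[nr][nc] is not None:
                heapq.heappush(heap, (d + cost_grid[nr][nc], nr, nc))
    return None
```

Intermodel variability enters such a calculation only through the cost grid: different sea-ice projections mask different cells, which is why route viability is so model-dependent.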
Niche variability and its consequences for species distribution modeling.
Matt J Michel
When species distribution models (SDMs) are used to predict how a species will respond to environmental change, an important assumption is that the environmental niche of the species is conserved over evolutionary time-scales. Empirical studies conducted at ecological time-scales, however, demonstrate that the niche of some species can vary in response to environmental change. We use habitat and locality data for five species of stream fishes collected across seasons to examine the effects of niche variability on the accuracy of projections from Maxent, a popular SDM. We then compare these predictions to those from an alternate method of creating SDM projections in which a transformation of the environmental data to similar scales is applied. The niche of each species varied to some degree in response to seasonal variation in environmental variables, with most species shifting habitat use in response to changes in canopy cover or flow rate. SDMs constructed from the original environmental data accurately predicted the occurrences of one species across all seasons and a subset of seasons for two other species. A similar result was found for SDMs constructed from the transformed environmental data. However, the transformed SDMs produced better models in ten of the 14 total SDMs, as judged by ratios of mean probability values at known presences to mean probability values at all other locations. Niche variability should be an important consideration when using SDMs to predict future distributions of species because of its prevalence among natural populations. The framework we present here may potentially improve these predictions by accounting for such variability.
A simple model for 1/f spectra in heart rate variability
Gleeson, James P.; Stefanovska, Aneta
2007-06-01
Heart rate variability (HRV) measures cycle-to-cycle correlations in the instantaneous oscillation period of the heart. In this paper it is shown that a simple model process, consisting of a sum of uncoupled sinusoidal oscillators with slightly different frequencies, has a HRV spectrum with a 1/f scaling over a range of frequencies. This implies that the appearance of 1/f HRV spectra in experiments should not be considered evidence of oscillator coupling or other more complex dynamics. The origin of the 1/f scaling in the model is examined analytically, and its dependence upon the sampling of low-amplitude fluctuations of the process is highlighted.
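The spectral claim is easy to probe numerically: superpose many sinusoids and check the log-log slope of the binned power spectrum. The 1/sqrt(f) amplitude weighting and the frequency band below are illustrative assumptions of this sketch, not the paper's construction (which derives the scaling from sampling effects); with that weighting the binned spectrum shows a slope near -1.

```python
import numpy as np

rng = np.random.default_rng(1)
n_osc, n_samp, dt = 500, 4096, 0.5
freqs = rng.uniform(0.01, 1.0, n_osc)           # oscillator frequencies (Hz)
phases = rng.uniform(0.0, 2.0 * np.pi, n_osc)
amps = 1.0 / np.sqrt(freqs)                     # assumed amplitude weighting

t = np.arange(n_samp) * dt
signal = (amps[:, None]
          * np.sin(2 * np.pi * freqs[:, None] * t + phases[:, None])).sum(axis=0)

# Periodogram, then average power within log-spaced frequency bins
psd = np.abs(np.fft.rfft(signal)) ** 2
f = np.fft.rfftfreq(n_samp, dt)
edges = np.logspace(np.log10(0.02), np.log10(0.5), 13)
centers, means = [], []
for lo, hi in zip(edges[:-1], edges[1:]):
    sel = (f >= lo) & (f < hi)
    if sel.any():
        centers.append(np.sqrt(lo * hi))
        means.append(psd[sel].mean())

# Log-log slope near -1 indicates apparent 1/f scaling
slope = np.polyfit(np.log(centers), np.log(means), 1)[0]
```

The point carried over from the abstract stands either way: a 1/f-looking spectrum can arise from entirely uncoupled oscillators, so it is weak evidence for coupling.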
Clinical complexity in medicine: A measurement model of task and patient complexity
Islam, R.; Weir, C.; Fiol, G. Del
2016-01-01
Background Complexity in medicine needs to be reduced to simple components in a way that is comprehensible to researchers and clinicians. Few studies in the current literature propose a measurement model that addresses both task and patient complexity in medicine. Objective The objective of this paper is to develop an integrated approach to understanding and measuring clinical complexity by incorporating both task and patient complexity components, focusing on the infectious disease domain. The measurement model was adapted and modified for the healthcare domain. Methods Three clinical Infectious Disease teams were observed, audio-recorded, and transcribed. Each team included an Infectious Diseases expert, one Infectious Diseases fellow, one physician assistant, and one pharmacy resident fellow. The transcripts were parsed and the authors independently coded complexity attributes. This baseline measurement model of clinical complexity was modified in an initial round of coding and further validated in a consensus-based iterative process that included several meetings and email discussions by three clinical experts from diverse backgrounds from the Department of Biomedical Informatics at the University of Utah. Inter-rater reliability was calculated using Cohen’s kappa. Results The proposed clinical complexity model consists of two separate components. The first is a clinical task complexity model with 13 clinical complexity-contributing factors and 7 dimensions. The second is the patient complexity model with 11 complexity-contributing factors and 5 dimensions. Conclusion The measurement model for complexity, encompassing both task and patient complexity, will be a valuable resource for future researchers and industry to measure and understand complexity in healthcare. PMID:26404626
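The inter-rater reliability check mentioned above reduces to observed versus chance-expected agreement. A minimal generic implementation of Cohen's kappa, not tied to the study's coding scheme:

```python
def cohens_kappa(rater_a, rater_b):
    """Chance-corrected agreement between two raters over the same items."""
    assert len(rater_a) == len(rater_b) and rater_a
    n = len(rater_a)
    labels = set(rater_a) | set(rater_b)
    # Observed proportion of agreement
    p_obs = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    # Agreement expected by chance from each rater's marginal label frequencies
    p_exp = sum((rater_a.count(lab) / n) * (rater_b.count(lab) / n)
                for lab in labels)
    return (p_obs - p_exp) / (1.0 - p_exp)
```

Kappa is 1 for perfect agreement and 0 when agreement is no better than chance, which is why it is preferred over raw percent agreement for validating coding schemes.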
How uncertainty in socio-economic variables affects large-scale transport model forecasts
Manzo, Stefano; Nielsen, Otto Anker; Prato, Carlo Giacomo
2015-01-01
A strategic task assigned to large-scale transport models is to forecast the demand for transport over long periods of time to assess transport projects. However, by modelling complex systems transport models have an inherent uncertainty which increases over time. As a consequence, the longer the period forecasted the less reliable is the forecasted model output. Describing uncertainty propagation patterns over time is therefore important in order to provide complete information to the decision makers. Among the existing literature only few studies analyze uncertainty propagation patterns over time, especially with respect to large-scale transport models. The study described in this paper contributes to fill the gap by investigating the effects of uncertainty in socio-economic variables growth rate projections on large-scale transport model forecasts, using the Danish National Transport...
Multi-Variable Model-Based Parameter Estimation Model for Antenna Radiation Pattern Prediction
Deshpande, Manohar D.; Cravey, Robin L.
2002-01-01
A new procedure is presented to develop a multi-variable model-based parameter estimation (MBPE) model to predict the far-field intensity of an antenna. By performing the MBPE model development procedure on a single variable at a time, the present method requires solution of smaller matrices. The utility of the present method is demonstrated by determining the far-field intensity due to a dipole antenna over a frequency range of 100-1000 MHz and an elevation angle range of 0-90 degrees.
Barrera-Ramirez, Juliana; Bravi, Andrea; Green, Geoffrey; Seely, Andrew J; Kenny, Glen P
2013-11-01
To better understand the alterations in cardiorespiratory variability during exercise, the present study characterized the patterns of change in heart rate variability (HRV), respiratory rate variability (RRV), and combined cardiorespiratory variability (HRV-RRV) during an intermittent incremental submaximal exercise model. Six males and six females completed a submaximal exercise protocol consisting of an initial baseline resting period followed by three 10-min bouts of exercise at 20%, 40%, and 60% of maximal aerobic capacity (V̇O2max). The R-R interval and interbreath interval variability were measured at baseline rest and throughout the submaximal exercise. A group of 93 HRV, 83 RRV, and 28 HRV-RRV measures of variability were tracked over time through a windowed analysis using a 5-min window size and 30-s window step. A total of 91 HRV measures were able to detect the presence of exercise, whereas only 46 RRV and 3 HRV-RRV measures were able to detect the same stimulus. Moreover, there was a loss of overall HRV and RRV, loss of complexity of HRV and RRV, and loss of parasympathetic modulation of HRV (up to 40% V̇O2max) with exercise. Conflicting changes in scale-invariant structure of HRV and RRV with increases in exercise intensity were also observed. In summary, in this simultaneous evaluation of HRV and RRV, we found more consistent changes across HRV metrics compared with RRV and HRV-RRV.
Variability modes in core flows inverted from geomagnetic field models
Pais, Maria A; Schaeffer, Nathanaël
2014-01-01
We use flows that we invert from two geomagnetic field models spanning centennial time periods (gufm1 and COV-OBS), and apply Principal Component Analysis and Singular Value Decomposition of coupled fields to extract the main modes characterizing their spatial and temporal variations. The quasi geostrophic flows inverted from both geomagnetic field models show similar features. However, COV-OBS has a less energetic mean flow and larger time variability. The statistical significance of flow components is tested from analyses performed on subareas of the whole domain. Bootstrapping methods are also used to extract robust flow features required by both gufm1 and COV-OBS. Three main empirical circulation modes emerge, simultaneously constrained by both geomagnetic field models and expected to be robust against the particular a priori used to build them. Mode 1 exhibits three large robust vortices at medium/high latitudes, with opposite circulation under the Atlantic and the Pacific hemispheres. Mode 2 interesting...
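The mode-extraction step described above (PCA of a time-evolving field) can be sketched as an SVD of the time-mean-removed data matrix, rows being time snapshots and columns spatial points. The synthetic field below is a stand-in for the inverted core flows; its two built-in modes and their amplitudes are assumptions of the sketch.

```python
import numpy as np

rng = np.random.default_rng(2)
t = np.linspace(0.0, 1.0, 200)[:, None]           # time axis
space = np.linspace(0.0, 2.0 * np.pi, 50)[None, :]  # spatial axis
field = (np.sin(2 * np.pi * t) * np.cos(space)          # dominant mode
         + 0.3 * np.cos(4 * np.pi * t) * np.sin(2 * space)  # weaker mode
         + 0.01 * rng.normal(size=(200, 50)))           # observation noise

anomaly = field - field.mean(axis=0)     # remove the time-mean "flow"
U, s, Vt = np.linalg.svd(anomaly, full_matrices=False)
explained = s**2 / np.sum(s**2)
# Vt[0] is the leading spatial mode; U[:, 0] * s[0] is its time amplitude
```

The `explained` vector is what justifies keeping only the first few empirical modes: the leading singular directions capture most of the time variability.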
Estimation in the polynomial errors-in-variables model
ZHANG; Sanguo
2002-01-01
[1] Kendall, M. G., Stuart, A., The Advanced Theory of Statistics, Vol. 2, New York: Charles Griffin, 1979.
[2] Fuller, W. A., Measurement Error Models, New York: Wiley, 1987.
[3] Carroll, R. J., Ruppert, D., Stefanski, L. A., Measurement Error in Nonlinear Models, London: Chapman & Hall, 1995.
[4] Stout, W. F., Almost Sure Convergence, New York: Academic Press, 1974, 154.
[5] Petrov, V. V., Sums of Independent Random Variables, New York: Springer-Verlag, 1975, 272.
[6] Zhang, S. G., Chen, X. R., Consistency of modified MLE in EV model with replicated observation, Science in China, Ser. A, 2001, 44(3): 304-310.
[7] Lai, T. L., Robbins, H., Wei, C. Z., Strong consistency of least squares estimates in multiple regression, J. Multivariate Anal., 1979, 9: 343-362.
Initial CGE Model Results Summary Exogenous and Endogenous Variables Tests
Edwards, Brian Keith [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Boero, Riccardo [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Rivera, Michael Kelly [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)
2017-08-07
The following discussion presents initial results of tests of the most recent version of the National Infrastructure Simulation and Analysis Center Dynamic Computable General Equilibrium (CGE) model developed by Los Alamos National Laboratory (LANL). The intent of this effort is to test and assess the model’s behavioral properties. The tests evaluated whether the predicted impacts are reasonable from a qualitative perspective: whether a predicted change, be it an increase or decrease in other model variables, is consistent with prior economic intuition and expectations about the predicted change. One of the purposes of this effort is to determine whether model changes are needed in order to improve its behavior qualitatively and quantitatively.
Hyeon Woo LEE
2011-01-01
AN APPLICATION OF LATENT VARIABLE STRUCTURAL EQUATION MODELING FOR EXPERIMENTAL RESEARCH IN EDUCATIONAL TECHNOLOGY. As the technology-enriched learning environments...
The Eemian climate simulated by two models of different complexities
Nikolova, Irina; Yin, Qiuzhen; Berger, Andre; Singh, Umesh; Karami, Pasha
2013-04-01
The Eemian period, also known as MIS-5, experienced a warmer-than-today climate, a reduction in ice sheets, and an important sea-level rise. These features make the Eemian appropriate for evaluating climate models forced with astronomical and greenhouse gas forcings different from today's. In this work, we present the Eemian climate simulated by two climate models of different complexities, LOVECLIM (LLN Earth system model of intermediate complexity) and CCSM3 (NCAR atmosphere-ocean general circulation model). Feedbacks from sea ice, vegetation, monsoon, and ENSO phenomena are discussed to explain the regional similarities/dissimilarities in both models with respect to the pre-industrial (PI) climate. Significant warming (cooling) over almost all the continents during boreal summer (winter) leads to a largely increased (reduced) seasonal contrast in the northern (southern) hemisphere, mainly due to the much higher (lower) insolation received by the whole Earth in boreal summer (winter). The Arctic is warmer than at PI throughout the whole year, resulting from its much higher summer insolation and its remnant effect in the following fall-winter through the interactions between atmosphere, ocean, and sea ice. Regional discrepancies exist in the sea-ice formation zones between the two models. Excessive sea-ice formation in CCSM3 results in intense regional cooling. In both models an intensified African monsoon and vegetation feedback are responsible for the cooling during summer in North Africa and on the Arabian Peninsula. Over India the precipitation maximum is found further west, while in Africa the precipitation maximum migrates further north. Trees and grassland expand north in the Sahel/Sahara, trees being more abundant in the results from LOVECLIM than from CCSM3. A mix of forest and grassland occupies the continents and expands deep into the high northern latitudes, in line with proxy records. Desert areas shrink significantly in the Northern Hemisphere, but increase in North
Modeling intraindividual variability with repeated measures data methods and applications
Hershberger, Scott L
2013-01-01
This book examines how individuals behave across time and to what degree that behavior changes, fluctuates, or remains stable. It features the most current methods on modeling repeated measures data as reported by a distinguished group of experts in the field. The goal is to make the latest techniques used to assess intraindividual variability accessible to a wide range of researchers. Each chapter is written in a "user-friendly" style such that even the "novice" data analyst can easily apply the techniques. Each chapter features: a minimum discussion of mathematical detail; an empirical examp
Estimation and variable selection for generalized additive partial linear models
Wang, Li
2011-08-01
We study generalized additive partial linear models, proposing the use of polynomial spline smoothing for estimation of nonparametric functions, and deriving quasi-likelihood based estimators for the linear parameters. We establish asymptotic normality for the estimators of the parametric components. The procedure avoids solving large systems of equations as in kernel-based procedures and thus results in gains in computational simplicity. We further develop a class of variable selection procedures for the linear parameters by employing a nonconcave penalized quasi-likelihood, which is shown to have an asymptotic oracle property. Monte Carlo simulations and an empirical example are presented for illustration. © Institute of Mathematical Statistics, 2011.
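The variable-selection idea, a penalty that zeroes out small linear coefficients, can be illustrated with the simpler convex L1 penalty. The paper's nonconcave (SCAD-type) penalized quasi-likelihood follows the same coordinate-wise logic with a different thresholding rule; the data and penalty level below are made up for illustration.

```python
import numpy as np

def lasso_coordinate_descent(X, y, lam, n_iter=100):
    """L1-penalized least squares, (1/2n)||y - Xb||^2 + lam*||b||_1,
    solved by cyclic coordinate descent with soft-thresholding."""
    n, p = X.shape
    beta = np.zeros(p)
    col_scale = (X * X).sum(axis=0) / n
    for _ in range(n_iter):
        for j in range(p):
            # Partial residual with feature j's contribution added back
            r = y - X @ beta + X[:, j] * beta[j]
            z = X[:, j] @ r / n
            # Soft-thresholding: small coefficients are set exactly to zero
            beta[j] = np.sign(z) * max(abs(z) - lam, 0.0) / col_scale[j]
    return beta

# Illustrative sparse recovery: 2 active predictors out of 5
rng = np.random.default_rng(4)
X = rng.normal(size=(200, 5))
true_beta = np.array([2.0, 0.0, 0.0, -1.5, 0.0])
y = X @ true_beta + 0.1 * rng.normal(size=200)
beta_hat = lasso_coordinate_descent(X, y, lam=0.1)
```

The oracle property claimed for the nonconcave penalty is precisely what the L1 version lacks: lasso shrinks the surviving coefficients (note beta_hat[0] lands below 2), while SCAD-type penalties leave large coefficients nearly unbiased.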
Grassmann Variables and the Jaynes-Cummings Model
Dalton, Bryan J; Jeffers, John; Barnett, Stephen M
2012-01-01
This paper shows that phase space methods using a positive P type distribution function involving both c-number variables (for the cavity mode) and Grassmann variables (for the two level atom) can be used to treat the Jaynes-Cummings model. Although it is a Grassmann function, the distribution function is equivalent to six c-number functions of the two bosonic variables. Experimental quantities are given as bosonic phase space integrals involving the six functions. A Fokker-Planck equation involving both left and right Grassmann differentiation can be obtained for the distribution function, and is equivalent to six coupled equations for the six c-number functions. The approach used involves choosing the canonical form of the (non-unique) positive P distribution function, where the correspondence rules for bosonic operators are non-standard and hence the Fokker-Planck equation is also unusual. Initial conditions, such as for initially uncorrelated states, are used to determine the initial distribution function...
Rheological modelling of physiological variables during temperature variations at rest
Vogelaere, P.; de Meyer, F.
1990-06-01
The evolution with time of cardio-respiratory variables, blood pressure and body temperature has been studied in six males, resting in semi-nude conditions during short (30 min) cold stress exposure (0°C) and during passive recovery (60 min) at 20°C. Passive cold exposure does not induce a change in HR but increases VO2, VCO2, VE and core temperature Tre, whereas peripheral temperature is significantly lowered. The kinetic evolution of the studied variables was investigated using a Kelvin-Voigt rheological model. The results suggest that the human body, and by extension the measured physiological variables of its functioning, does not react as a perfect viscoelastic system. Cold exposure induces a more rapid adaptation of heart rate, blood pressure and skin temperatures than that observed during the rewarming period (20°C), whereas respiratory adjustments show an opposite evolution. During the cooling period of the experiment the adaptive mechanisms, taking effect to preserve core homeothermy and to obtain a higher oxygen supply, increase the energy loss of the body.
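The Kelvin-Voigt element used for such fits responds to a step load with a saturating exponential, the retardation time tau = eta/E setting how fast the variable approaches its new plateau. The parameter values below are illustrative only, not taken from the study.

```python
import numpy as np

def kelvin_voigt_step(t, sigma0=1.0, modulus=2.0, viscosity=6.0):
    """Strain of a Kelvin-Voigt element under a step stress sigma0:
    x(t) = (sigma0/E) * (1 - exp(-t/tau)), with tau = eta/E."""
    tau = viscosity / modulus
    return (sigma0 / modulus) * (1.0 - np.exp(-t / tau))

t = np.linspace(0.0, 20.0, 201)
strain = kelvin_voigt_step(t)   # rises monotonically toward sigma0/E = 0.5
```

Fitting tau separately for cooling and rewarming is one way to quantify the asymmetry the abstract reports (faster cardiovascular adaptation to cold than to rewarming).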
Analytic Thermoelectric Couple Modeling: Variable Material Properties and Transient Operation
Mackey, Jonathan A.; Sehirlioglu, Alp; Dynys, Fred
2015-01-01
To gain a deeper understanding of the operation of a thermoelectric couple, a set of analytic solutions has been derived for a variable-material-property couple and a transient couple. Using an analytic approach, as opposed to commonly used numerical techniques, results in a set of useful design guidelines. These guidelines can serve as starting conditions for further numerical studies, or as design rules for lab-built couples. The analytic modeling considers two cases, accounting for 1) material properties which vary with temperature and 2) transient operation of a couple. The variable-material-property case was handled by means of an asymptotic expansion, which allows insight into the influence of temperature dependence on different material properties. The variable-property work demonstrated the important fact that materials with identical average figures of merit can lead to different conversion efficiencies due to temperature dependence of the properties. The transient couple was investigated through a Green's function approach; several transient boundary conditions were investigated. The transient work introduces several new design considerations which are not captured by the classic steady-state analysis. The work assists in designing couples for optimal performance and in material selection.
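For context, the classic constant-property result that the asymptotic analysis refines: with temperature-independent properties, a couple's maximum conversion efficiency depends only on the Carnot factor and the figure of merit evaluated at the mean temperature. This is the standard textbook formula, not the paper's variable-property extension.

```python
import math

def max_efficiency(z, t_hot, t_cold):
    """Maximum conversion efficiency of a thermoelectric couple with
    constant material properties and figure of merit z (units 1/K)."""
    zt_mean = z * (t_hot + t_cold) / 2.0
    carnot = (t_hot - t_cold) / t_hot
    m = math.sqrt(1.0 + zt_mean)          # optimal load-resistance ratio
    return carnot * (m - 1.0) / (m + t_cold / t_hot)

eta = max_efficiency(z=0.002, t_hot=500.0, t_cold=300.0)
```

The paper's point is that this formula is blind to *how* z varies between t_cold and t_hot: two materials with the same zt_mean can differ in efficiency once temperature dependence is included.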
Variable Star Signature Classification using Slotted Symbolic Markov Modeling
Johnston, K. B.; Peter, A. M.
2017-01-01
With the advent of digital astronomy, new benefits and new challenges have been presented to the modern-day astronomer. No longer can the astronomer rely on manual processing; instead, the profession as a whole has begun to adopt more advanced computational means. This paper focuses on the construction and application of a novel time-domain signature extraction methodology and the development of a supporting supervised pattern classification algorithm for the identification of variable stars. A methodology for the reduction of stellar variable observations (time-domain data) into a novel feature-space representation is introduced. The methodology presented will be referred to as Slotted Symbolic Markov Modeling (SSMM) and has a number of advantages which will be demonstrated to be beneficial, specifically for the supervised classification of stellar variables. It will be shown that the methodology outperformed a baseline standard methodology on a standardized set of stellar light curve data. The performance on a set of data derived from the LINEAR dataset will also be shown.
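A simplified reading of the SSMM feature extraction can be sketched in three steps: slot the unevenly sampled light curve onto a uniform time grid, quantize the slotted magnitudes into a small symbol alphabet, and summarize the sequence with a first-order transition matrix. The slot width and alphabet size below are arbitrary choices, and the paper's classifier stage is omitted.

```python
import numpy as np

def symbolic_markov_signature(times, mags, slot=1.0, n_symbols=4):
    """Slotted symbolic Markov signature of an unevenly sampled series:
    returns the row-normalized symbol transition matrix."""
    # Slotting: average all observations falling into each time slot
    slots = np.floor((times - times.min()) / slot).astype(int)
    sums = np.zeros(slots.max() + 1)
    counts = np.zeros(slots.max() + 1)
    np.add.at(sums, slots, mags)
    np.add.at(counts, slots, 1)
    filled = counts > 0
    series = sums[filled] / counts[filled]
    # Symbolization: quantile-based alphabet of n_symbols letters
    edges = np.quantile(series, np.linspace(0, 1, n_symbols + 1)[1:-1])
    symbols = np.digitize(series, edges)
    # First-order Markov transition counts, row-normalized
    trans = np.zeros((n_symbols, n_symbols))
    for a, b in zip(symbols[:-1], symbols[1:]):
        trans[a, b] += 1
    row = trans.sum(axis=1, keepdims=True)
    return np.divide(trans, row, out=np.zeros_like(trans), where=row > 0)
```

The flattened transition matrix is the fixed-length feature vector a supervised classifier can then consume, regardless of the original light curve's length or cadence.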
Quantifying uncertainty, variability and likelihood for ordinary differential equation models
Weisse, Andrea Y
2010-10-28
Background In many applications, ordinary differential equation (ODE) models are subject to uncertainty or variability in initial conditions and parameters. Both uncertainty and variability can be quantified in terms of a probability density function on the state and parameter space. Results The partial differential equation that describes the evolution of this probability density function has a form that is particularly amenable to application of the well-known method of characteristics. The value of the density at some point in time is directly accessible by the solution of the original ODE extended by a single extra dimension (for the value of the density). This leads to simple methods for studying uncertainty, variability and likelihood, with significant advantages over more traditional Monte Carlo and related approaches, especially when studying regions with low probability. Conclusions While such approaches based on the method of characteristics are common practice in other disciplines, their advantages for the study of biological systems have so far remained unrecognized. Several examples illustrate the performance and accuracy of the approach and its limitations.
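The "single extra dimension" can be made concrete: for dx/dt = f(x), the density along a trajectory obeys d(rho)/dt = -rho * (df/dx), so one integrates the pair (x, rho) together. For the linear ODE f(x) = -a*x both equations solve in closed form, which lets this sketch be checked exactly; the parameters are illustrative.

```python
import numpy as np

a = 0.7

def f(x):
    return -a * x          # the ODE right-hand side

def dfdx(x):
    return -a              # its state derivative (constant here)

def propagate(x0, rho0, t_end, dt=1e-3):
    """Explicit-Euler integration of the ODE extended by the density value:
    dx/dt = f(x),  drho/dt = -dfdx(x) * rho  (method of characteristics)."""
    x, rho = x0, rho0
    for _ in range(int(round(t_end / dt))):
        x, rho = x + dt * f(x), rho + dt * (-dfdx(x) * rho)
    return x, rho

x1, rho1 = propagate(x0=1.0, rho0=1.0, t_end=1.0)
# Exact solution: x = exp(-a), rho = exp(+a); the density grows as the
# trajectories contract toward the origin.
```

Unlike Monte Carlo, this gives the density value at a chosen point directly, which is what makes low-probability regions cheap to probe.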
Mixed Effects Models for Complex Data
Wu, Lang
2009-01-01
Presenting effective approaches to address missing data, measurement errors, censoring, and outliers in longitudinal data, this book covers linear, nonlinear, generalized linear, nonparametric, and semiparametric mixed effects models. It links each mixed effects model with the corresponding class of regression model for cross-sectional data and discusses computational strategies for likelihood estimations of mixed effects models. The author briefly describes generalized estimating equations methods and Bayesian mixed effects models and explains how to implement standard models using R and S-Pl
Methodology Aspects of Quantifying Stochastic Climate Variability with Dynamic Models
Nuterman, Roman; Jochum, Markus; Solgaard, Anna
2015-04-01
The paleoclimatic records show that climate has changed dramatically through time. For the past few million years it has been oscillating between ice ages, with large parts of the continents covered with ice, and warm interglacial periods like the present one. It is commonly assumed that these glacial cycles are related to changes in insolation due to periodic changes in Earth's orbit around the Sun (Milankovitch theory). However, this relationship is far from understood. The insolation changes are so small that enhancing feedbacks must be at play. It might even be that the external perturbation only plays a minor role in comparison to internal stochastic variations or internal oscillations. This claim is based on several shortcomings of the Milankovitch theory: prior to one million years ago, the duration of the glacial cycles was indeed 41,000 years, in line with the obliquity cycle of Earth's orbit. This duration changed at the so-called Mid-Pleistocene transition to approximately 100,000 years. Moreover, according to Milankovitch's theory the interglacial of 400,000 years ago should not have happened. Thus, while prior to one million years ago the pacing of these glacial cycles may be tied to changes in Earth's orbit, we do not understand the current magnitude and phasing of the glacial cycles. In principle it is possible that the glacial/interglacial cycles are not due to variations in Earth's orbit, but due to stochastic forcing or internal modes of variability. We present a new method and preliminary results for a unified framework using a fully coupled Earth System Model (ESM), in which the leading three ice-age hypotheses will be investigated together. Was the waxing and waning of ice sheets due to an internal mode of variability, due to variations in Earth's orbit, or simply due to a low-order auto-regressive process (i.e., noise integrated by a system with memory)? The central idea is to use Generalized Linear Models (GLM), which can handle both
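The "noise integrated by a system with memory" hypothesis is the easiest of the three to write down: an AR(1) process with memory parameter close to one turns white forcing into slow, large-amplitude excursions with no external pacing at all. The parameters below are illustrative, not tuned to ice-age records.

```python
import numpy as np

rng = np.random.default_rng(3)
phi, n = 0.99, 50_000            # memory parameter near 1 -> long timescales
x = np.zeros(n)
for i in range(1, n):
    x[i] = phi * x[i - 1] + rng.normal()   # AR(1): x_t = phi*x_{t-1} + noise

# Stationary variance of AR(1) is 1/(1 - phi^2), about 50x the forcing
# variance here; lag-1 autocorrelation is close to phi.
amplification = x.var()
lag1 = np.corrcoef(x[:-1], x[1:])[0, 1]
```

Distinguishing such an internally generated red-noise signal from orbitally paced variability is exactly the statistical question the proposed framework sets out to answer.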
Kumar, Jitendra; Collier, Nathan; Bisht, Gautam; Mills, Richard T.; Thornton, Peter E.; Iversen, Colleen M.; Romanovsky, Vladimir
2016-09-01
Vast carbon stocks stored in permafrost soils of Arctic tundra are at risk of release to the atmosphere under warming climate scenarios. Ice-wedge polygons in the low-gradient polygonal tundra create a complex mosaic of microtopographic features. This microtopography plays a critical role in regulating the fine-scale variability in thermal and hydrological regimes in the polygonal tundra landscape underlain by continuous permafrost. Modeling the thermal regimes of this sensitive ecosystem is essential for understanding the landscape behavior under the current as well as a changing climate. We present here an end-to-end effort for high-resolution numerical modeling of thermal hydrology at real-world field sites, utilizing the best available data to characterize and parameterize the models. We develop approaches to model the thermal hydrology of polygonal tundra and apply them at four study sites near Barrow, Alaska, spanning low-centered, transitional, and high-centered polygons, representing a broad polygonal tundra landscape. A multiphase subsurface thermal hydrology model (PFLOTRAN) was developed and applied to study the thermal regimes at the four sites. Using a high-resolution lidar digital elevation model (DEM), microtopographic features of the landscape were characterized and represented in the high-resolution model mesh. The best available soil data from field observations and the literature were utilized to represent the complex heterogeneous subsurface in the numerical model. Simulation results demonstrate the ability of the developed modeling approach to capture - without recourse to model calibration - several aspects of the complex thermal regimes across the sites, and provide insights into the critical role of polygonal tundra microtopography in regulating the thermal dynamics of the carbon-rich permafrost soils. Areas of significant disagreement between model results and observations highlight the importance of field-based observations of soil thermal and
A Measure of Learning Model Complexity by VC Dimension
WANG Wen-jian; ZHANG Li-xia; XU Zong-ben
2002-01-01
When developing models there is always a trade-off between model complexity and model fit. In this paper, a measure of learning model complexity based on VC dimension is presented, and some relevant mathematical theory surrounding the derivation and use of this metric is summarized. The measure allows modelers to control the amount of error that is returned from a modeling system and to state upper bounds on the amount of error that the modeling system will return on all future, as yet unseen and uncollected data sets. It is possible for modelers to use the VC theory to determine which type of model more accurately represents a system.
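The idea of stating an upper bound on future error via VC dimension can be sketched with the standard Vapnik-style confidence-term inequality; the specific measure proposed in the paper may differ, and the numbers below are purely illustrative:

```python
import math

def vc_bound(emp_risk, n, h, eta=0.05):
    """Vapnik-style upper bound on expected risk: with probability at
    least 1 - eta, true risk <= empirical risk + confidence term,
    where h is the VC dimension and n the sample size."""
    conf = math.sqrt((h * (math.log(2 * n / h) + 1) - math.log(eta / 4)) / n)
    return emp_risk + conf

# A richer model class (larger h) pays a larger confidence penalty,
# which is the complexity/fit trade-off the abstract refers to:
loose = vc_bound(emp_risk=0.10, n=1000, h=50)
tight = vc_bound(emp_risk=0.10, n=1000, h=5)
```

Comparing such bounds across model classes is one way a modeler can decide which class more safely represents a system on as-yet-unseen data.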
Ozone Concentration Prediction via Spatiotemporal Autoregressive Model With Exogenous Variables
Kamoun, W.; Senoussi, R.
2009-04-01
Forecasts of environmental variables are nowadays of main concern for public health and agricultural management. In this context a large literature is devoted to spatio-temporal modelling of these variables using different statistical approaches. However, most studies ignored the potential contribution of local (e.g. meteorological and/or geographical) covariables as well as the dynamical characteristics of observations. In this study, we present a spatiotemporal short-term forecasting model for ozone concentration based on regularly observed covariables at predefined geographical sites. Our driving system simply combines a multidimensional second-order autoregressive structured process with a linear regression model over influential exogenous factors and reads as follows: Z(t) = A(θ, D) · [Σ_{i=1}^{2} α_i Z(t−i)] + B(θ, D) · [Σ_{j=1}^{q} β_j X^j(t)] + ε(t). Z(t) = (Z_1(t), …, Z_n(t)) represents the vector of ozone concentrations at time t at the n geographical sites, whereas X^j(t) = (X_1^j(t), …, X_n^j(t)) denotes the jth exogenous variable observed over these sites. The n×n matrix functions A and B account for the spatial relationships between sites through the inter-site distance matrix D and a vector parameter θ. The multidimensional white noise ε is assumed to be Gaussian and spatially correlated but temporally independent. A covariance structure of Z that takes account of the noise's spatial dependences is deduced under a stationarity hypothesis and then included in the likelihood function. Statistical model and estimation procedure: Contrary to the widely used choice of a {0,1}-valued neighbour matrix A, we put forward two more natural choices of exponential or power decay. Moreover, the model proved stable enough to readily accommodate the crude observations without the usual tedious and somewhat arbitrary variable transformations. Data set and preliminary analysis: In our case, the ozone variable represents the daily maximum ozone
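A minimal forward simulation of this kind of spatiotemporal AR(2) model with exogenous covariables can be sketched as follows; the exponential-decay spatial weights stand in for A(θ, D) and B(θ, D), and all parameter values, site counts, and coefficients are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)
n, q, T = 5, 2, 200          # sites, exogenous covariables, time steps

# Inter-site distance matrix and exponential-decay spatial weights,
# one plausible choice for the matrix functions A and B in the model.
sites = rng.uniform(0, 10, size=(n, 2))
D = np.linalg.norm(sites[:, None] - sites[None, :], axis=-1)
A = np.exp(-0.5 * D); A /= A.sum(axis=1, keepdims=True)
B = np.exp(-0.8 * D); B /= B.sum(axis=1, keepdims=True)

alpha = np.array([0.5, 0.3])     # AR(2) coefficients alpha_1, alpha_2
beta = np.array([0.4, 0.2])      # exogenous coefficients beta_1..beta_q
X = rng.normal(size=(T, q, n))   # covariables observed at each site

# Z(t) = A [a1 Z(t-1) + a2 Z(t-2)] + B [sum_j b_j X^j(t)] + noise
Z = np.zeros((T, n))
for t in range(2, T):
    ar = A @ (alpha[0] * Z[t - 1] + alpha[1] * Z[t - 2])
    exo = B @ (X[t].T @ beta)
    Z[t] = ar + exo + 0.1 * rng.normal(size=n)
```

In practice the abstract's point is the inverse problem (estimating θ, α, β by likelihood under the spatially correlated noise), but the forward recursion above shows the structure being fitted.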
Multi-perspective modelling of complex phenomena
Seck, M.D.; Honig, H.J.
2012-01-01
This conceptual paper discusses the limitations of a single-perspective hierarchical approach to modelling and proposes multi-perspective modelling as a way to overcome them. As it turns out, multi-perspective modelling is primarily a new methodology, using existing modelling techniques but
Variable thickness transient ground-water flow model. Volume 3. Program listings
Reisenauer, A.E.
1979-12-01
The Assessment of Effectiveness of Geologic Isolation Systems (AEGIS) Program is developing and applying the methodology for assessing the far-field, long-term post-closure safety of deep geologic nuclear waste repositories. AEGIS is being performed by Pacific Northwest Laboratory (PNL) under contract with the Office of Nuclear Waste Isolation (ONWI) for the Department of Energy (DOE). One task within AEGIS is the development of methodology for analysis of the consequences (water pathway) from loss of repository containment as defined by various release scenarios. Analysis of the long-term, far-field consequences of release scenarios requires the application of numerical codes which simulate the hydrologic systems, model the transport of released radionuclides through the hydrologic systems to the biosphere, and, where applicable, assess the radiological dose to humans. Hydrologic and transport models are available at several levels of complexity or sophistication. Model selection and use are determined by the quantity and quality of input data. Model development under AEGIS and related programs provides three levels of hydrologic models, two levels of transport models, and one level of dose models (with several separate models). This is the third of 3 volumes of the description of the VTT (Variable Thickness Transient) Groundwater Hydrologic Model - second level (intermediate complexity) two-dimensional saturated groundwater flow.
Separation of variables for integrable spin-boson models
Amico, Luigi; Osterloh, Andreas; Wirth, Tobias
2010-01-01
We formulate the functional Bethe ansatz for bosonic (infinite dimensional) representations of the Yang-Baxter algebra. The main deviation from the standard approach consists in a half infinite 'Sklyanin lattice' made of the eigenvalues of the operator zeros of the Bethe annihilation operator. By a separation of variables, functional TQ equations are obtained for this half infinite lattice. They provide valuable information about the spectrum of a given Hamiltonian model. We apply this procedure to integrable spin-boson models subject to both twisted and open boundary conditions. In the case of general twisted and certain open boundary conditions polynomial solutions to these TQ equations are found and we compute the spectrum of both the full transfer matrix and its quasi-classical limit. For generic open boundaries we present a two-parameter family of Bethe equations, derived from TQ equations that are compatible with polynomial solutions for Q. A connection of these parameters to the boundary fields is stil...
Viscous Dark Energy Models with Variable G and Λ
Arbab, Arbab I.
2008-10-01
We consider a cosmological model with bulk viscosity η and variable cosmological constant Λ ∝ ρ^(−α), α = const, and variable gravitational constant G. The model exhibits many interesting cosmological features. Inflation proceeds due to the presence of bulk viscosity and dark energy without requiring the equation of state p = −ρ. During the inflationary era the energy density ρ does not remain constant, as in the de Sitter type. Moreover, the cosmological and gravitational constants increase exponentially with time, whereas the energy density and viscosity decrease exponentially with time. The rate of mass creation during inflation is found to be enormous, suggesting that all matter in the universe is created during inflation.
Variable variance Preisach model for multilayers with perpendicular magnetic anisotropy
Franco, A. F.; Gonzalez-Fuentes, C.; Morales, R.; Ross, C. A.; Dumas, R.; Åkerman, J.; Garcia, C.
2016-08-01
We present a variable variance Preisach model that fully accounts for the different magnetization processes of a multilayer structure with perpendicular magnetic anisotropy by adjusting the evolution of the interaction variance as the magnetization changes. We successfully compare in a quantitative manner the results obtained with this model to experimental hysteresis loops of several [CoFeB/Pd]_n multilayers. The effect of the number of repetitions and the thicknesses of the CoFeB and Pd layers on the magnetization reversal of the multilayer structure is studied, and it is found that many of the observed phenomena can be attributed to an increase of the magnetostatic interactions and a subsequent decrease of the size of the magnetic domains. Increasing the CoFeB thickness leads to the disappearance of the perpendicular anisotropy, and a minimum thickness of the Pd layer is necessary to achieve an out-of-plane magnetization.
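For orientation, a classical fixed-variance Preisach hysteresis loop can be sketched as below; the paper's model additionally lets the interaction-field variance evolve with the magnetization, which this minimal sketch (with invented field distributions) does not reproduce:

```python
import numpy as np

rng = np.random.default_rng(7)
N = 5000

# Each hysteron switches up at field a = hi + hc and down at b = hi - hc,
# with coercive field hc > 0 and interaction field hi. The spread of hi
# is the Preisach variance that the paper makes magnetization-dependent.
hc = np.abs(rng.normal(1.0, 0.3, N))   # coercive fields
hi = rng.normal(0.0, 0.4, N)           # interaction fields (fixed variance)
a, b = hi + hc, hi - hc                # switch-up / switch-down thresholds

state = -np.ones(N)                    # start from negative saturation
loop = []
# One full major loop: sweep the applied field H up, then back down.
for H in np.concatenate([np.linspace(-3, 3, 121), np.linspace(3, -3, 121)]):
    state[H >= a] = 1.0
    state[H <= b] = -1.0
    loop.append(state.mean())          # normalized magnetization M(H)
```

The ascending and descending branches disagree at H = 0, which is the hysteresis the hysteron population encodes; a variable-variance version would re-draw or rescale `hi` as `state.mean()` changes.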
Wei, Wen-Hua; Bowes, John; Plant, Darren; Viatte, Sebastien; Yarwood, Annie; Massey, Jonathan; Worthington, Jane; Eyre, Stephen
2016-01-01
Genotypic variability based genome-wide association studies (vGWASs) can identify potentially interacting loci without prior knowledge of the interacting factors. We report a two-stage approach to make vGWAS applicable to diseases: first using a mixed model approach to partition dichotomous phenotypes into additive risk and non-additive environmental residuals on the liability scale, and second using Levene's (Brown-Forsythe) test to assess equality of the residual variances across genotype groups per marker. We found widespread significant (P < 2.5e-05) vGWAS signals within the major histocompatibility complex (MHC) across all three study cohorts of rheumatoid arthritis. We further identified 10 epistatic interactions between the vGWAS signals independent of the MHC additive effects, each with a weak effect but jointly explaining 1.9% of phenotypic variance. PTPN22 was also identified in the discovery cohort but replicated in only one independent cohort. Combining the three cohorts boosted the power of vGWAS and additionally identified TYK2 and ANKRD55. Both PTPN22 and TYK2 had evidence of interactions reported elsewhere. We conclude that vGWAS can help discover interacting loci for complex diseases but requires large samples to find additional signals. PMID:27109064
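The second-stage test can be sketched with a hand-rolled Brown-Forsythe (median-centred Levene) statistic applied to hypothetical liability-scale residuals; the data, group sizes, and variance differences below are invented for illustration only:

```python
import numpy as np

def brown_forsythe(*groups):
    """Brown-Forsythe statistic for equality of variances across groups:
    Levene's test with median centring. Under the null it follows an
    F distribution with (k - 1, N - k) degrees of freedom."""
    z = [np.abs(g - np.median(g)) for g in groups]   # median-centred spreads
    k = len(z)
    n = np.array([len(zi) for zi in z])
    N = n.sum()
    means = np.array([zi.mean() for zi in z])
    grand = np.concatenate(z).mean()
    num = (N - k) * np.sum(n * (means - grand) ** 2)
    den = (k - 1) * sum(((zi - m) ** 2).sum() for zi, m in zip(z, means))
    return num / den

rng = np.random.default_rng(1)
# Residuals for one SNP grouped by genotype (0/1/2 minor-allele copies),
# with deliberately unequal variances mimicking a vGWAS signal.
W = brown_forsythe(rng.normal(0, 1.0, 500),
                   rng.normal(0, 1.3, 400),
                   rng.normal(0, 1.8, 100))
```

A genome-wide scan would compute this per marker and compare the resulting p-value against a threshold such as the paper's 2.5e-05.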
Constrained variability of modeled T:ET ratio across biomes
Fatichi, Simone; Pappas, Christoforos
2017-07-01
A large variability (35-90%) in the ratio of transpiration to total evapotranspiration (referred to here as T:ET) across biomes or even at the global scale has been documented by a number of studies carried out with different methodologies. Previous empirical results also suggest that T:ET does not covary with mean precipitation and has a positive dependence on leaf area index (LAI). Here we use a mechanistic ecohydrological model, with a refined process-based description of evaporation from the soil surface, to investigate the variability of T:ET across biomes. Numerical results reveal a more constrained range and higher mean of T:ET (70 ± 9%, mean ± standard deviation) when compared to observation-based estimates. T:ET is confirmed to be independent of mean precipitation, while it is found to be correlated with LAI seasonally but uncorrelated across multiple sites. Larger LAI increases evaporation from interception but diminishes ground evaporation, with the two effects largely compensating each other. These results offer mechanistic model-based evidence to the ongoing research about the patterns of T:ET and the factors influencing its magnitude across biomes.
Multivariate models of inter-subject anatomical variability.
Ashburner, John; Klöppel, Stefan
2011-05-15
This paper presents a very selective review of some of the approaches for multivariate modelling of inter-subject variability among brain images. It focusses on applying probabilistic kernel-based pattern recognition approaches to pre-processed anatomical MRI, with the aim of most accurately modelling the difference between populations of subjects. Some of the principles underlying the pattern recognition approaches of Gaussian process classification and regression are briefly described, although the reader is advised to look elsewhere for full implementational details. Kernel pattern recognition methods require matrices that encode the degree of similarity between the images of each pair of subjects. This review focusses on similarity measures derived from the relative shapes of the subjects' brains. Pre-processing is viewed as generative modelling of anatomical variability, and there is a special emphasis on the diffeomorphic image registration framework, which provides a very parsimonious representation of relative shapes. Although the review is largely methodological, excessive mathematical notation is avoided as far as possible, as the paper attempts to convey a more intuitive understanding of various concepts. The paper should be of interest to readers wishing to apply pattern recognition methods to MRI data, with the aim of clinical diagnosis or biomarker development. It also tries to explain that the best models are those that most accurately predict, so similar approaches should also be relevant to basic science. Knowledge of some basic linear algebra and probability theory should make the review easier to follow, although it may still have something to offer to those readers whose mathematics may be more limited. Copyright © 2010 Elsevier Inc. All rights reserved.
Probabilistic SDG model description and fault inference for large-scale complex systems
Yang Fan; Xiao Deyun
2006-01-01
Large-scale complex systems are characterized by large numbers of variables with complex relationships, for which the signed directed graph (SDG) model serves as a significant tool by describing the causal relationships among variables. Although a qualitative SDG expresses the causal effects between variables easily and clearly, it has many disadvantages and limitations. The probabilistic SDG proposed in this article describes the transfer relationships among faults and variables by conditional probabilities, which carries more information and offers wider applicability. The article introduces the concepts and construction approaches of probabilistic SDGs, and presents inference approaches aimed at fault diagnosis in this framework, i.e., Bayesian inference with graph-elimination or junction-tree algorithms to compute fault probabilities. Finally, the probabilistic SDG of a typical example, a 65 t/h boiler system, is given.
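A toy instance of fault inference from conditional probabilities, in the spirit of the probabilistic SDG, can be sketched as below; the fault names, deviation pattern, and all probabilities are invented for illustration and are not taken from the paper's boiler example:

```python
# Prior probabilities of candidate faults (and the no-fault case).
priors = {"valve_stuck": 0.02, "sensor_drift": 0.01, "normal": 0.97}

# P(observed deviation pattern | fault): the signed causal arcs of the
# SDG, quantified as conditional probabilities. Here the observed
# pattern (e.g. pressure high, flow low) strongly implicates the valve.
likelihood = {
    "valve_stuck": 0.80,
    "sensor_drift": 0.10,
    "normal": 0.001,
}

# Bayes' rule: posterior fault probabilities given the observed pattern.
evidence = sum(priors[f] * likelihood[f] for f in priors)
posterior = {f: priors[f] * likelihood[f] / evidence for f in priors}
best = max(posterior, key=posterior.get)
```

In a full probabilistic SDG, the joint distribution factorizes over the graph, and graph-elimination or junction-tree algorithms perform this same computation efficiently over many variables rather than by direct enumeration.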