WorldWideScience

Sample records for simple correlation-based model

  1. Long-range correlations in a simple stochastic model of coupled transport

    International Nuclear Information System (INIS)

    Larralde, Hernan; Sanders, David P

    2009-01-01

We study coupled transport in the nonequilibrium stationary state of a model consisting of independent random walkers, moving along a one-dimensional channel, which carry a conserved energy-like quantity, with density and temperature gradients imposed by reservoirs at the ends of the channel. In our model, walkers interact with other walkers at the same site by sharing energy at each time step, but the amount of energy carried does not affect the motion of the walkers. We find that long-range correlations, similar to those observed in more realistic models of coupled transport, already arise in the nonequilibrium stationary state of this simple model. We derive an analytical expression for the source of these correlations, which we use to obtain semi-analytical results for the correlations themselves under a local-equilibrium hypothesis. These are in very good agreement with results from direct numerical simulations.

  2. Correlations in simple multi-string models of pp collisions at ISR energies

    International Nuclear Information System (INIS)

    Lugovoj, V.V.; Chudakov, V.M.

    1989-01-01

Simple statistical simulation algorithms are suggested for the formation and breaking of a few quark-gluon strings in inelastic pp collisions. Theoretical multiplicity distributions, semi-inclusive quasirapidity spectra, forward-backward correlations of charged secondaries, and the seagull effect agree well with the experimental data at ISR energies. In the framework of the model, the semi-inclusive two-particle correlations of quasirapidities depend upon the fraction of spherical chains. The seagull effect is qualitatively interpreted.

  3. Neutrosophic Correlation and Simple Linear Regression

    Directory of Open Access Journals (Sweden)

    A. A. Salama

    2014-09-01

Full Text Available Since the world is full of indeterminacy, the neutrosophics found their place in contemporary research. The fundamental concept of the neutrosophic set was introduced by Smarandache. Recently, Salama et al. introduced the concept of the correlation coefficient of neutrosophic data. In this paper, we introduce and study the concepts of correlation and the correlation coefficient of neutrosophic data in probability spaces and study some of their properties. We also introduce and study the neutrosophic simple linear regression model. Possible applications to data processing are touched upon.

  4. Estimation of the simple correlation coefficient.

    Science.gov (United States)

    Shieh, Gwowen

    2010-11-01

This article investigates some unfamiliar properties of the Pearson product-moment correlation coefficient for the estimation of the simple correlation coefficient. Although Pearson's r is biased, except in limited situations, and a minimum variance unbiased estimator has been proposed in the literature, researchers routinely employ the sample correlation coefficient in practical applications because of its simplicity and popularity. To support this practice, this study examines the mean squared errors of r and several prominent formulas. The results reveal specific situations in which the sample correlation coefficient performs better than the unbiased and nearly unbiased estimators, supporting the recommendation of r as an effect size index for the strength of linear association between two variables. In addition, related issues of estimating the squared simple correlation coefficient are considered.
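
As a concrete illustration of the bias issue discussed in this record, the sketch below (plain Python, hypothetical data) computes the sample Pearson r together with the familiar first-order Olkin-Pratt correction r·[1 + (1 − r²)/(2(n − 3))], an approximately unbiased estimator of the population correlation:

```python
import math

def pearson_r(x, y):
    # Sample Pearson product-moment correlation coefficient.
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sxx = sum((a - mx) ** 2 for a in x)
    syy = sum((b - my) ** 2 for b in y)
    return sxy / math.sqrt(sxx * syy)

def approx_unbiased_r(r, n):
    # First-order Olkin-Pratt correction: reduces the small-sample bias of r.
    return r * (1 + (1 - r ** 2) / (2 * (n - 3)))

# Hypothetical paired observations.
x = [1.0, 2.0, 3.0, 4.0, 5.0]
y = [1.2, 1.9, 3.2, 3.8, 5.1]
r = pearson_r(x, y)
print(r, approx_unbiased_r(r, len(x)))
```

For positive r the corrected estimate is slightly larger than r, reflecting the downward bias of the raw sample coefficient; the two converge as n grows.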

  5. Self-Organized Criticality in a Simple Neuron Model Based on Scale-Free Networks

    International Nuclear Information System (INIS)

    Lin Min; Wang Gang; Chen Tianlun

    2006-01-01

    A simple model for a set of interacting idealized neurons in scale-free networks is introduced. The basic elements of the model are endowed with the main features of a neuron function. We find that our model displays power-law behavior of avalanche sizes and generates long-range temporal correlation. More importantly, we find different dynamical behavior for nodes with different connectivity in the scale-free networks.

  6. The roles of shear and cross-correlations on the fluctuation levels in simple stochastic models. Revision

    International Nuclear Information System (INIS)

    Krommes, J.A.

    1999-01-01

Highly simplified models of random flows interacting with background microturbulence are analyzed. In the limit of very rapid velocity fluctuations, it is shown rigorously that the fluctuation level of a passively advected scalar is not controlled by the rms shear. In a model with random velocities dependent only on time, the level of cross-correlations between the flows and the background turbulence regulates the saturation level. This effect is illustrated by considering a simple stochastic-oscillator model, both exactly and with analysis and numerical solutions of the direct-interaction approximation. Implications for the understanding of self-consistent turbulence are discussed briefly.

  7. Determinantal Representation of the Time-Dependent Stationary Correlation Function for the Totally Asymmetric Simple Exclusion Model

    Directory of Open Access Journals (Sweden)

    Nikolay M. Bogoliubov

    2009-04-01

Full Text Available The basic model of non-equilibrium low-dimensional physics, the so-called totally asymmetric exclusion process, is related to the 'crystalline limit' (q → ∞) of the SU_q(2) quantum algebra. Using the quantum inverse scattering method, we obtain the exact expression for the time-dependent stationary correlation function of the totally asymmetric simple exclusion process on a one-dimensional lattice with periodic boundary conditions.
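
For intuition about the stationary state on a ring, TASEP is easy to simulate. Because the stationary measure on a periodic lattice is uniform over particle configurations, the bond current should approach N(L − N)/(L(L − 1)). A small random-sequential sketch (illustrative parameters, not the exact Bethe-ansatz machinery of the paper):

```python
import random

def tasep_ring_current(L=20, n_particles=10, sweeps=5000, burn=500, seed=11):
    # Random-sequential TASEP on a ring: at each micro-step pick a site
    # uniformly; a particle there hops right if the neighbor is empty
    # (periodic boundary conditions). Returns the empirical hop rate per
    # attempt, which estimates the stationary bond current.
    rng = random.Random(seed)
    occ = [1] * n_particles + [0] * (L - n_particles)
    rng.shuffle(occ)
    hops = attempts = 0
    for sweep in range(sweeps):
        for _ in range(L):
            i = rng.randrange(L)
            j = (i + 1) % L
            if sweep >= burn:
                attempts += 1
            if occ[i] and not occ[j]:
                occ[i], occ[j] = 0, 1
                if sweep >= burn:
                    hops += 1
    return hops / attempts   # ~ N(L - N) / (L(L - 1)) in the stationary state

print(tasep_ring_current())
```

With L = 20 and N = 10 the exact stationary value is 100/380 ≈ 0.263, which the simulation reproduces to within sampling noise.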

  8. Correlation and simple linear regression.

    Science.gov (United States)

    Zou, Kelly H; Tuncali, Kemal; Silverman, Stuart G

    2003-06-01

    In this tutorial article, the concepts of correlation and regression are reviewed and demonstrated. The authors review and compare two correlation coefficients, the Pearson correlation coefficient and the Spearman rho, for measuring linear and nonlinear relationships between two continuous variables. In the case of measuring the linear relationship between a predictor and an outcome variable, simple linear regression analysis is conducted. These statistical concepts are illustrated by using a data set from published literature to assess a computed tomography-guided interventional technique. These statistical methods are important for exploring the relationships between variables and can be applied to many radiologic studies.
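
The two coefficients reviewed in this tutorial can be computed from first principles. The snippet below (plain Python, illustrative data) contrasts Pearson's r with Spearman's rho, which is simply Pearson's r applied to ranks; for a monotone but nonlinear relationship rho is exactly 1 while r is smaller:

```python
import math

def pearson(x, y):
    # Pearson product-moment correlation.
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    num = sum((a - mx) * (b - my) for a, b in zip(x, y))
    den = math.sqrt(sum((a - mx) ** 2 for a in x) *
                    sum((b - my) ** 2 for b in y))
    return num / den

def ranks(v):
    # Ranks starting at 1, with ties receiving the average rank.
    order = sorted(range(len(v)), key=lambda i: v[i])
    r = [0.0] * len(v)
    i = 0
    while i < len(v):
        j = i
        while j + 1 < len(v) and v[order[j + 1]] == v[order[i]]:
            j += 1
        avg = (i + j) / 2 + 1
        for k in range(i, j + 1):
            r[order[k]] = avg
        i = j + 1
    return r

def spearman(x, y):
    # Spearman's rho is the Pearson correlation of the ranks.
    return pearson(ranks(x), ranks(y))

x = [1, 2, 3, 4, 5]
y = [1, 4, 9, 16, 25]   # monotone but nonlinear
print(pearson(x, y), spearman(x, y))
```

In practice one would use a library routine (e.g. `scipy.stats.pearsonr` / `spearmanr`), but the hand-rolled version makes the rank-transformation relationship explicit.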

  9. Applications of the Simple Multi-Fluid Model to Correlations of the Vapor-Liquid Equilibrium of Refrigerant Mixtures Containing Carbon Dioxide

    Science.gov (United States)

    Akasaka, Ryo

This study presents a simple multi-fluid model for Helmholtz energy equations of state. The model contains only three parameters, whereas rigorous multi-fluid models developed for several industrially important mixtures usually have more than 10 parameters and coefficients. The model can therefore be applied to mixtures for which experimental data are limited. Vapor-liquid equilibria (VLE) of the following seven mixtures have been successfully correlated with the model: CO2 + difluoromethane (R-32), CO2 + trifluoromethane (R-23), CO2 + fluoromethane (R-41), CO2 + 1,1,1,2- tetrafluoroethane (R-134a), CO2 + pentafluoroethane (R-125), CO2 + 1,1-difluoroethane (R-152a), and CO2 + dimethyl ether (DME). The best currently available equations of state for the pure refrigerants were used for the correlations. For all mixtures, average deviations of calculated bubble-point pressures from experimental values are within 2%. The simple multi-fluid model will be helpful for the design and simulation of heat pumps and refrigeration systems using these mixtures as working fluids.

  10. Connecting single-stock assessment models through correlated survival

    DEFF Research Database (Denmark)

    Albertsen, Christoffer Moesgaard; Nielsen, Anders; Thygesen, Uffe Høgsbro

    2017-01-01

    times. We propose a simple alternative. In three case studies each with two stocks, we improve the single-stock models, as measured by Akaike information criterion, by adding correlation in the cohort survival. To limit the number of parameters, the correlations are parameterized through...... the corresponding partial correlations. We consider six models where the partial correlation matrix between stocks follows a band structure ranging from independent assessments to complex correlation structures. Further, a simulation study illustrates the importance of handling correlated data sufficiently...... by investigating the coverage of confidence intervals for estimated fishing mortality. The results presented will allow managers to evaluate stock statuses based on a more accurate evaluation of model output uncertainty. The methods are directly implementable for stocks with an analytical assessment and do...

  11. Correlations in Output and Overflow Traffic Processes in Simple Queues

    Directory of Open Access Journals (Sweden)

    Don McNickle

    2007-01-01

Full Text Available We consider some simple Markov and Erlang queues with limited storage space. Although the departure processes from some such systems are known to be Poisson, they actually consist of the superposition of two complex correlated processes, the overflow process and the output process. We measure the cross-correlation between the counting processes of these two streams. It turns out that this can be positive, negative, or even zero (without implying independence). The models suggest some general principles about how large these correlations are and when they are important. This may indicate when renewal or moment approximations to similar processes will be successful, and when they will not.
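
A minimal sketch of the kind of measurement described in this record: the simulation below (an illustrative model, not the authors' exact systems) runs an M/M/1/1 loss queue, counts served departures (output) and blocked arrivals (overflow) in fixed time windows, and computes the sample cross-correlation between the two counting processes:

```python
import math
import random

def simulate(lam=1.0, mu=1.0, horizon=20000.0, window=5.0, seed=1):
    # M/M/1/1 loss system: one server, no waiting room. Count served
    # departures (output) and blocked arrivals (overflow) per time window.
    rng = random.Random(seed)
    nwin = int(horizon / window)
    out, ovf = [0] * nwin, [0] * nwin
    t, busy, t_dep = 0.0, False, float("inf")
    while True:
        t += rng.expovariate(lam)              # next arrival epoch
        while t_dep <= t:                      # process pending departure first
            w = int(t_dep / window)
            if w >= nwin:
                return out, ovf
            out[w] += 1
            busy, t_dep = False, float("inf")
        if t >= horizon:
            return out, ovf
        if busy:
            ovf[int(t / window)] += 1          # arrival blocked -> overflow
        else:
            busy = True
            t_dep = t + rng.expovariate(mu)    # service completion time

def corr(a, b):
    # Sample cross-correlation between two equal-length count sequences.
    n = len(a)
    ma, mb = sum(a) / n, sum(b) / n
    num = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    den = math.sqrt(sum((x - ma) ** 2 for x in a) *
                    sum((y - mb) ** 2 for y in b))
    return num / den

out, ovf = simulate()
print(corr(out, ovf))
```

Varying `lam`, `mu`, and `window` shows how the sign and magnitude of the correlation shift with load and with the counting interval.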

  12. Correlation of Serum Zinc Level with Simple Febrile Seizures: A Hospital based Prospective Case Control Study

    Directory of Open Access Journals (Sweden)

    Imran Gattoo

    2015-04-01

Full Text Available Background: Febrile seizures are one of the most common neurological conditions of childhood. Zinc deficiency appears to be associated with an increased risk of febrile seizures. Aim: To estimate the serum zinc level in children with simple febrile seizures and to find the correlation between serum zinc level and simple febrile seizures. Materials and Methods: The study was a hospital-based prospective case-control study which included infants and children aged 6 months to 5 years, at the Post Graduate Department of Pediatrics, SMGS Hospital, GMC Jammu, northern India. A total of 200 infants and children fulfilling the inclusion criteria were included. Patients were divided into 100 cases (Group A, children with simple febrile seizure) and 100 controls (Group B, children with acute febrile illness without seizure). All patients were subjected to detailed history and thorough clinical examination followed by relevant investigations. Results: Our study had a slight male preponderance of 62% in cases and 58% in controls. The mean serum zinc level in cases was 61.53±15.87 μg/dl and in controls it was 71.90±18.50 μg/dl. Serum zinc level was found to be significantly lower in cases of simple febrile seizures as compared to controls, with a p value of

  13. Y-Scaling in a simple quark model

    International Nuclear Information System (INIS)

    Kumano, S.; Moniz, E.J.

    1988-01-01

A simple quark model is used to define a nuclear pair model, that is, two composite hadrons interacting only through quark interchange and bound in an overall potential. An ''equivalent'' hadron model is developed, displaying an effective hadron-hadron interaction which is strongly repulsive. We compare the effective hadron model results with the exact quark model observables in the kinematic region of large momentum transfer and small energy transfer. The nucleon response function in this y-scaling region is, within the traditional framework, sensitive to the nucleon momentum distribution at large momentum. We find a surprisingly small effect of hadron substructure. Furthermore, we find in our model that a simple parametrization of modified hadron size in the bound state, motivated by the bound quark momentum distribution, is not a useful way to correlate different observables.

  14. Modeling Simple Driving Tasks with a One-Boundary Diffusion Model

    Science.gov (United States)

    Ratcliff, Roger; Strayer, David

    2014-01-01

A one-boundary diffusion model was applied to the data from two experiments in which subjects were performing a simple simulated driving task. In the first experiment, the same subjects were tested on two driving tasks using a PC-based driving simulator and the psychomotor vigilance test (PVT). The diffusion model fit the response time (RT) distributions for each task and individual subject well. Model parameters were found to correlate across tasks, which suggests that common component processes were being tapped in the three tasks. The model was also fit to a distracted driving experiment of Cooper and Strayer (2008). Results showed that distraction altered performance by affecting the rate of evidence accumulation (drift rate) and/or increasing the boundary settings. This provides an interpretation of cognitive distraction whereby conversing on a cell phone diverts attention from the normal accumulation of information in the driving environment. PMID:24297620
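
The core of a one-boundary diffusion model is a single evidence accumulator drifting toward one absorbing threshold, with RT equal to the first-passage time plus a nondecision component. A minimal Euler-scheme simulation (illustrative parameter values, not the fitted ones from the paper):

```python
import random

def simulate_rt(drift, boundary, n=2000, dt=0.001, ter=0.3, seed=7):
    # One-boundary diffusion: evidence starts at 0 and accumulates with
    # rate `drift` plus Gaussian noise (diffusion coefficient 1) until it
    # crosses `boundary`; RT = decision time + nondecision time `ter` (s).
    rng = random.Random(seed)
    sd = dt ** 0.5
    rts = []
    for _ in range(n):
        x, t = 0.0, 0.0
        while x < boundary:
            x += drift * dt + rng.gauss(0.0, sd)
            t += dt
        rts.append(t + ter)
    return rts

rts = simulate_rt(drift=2.0, boundary=1.0)
print(sum(rts) / len(rts))
```

For a positive drift v the mean first-passage time is boundary/v (here 0.5 s), so the mean simulated RT should sit near 0.8 s; lowering the drift rate or raising the boundary lengthens and skews the RT distribution, which is how the model expresses distraction effects.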

  15. Characteristics and Properties of a Simple Linear Regression Model

    Directory of Open Access Journals (Sweden)

    Kowal Robert

    2016-12-01

Full Text Available A simple linear regression model is one of the pillars of classic econometrics. Despite the passage of time, it continues to attract interest both from the theoretical side and from the application side. One of the many fundamental questions in the model concerns determining its derived characteristics and studying their properties. The literature provides several classic solutions in that regard. In this paper, a completely new approach is proposed, based on the direct application of variance and its properties, resulting from the non-correlation of certain estimators with the mean; within this framework, some fundamental dependencies of the model characteristics are obtained in a much more compact manner. The apparatus allows for a simple, uniform, and intuitive demonstration of multiple dependencies and fundamental properties of the model. The results were obtained in a classic, traditional area, where everything, as it might seem, has already been thoroughly studied and discovered.
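
The basic characteristics of the simple linear regression model discussed here follow directly from variances and covariances: the slope is cov(x, y)/var(x), the fitted line passes through the point of means, and R² equals the squared correlation. A compact sketch (hypothetical data):

```python
def ols_simple(x, y):
    # Closed-form least-squares fit of y = intercept + slope * x.
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxx = sum((a - mx) ** 2 for a in x)
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
    slope = sxy / sxx                 # cov(x, y) / var(x)
    intercept = my - slope * mx       # line passes through (x-bar, y-bar)
    syy = sum((b - my) ** 2 for b in y)
    r2 = sxy ** 2 / (sxx * syy)       # coefficient of determination = r^2
    return slope, intercept, r2

print(ols_simple([1, 2, 3, 4], [2.1, 4.0, 6.2, 7.9]))
```

The same quantities are returned by standard library routines (e.g. `statistics.linear_regression` in Python 3.10+); the explicit formulas make the variance-based derivation visible.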

  16. Verification of simple illuminance based measures for indication of discomfort glare from windows

    DEFF Research Database (Denmark)

    Karlsen, Line Røseth; Heiselberg, Per Kvols; Bryn, Ida

    2015-01-01

    predictions of discomfort glare from windows already in the early design stage when decisions regarding the façade are taken. This study focus on verifying if simple illuminance based measures like vertical illuminance at eye level or horizontal illuminance at the desk are correlated with the perceived glare...... reported by 44 test subjects in a repeated measure design occupant survey and if the reported glare corresponds with the predictions from the simple Daylight Glare Probability (DGPs) model. Large individual variations were seen in the occupants’ assessment of glare in the present study. Yet, the results...... confirm that there is a statistically significant correlation between both vertical eye illuminance and horizontal illuminance at the desk and the occupants’ perception of glare in a perimeter zone office environment, which is promising evidence towards utilizing such simple measures for indication...

  17. Hadron structure in a simple model of quark/nuclear matter

    International Nuclear Information System (INIS)

    Horowitz, C.J.; Moniz, E.J.; Negele, J.W.

    1985-01-01

    We study a simple model for one-dimensional hadron matter with many of the essential features needed for examining the transition from nuclear to quark matter and the limitations of models based upon hadron rather than quark degrees of freedom. The dynamics are generated entirely by the quark confining force and exchange symmetry. Using Monte Carlo techniques, the ground-state energy, single-quark momentum distribution, and quark correlation function are calculated for uniform matter as a function of density. The quark confinement scale in the medium increases substantially with increasing density. This change is evident in the correlation function and momentum distribution, in qualitative agreement with the changes observed in deep-inelastic lepton scattering. Nevertheless, the ground-state energy is smooth throughout the transition to quark matter and is described remarkably well by an effective hadron theory based on a phenomenological hadron-hadron potential

  18. Comparison of Simple Versus Performance-Based Fall Prediction Models

    Directory of Open Access Journals (Sweden)

    Shekhar K. Gadkaree BS

    2015-05-01

Full Text Available Objective: To compare the predictive ability of standard falls prediction models based on physical performance assessments with more parsimonious prediction models based on self-reported data. Design: We developed a series of fall prediction models progressing in complexity and compared the area under the receiver operating characteristic curve (AUC) across models. Setting: National Health and Aging Trends Study (NHATS), which surveyed a nationally representative sample of Medicare enrollees (age ≥65) at baseline (Round 1: 2011-2012) and 1-year follow-up (Round 2: 2012-2013). Participants: In all, 6,056 community-dwelling individuals participated in Rounds 1 and 2 of NHATS. Measurements: Primary outcomes were 1-year incidence of "any fall" and "recurrent falls." Prediction models were compared and validated in development and validation sets, respectively. Results: A prediction model that included demographic information, self-reported problems with balance and coordination, and previous fall history was the most parsimonious model that optimized AUC for both any fall (AUC = 0.69, 95% confidence interval [CI] = [0.67, 0.71]) and recurrent falls (AUC = 0.77, 95% CI = [0.74, 0.79]) in the development set. Physical performance testing provided marginal additional predictive value. Conclusion: A simple clinical prediction model that does not include physical performance testing could facilitate routine, widespread falls risk screening in the ambulatory care setting.
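
The AUC used to compare these models has a simple probabilistic reading: it is the probability that a randomly chosen case receives a higher risk score than a randomly chosen non-case. A self-contained sketch with hypothetical risk scores (not the NHATS data):

```python
def auc(scores_pos, scores_neg):
    # AUC as the Mann-Whitney probability that a random positive outranks
    # a random negative; ties count as 0.5.
    wins = 0.0
    for p in scores_pos:
        for q in scores_neg:
            if p > q:
                wins += 1.0
            elif p == q:
                wins += 0.5
    return wins / (len(scores_pos) * len(scores_neg))

# Hypothetical predicted fall-risk scores.
fallers = [0.9, 0.8, 0.6, 0.55]
non_fallers = [0.7, 0.5, 0.4, 0.3, 0.2]
print(auc(fallers, non_fallers))
```

An AUC of 0.5 corresponds to a useless score and 1.0 to perfect separation, which is why the reported gains from adding performance testing (beyond 0.69 and 0.77) were judged marginal.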

  19. A simple method for identifying parameter correlations in partially observed linear dynamic models.

    Science.gov (United States)

    Li, Pu; Vu, Quoc Dong

    2015-12-14

Parameter estimation represents one of the most significant challenges in systems biology, because biological models commonly contain a large number of parameters among which there may be functional interrelationships, leading to the problem of non-identifiability. Although identifiability analysis has been extensively studied by analytical as well as numerical approaches, systematic methods for remedying practically non-identifiable models have rarely been investigated. We propose a simple method for identifying pairwise correlations and higher-order interrelationships of parameters in partially observed linear dynamic models. This is done by deriving the output sensitivity matrix and analyzing the linear dependencies of its columns. Consequently, analytical relations between the identifiability of the model parameters and the initial conditions as well as the input functions can be obtained. In the case of structural non-identifiability, identifiable combinations can be obtained by solving the resulting homogeneous linear equations. In the case of practical non-identifiability, experimental conditions (i.e. initial conditions and constant control signals) can be provided which are necessary for remedying the non-identifiability and for unique parameter estimation. Note that the approach does not consider noisy data. In this way, the practical non-identifiability issue, which is common for linear biological models, can be remedied. Several linear compartment models, including an insulin receptor dynamics model, are taken to illustrate the application of the proposed approach. Both structural and practical identifiability of partially observed linear dynamic models can be clarified by the proposed method. The result of this method provides important information for experimental design to remedy the practical non-identifiability if applicable. The derivation of the method is straightforward and thus the algorithm can be easily implemented into a

  20. Introduction of a simple-model-based land surface dataset for Europe

    Science.gov (United States)

    Orth, Rene; Seneviratne, Sonia I.

    2015-04-01

    Land surface hydrology can play a crucial role during extreme events such as droughts, floods and even heat waves. We introduce in this study a new hydrological dataset for Europe that consists of soil moisture, runoff and evapotranspiration (ET). It is derived with a simple water balance model (SWBM) forced with precipitation, temperature and net radiation. The SWBM dataset extends over the period 1984-2013 with a daily time step and 0.5° × 0.5° resolution. We employ a novel calibration approach, in which we consider 300 random parameter sets chosen from an observation-based range. Using several independent validation datasets representing soil moisture (or terrestrial water content), ET and streamflow, we identify the best performing parameter set and hence the new dataset. To illustrate its usefulness, the SWBM dataset is compared against several state-of-the-art datasets (ERA-Interim/Land, MERRA-Land, GLDAS-2-Noah, simulations of the Community Land Model Version 4), using all validation datasets as reference. For soil moisture dynamics it outperforms the benchmarks. Therefore the SWBM soil moisture dataset constitutes a reasonable alternative to sparse measurements, little validated model results, or proxy data such as precipitation indices. Also in terms of runoff the SWBM dataset performs well, whereas the evaluation of the SWBM ET dataset is overall satisfactory, but the dynamics are less well captured for this variable. This highlights the limitations of the dataset, as it is based on a simple model that uses uniform parameter values. Hence some processes impacting ET dynamics may not be captured, and quality issues may occur in regions with complex terrain. Even though the SWBM is well calibrated, it cannot replace more sophisticated models; but as their calibration is a complex task the present dataset may serve as a benchmark in future. In addition we investigate the sources of skill of the SWBM dataset and find that the parameter set has a similar
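
A bucket-type water balance of the kind underlying the SWBM can be sketched in a few lines: soil water is updated daily from precipitation, with runoff and ET each scaling with relative soil moisture. The parameter names and values below are illustrative assumptions, not the calibrated SWBM set:

```python
def swbm_step(w, precip, rnet, w_max=400.0, alpha=4.0, beta=0.8, gamma=0.5):
    # One daily step of a simple bucket water balance (all fluxes in mm/day,
    # storage in mm). Runoff and ET scale with relative soil moisture
    # (w / w_max); rnet is net radiation in mm water equivalent.
    frac = w / w_max
    runoff = precip * frac ** alpha          # wetter soil -> more runoff
    et = gamma * rnet * frac ** beta         # wetter soil -> more ET
    w_new = min(w_max, w + precip - runoff - et)
    return max(0.0, w_new), runoff, et

w = 200.0
for day in range(5):
    w, q, et = swbm_step(w, precip=3.0, rnet=4.0)
print(w)
```

Calibration, as described above, then amounts to searching parameter sets (here `w_max`, `alpha`, `beta`, `gamma`) against soil moisture, ET, and streamflow observations.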

  1. Boundary-layer transition prediction using a simplified correlation-based model

    Directory of Open Access Journals (Sweden)

    Xia Chenchao

    2016-02-01

Full Text Available This paper describes a simplified transition model based on the recently developed correlation-based γ-Reθt transition model. The transport equation for the transition momentum-thickness Reynolds number is eliminated for simplicity, and a new transition length function and critical Reynolds number correlation are proposed. The new model is implemented into an in-house computational fluid dynamics (CFD) code and validated for low- and high-speed flow cases, including a zero-pressure-gradient flat plate, airfoils, a hypersonic flat plate, and a double wedge. Comparisons between the simulation results and experimental data show that boundary-layer transition phenomena can be reasonably captured by the new model, which gives significant improvements over the fully laminar and fully turbulent results. Moreover, the new model has accuracy and applicability comparable to the original γ-Reθt model, while it takes only one transport equation, for the intermittency factor, and requires fewer correlations, which simplifies the original model greatly. Further studies, especially on separation-induced transition flows, are required for the improvement of the new model.

  2. A simple model for indentation creep

    Science.gov (United States)

    Ginder, Ryan S.; Nix, William D.; Pharr, George M.

    2018-03-01

    A simple model for indentation creep is developed that allows one to directly convert creep parameters measured in indentation tests to those observed in uniaxial tests through simple closed-form relationships. The model is based on the expansion of a spherical cavity in a power law creeping material modified to account for indentation loading in a manner similar to that developed by Johnson for elastic-plastic indentation (Johnson, 1970). Although only approximate in nature, the simple mathematical form of the new model makes it useful for general estimation purposes or in the development of other deformation models in which a simple closed-form expression for the indentation creep rate is desirable. Comparison to a more rigorous analysis which uses finite element simulation for numerical evaluation shows that the new model predicts uniaxial creep rates within a factor of 2.5, and usually much better than this, for materials creeping with stress exponents in the range 1 ≤ n ≤ 7. The predictive capabilities of the model are evaluated by comparing it to the more rigorous analysis and several sets of experimental data in which both the indentation and uniaxial creep behavior have been measured independently.

  3. Valid statistical approaches for analyzing sholl data: Mixed effects versus simple linear models.

    Science.gov (United States)

    Wilson, Machelle D; Sethi, Sunjay; Lein, Pamela J; Keil, Kimberly P

    2017-03-01

    The Sholl technique is widely used to quantify dendritic morphology. Data from such studies, which typically sample multiple neurons per animal, are often analyzed using simple linear models. However, simple linear models fail to account for intra-class correlation that occurs with clustered data, which can lead to faulty inferences. Mixed effects models account for intra-class correlation that occurs with clustered data; thus, these models more accurately estimate the standard deviation of the parameter estimate, which produces more accurate p-values. While mixed models are not new, their use in neuroscience has lagged behind their use in other disciplines. A review of the published literature illustrates common mistakes in analyses of Sholl data. Analysis of Sholl data collected from Golgi-stained pyramidal neurons in the hippocampus of male and female mice using both simple linear and mixed effects models demonstrates that the p-values and standard deviations obtained using the simple linear models are biased downwards and lead to erroneous rejection of the null hypothesis in some analyses. The mixed effects approach more accurately models the true variability in the data set, which leads to correct inference. Mixed effects models avoid faulty inference in Sholl analysis of data sampled from multiple neurons per animal by accounting for intra-class correlation. Given the widespread practice in neuroscience of obtaining multiple measurements per subject, there is a critical need to apply mixed effects models more widely. Copyright © 2017 Elsevier B.V. All rights reserved.
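
The downward bias described in this abstract can be demonstrated without any statistics library. The sketch below simulates clustered (animal/neuron) data and compares the naive standard error, which treats all neurons as independent, with a cluster-level standard error that respects the intra-class correlation (a deliberately simplified stand-in for a full mixed effects model; all parameter values are illustrative):

```python
import math
import random

def simulate_clustered(n_animals=8, n_neurons=20,
                       sd_animal=1.0, sd_neuron=0.5, seed=3):
    # Each animal contributes a shared random effect, so neurons from the
    # same animal are correlated (non-zero intra-class correlation).
    rng = random.Random(seed)
    data = []
    for _ in range(n_animals):
        mu_a = rng.gauss(0.0, sd_animal)
        data.append([mu_a + rng.gauss(0.0, sd_neuron)
                     for _ in range(n_neurons)])
    return data

def naive_se(data):
    # Treats every neuron as an independent observation (simple linear model).
    flat = [v for grp in data for v in grp]
    n = len(flat)
    m = sum(flat) / n
    var = sum((v - m) ** 2 for v in flat) / (n - 1)
    return math.sqrt(var / n)

def cluster_se(data):
    # One summary value (mean) per animal: respects the clustering.
    means = [sum(g) / len(g) for g in data]
    k = len(means)
    m = sum(means) / k
    var = sum((v - m) ** 2 for v in means) / (k - 1)
    return math.sqrt(var / k)

data = simulate_clustered()
print(naive_se(data), cluster_se(data))
```

The naive standard error comes out much smaller than the cluster-aware one, which is exactly the mechanism by which simple linear models on Sholl data produce downward-biased p-values; a mixed effects model (e.g. `statsmodels` `MixedLM`) generalizes the cluster-level calculation to full regression settings.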

  4. Comparing Free-Free and Shaker Table Model Correlation Methods Using Jim Beam

    Science.gov (United States)

    Ristow, James; Smith, Kenneth Wayne, Jr.; Johnson, Nathaniel; Kinney, Jackson

    2018-01-01

    Finite element model correlation as part of a spacecraft program has always been a challenge. For any NASA mission, the coupled system response of the spacecraft and launch vehicle can be determined analytically through a Coupled Loads Analysis (CLA), as it is not possible to test the spacecraft and launch vehicle coupled system before launch. The value of the CLA is highly dependent on the accuracy of the frequencies and mode shapes extracted from the spacecraft model. NASA standards require the spacecraft model used in the final Verification Loads Cycle to be correlated by either a modal test or by comparison of the model with Frequency Response Functions (FRFs) obtained during the environmental qualification test. Due to budgetary and time constraints, most programs opt to correlate the spacecraft dynamic model during the environmental qualification test, conducted on a large shaker table. For any model correlation effort, the key has always been finding a proper definition of the boundary conditions. This paper is a correlation case study to investigate the difference in responses of a simple structure using a free-free boundary, a fixed boundary on the shaker table, and a base-drive vibration test, all using identical instrumentation. The NAVCON Jim Beam test structure, featured in the IMAC round robin modal test of 2009, was selected as a simple, well recognized and well characterized structure to conduct this investigation. First, a free-free impact modal test of the Jim Beam was done as an experimental control. Second, the Jim Beam was mounted to a large 20,000 lbf shaker, and an impact modal test in this fixed configuration was conducted. Lastly, a vibration test of the Jim Beam was conducted on the shaker table. The free-free impact test, the fixed impact test, and the base-drive test were used to assess the effect of the shaker modes, evaluate the validity of fixed-base modeling assumptions, and compare final model correlation results between these

  5. Spatio-temporal environmental correlation and population variability in simple metacommunities.

    Directory of Open Access Journals (Sweden)

    Lasse Ruokolainen

    Full Text Available Natural populations experience environmental conditions that vary across space and over time. This variation is often correlated between localities depending on the geographical separation between them, and different species can respond to local environmental fluctuations similarly or differently, depending on their adaptation. How this emerging structure in environmental correlation (between-patches and between-species affects spatial community dynamics is an open question. This paper aims at a general understanding of the interactions between the environmental correlation structure and population dynamics in spatial networks of local communities (metacommunities, by studying simple two-patch, two-species systems. Three different pairs of interspecific interactions are considered: competition, consumer-resource interaction, and host-parasitoid interaction. While the results paint a relatively complex picture of the effect of environmental correlation, the interaction between environmental forcing, dispersal, and local interactions can be understood via two mechanisms. While increasing between-patch environmental correlation couples immigration and local densities (destabilising effect, the coupling between local populations under increased between-species environmental correlation can either amplify or dampen population fluctuations, depending on the patterns in density dependence. This work provides a unifying framework for modelling stochastic metacommunities, and forms a foundation for a better understanding of population responses to environmental fluctuations in natural systems.

  6. The Monash University Interactive Simple Climate Model

    Science.gov (United States)

    Dommenget, D.

    2013-12-01

    The Monash University interactive simple climate model is a web-based interface that allows students and the general public to explore the physical simulation of the climate system with a real global climate model. It is based on the Globally Resolved Energy Balance (GREB) model, a climate model published by Dommenget and Floeter [2011] in the peer-reviewed journal Climate Dynamics. The model simulates most of the main physical processes in the climate system in a very simplistic way and therefore allows very fast and simple climate model simulations on an ordinary PC. Despite its simplicity, the model simulates the climate response to external forcings, such as a doubling of the CO2 concentration, very realistically (similarly to state-of-the-art climate models). The Monash simple climate model web interface allows you to study the results of more than 2000 different model experiments in an interactive way, to work through a number of tutorials on the interactions of physical processes in the climate system, and to solve some puzzles. By switching physical processes ON and OFF, you can deconstruct the climate and learn how all the different processes interact to generate the observed climate, and how they interact to generate the IPCC-predicted climate change for an anthropogenic CO2 increase. The presentation will illustrate how this web-based tool works and what the possibilities for teaching students with it are.
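The energy-balance idea behind GREB can be illustrated in its most reduced form: a zero-dimensional balance between absorbed solar and emitted longwave radiation. This is a textbook sketch, not the GREB model itself; the effective emissivity standing in for the greenhouse effect is an illustrative assumption.

```python
# Zero-dimensional energy balance: S(1 - a)/4 = eps * sigma * T^4.
SIGMA = 5.670374419e-8  # Stefan-Boltzmann constant, W m^-2 K^-4

def equilibrium_temperature(solar=1361.0, albedo=0.3, emissivity=0.62):
    """Global-mean surface temperature at radiative equilibrium.
    An effective emissivity < 1 crudely mimics the greenhouse effect."""
    absorbed = solar * (1.0 - albedo) / 4.0
    return (absorbed / (emissivity * SIGMA)) ** 0.25
```

With emissivity 1 (no greenhouse effect) this gives roughly 255 K; lowering it toward 0.62 recovers something near the observed ~288 K. Switching individual processes on and off in this spirit is the kind of climate deconstruction the web interface supports.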

  7. Simple heat transfer correlations for turbulent tube flow

    Directory of Open Access Journals (Sweden)

    Taler Dawid

    2017-01-01

    Full Text Available The paper presents three power-type correlations of a simple form, which are valid over the Reynolds number range 3·10³ ≤ Re ≤ 10⁶ and for three different ranges of Prandtl number: 0.1 ≤ Pr ≤ 1.0, 1.0 < Pr ≤ 3.0, and 3.0 < Pr ≤ 10³. The heat transfer correlations developed in the paper were compared with experimental results available in the literature. The comparisons performed in the paper confirm the good accuracy of the proposed correlations. They are also much simpler than the relationship of Gnielinski, which is also widely used in heat transfer calculations.
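For flavour, a power-type correlation of the same simple form can be compared against the Gnielinski relationship. The classic Dittus-Boelter coefficients below are a stand-in for illustration; the paper's own fitted coefficients are not reproduced here.

```python
import math

def nu_power_type(re, pr):
    """Simple power-type form Nu = C * Re^a * Pr^b. The Dittus-Boelter
    coefficients used here are illustrative, not the paper's fits."""
    return 0.023 * re**0.8 * pr**0.4

def nu_gnielinski(re, pr):
    """Gnielinski correlation for turbulent tube flow."""
    f = (0.790 * math.log(re) - 1.64) ** -2  # Petukhov friction factor
    return (f / 8.0) * (re - 1000.0) * pr / (
        1.0 + 12.7 * math.sqrt(f / 8.0) * (pr ** (2.0 / 3.0) - 1.0))
```

At Re = 10⁵ and Pr = 3 the two forms agree to within about 15%, which illustrates why a well-fitted power law can be an attractive, much simpler alternative.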

  8. Characteristic analysis on UAV-MIMO channel based on normalized correlation matrix.

    Science.gov (United States)

    Gao, Xi jun; Chen, Zi li; Hu, Yong Jiang

    2014-01-01

    Based on the three-dimensional GBSBCM (geometrically based double bounce cylinder model) channel model of MIMO for unmanned aerial vehicle (UAV), the simple form of UAV space-time-frequency channel correlation function which includes the LOS, SPE, and DIF components is presented. By the methods of channel matrix decomposition and coefficient normalization, the analytic formula of UAV-MIMO normalized correlation matrix is deduced. This formula can be used directly to analyze the condition number of UAV-MIMO channel matrix, the channel capacity, and other characteristic parameters. The simulation results show that this channel correlation matrix can be applied to describe the changes of UAV-MIMO channel characteristics under different parameter settings comprehensively. This analysis method provides a theoretical basis for improving the transmission performance of UAV-MIMO channel. The development of MIMO technology shows practical application value in the field of UAV communication.
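The normalization and condition-number steps mentioned above can be sketched generically. The 2×2 matrix below is a toy example, not the GBSBCM space-time-frequency correlation function.

```python
import numpy as np

def normalize_correlation(R):
    """Scale a Hermitian channel covariance matrix so every diagonal
    entry equals 1, giving the normalized correlation matrix."""
    d = np.sqrt(np.real(np.diag(R)))
    return R / np.outer(d, d)

R = np.array([[2.0, 1.8],
              [1.8, 2.0]])        # toy 2x2 spatial covariance
Rn = normalize_correlation(R)     # diagonal becomes exactly 1
kappa = np.linalg.cond(Rn)        # large kappa -> strongly correlated,
                                  # poorly conditioned MIMO channel
```

A condition number near 1 indicates nearly independent subchannels and high capacity; a large value, as here, signals strong spatial correlation.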

  9. A Simple Accounting-based Valuation Model for the Debt Tax Shield

    Directory of Open Access Journals (Sweden)

    Andreas Scholze

    2010-05-01

    Full Text Available This paper describes a simple way to integrate the debt tax shield into an accounting-based valuation model. The market value of equity is determined by forecasting residual operating income, which is calculated by charging operating income for the operating assets at a required return that accounts for the tax benefit that comes from borrowing to raise cash for the operations. The model assumes that the firm maintains a deterministic financial leverage ratio, which tends to converge quickly to typical steady-state levels over time. From a practical point of view, this characteristic is of particular help, because it allows a continuing value calculation at the end of a short forecast period.
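The valuation mechanics can be sketched as a standard residual-income calculation. This is a generic sketch: the paper's specific contribution, charging operating assets at a leverage-adjusted required return that embeds the debt tax shield, is represented here only by the choice of `required_return`.

```python
def equity_value(book_value, residual_incomes, required_return, growth=0.0):
    """Equity value = book value + PV of forecast residual (operating)
    incomes + PV of a continuing value growing at rate `growth`."""
    r = required_return
    pv_forecast = sum(ri / (1 + r) ** t
                      for t, ri in enumerate(residual_incomes, start=1))
    T = len(residual_incomes)
    continuing = residual_incomes[-1] * (1 + growth) / (r - growth)
    return book_value + pv_forecast + continuing / (1 + r) ** T
```

The quick convergence of the leverage ratio to a steady state is what makes a short forecast horizon plus a continuing-value term workable in practice.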

  10. Learning from correlated patterns by simple perceptrons

    Energy Technology Data Exchange (ETDEWEB)

    Shinzato, Takashi; Kabashima, Yoshiyuki [Department of Computational Intelligence and Systems Science, Tokyo Institute of Technology, Yokohama 226-8502 (Japan)], E-mail: shinzato@sp.dis.titech.ac.jp, E-mail: kaba@dis.titech.ac.jp

    2009-01-09

    Learning behavior of simple perceptrons is analyzed for a teacher-student scenario in which output labels are provided by a teacher network for a set of possibly correlated input patterns, and such that the teacher and student networks are of the same type. Our main concern is the effect of statistical correlations among the input patterns on learning performance. For this purpose, we extend to the teacher-student scenario a methodology for analyzing randomly labeled patterns recently developed in Shinzato and Kabashima 2008 J. Phys. A: Math. Theor. 41 324013. This methodology is used for analyzing situations in which orthogonality of the input patterns is enhanced in order to optimize the learning performance.
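A minimal teacher-student experiment of the kind analyzed above can be set up as follows. This sketch uses i.i.d. rather than correlated patterns and the plain perceptron learning rule; the dimensions and epoch count are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
N, P = 50, 400                         # input dimension, pattern count

teacher = rng.standard_normal(N)        # teacher weight vector
X = rng.standard_normal((P, N))         # input patterns (i.i.d. here)
y = np.where(X @ teacher >= 0, 1, -1)   # labels provided by the teacher

w = np.zeros(N)                         # student perceptron
for _ in range(200):                    # perceptron learning rule
    for x, t in zip(X, y):
        if t * (x @ w) <= 0:            # misclassified or on boundary
            w += t * x

# teacher-student overlap measures generalization quality
overlap = (w @ teacher) / (np.linalg.norm(w) * np.linalg.norm(teacher))
accuracy = np.mean(np.where(X @ w >= 0, 1, -1) == y)
```

Replacing `X` with patterns whose rows are made more nearly orthogonal is the kind of correlation-structure manipulation whose effect on learning performance the paper analyzes.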

  11. Learning from correlated patterns by simple perceptrons

    Science.gov (United States)

    Shinzato, Takashi; Kabashima, Yoshiyuki

    2009-01-01

    Learning behavior of simple perceptrons is analyzed for a teacher-student scenario in which output labels are provided by a teacher network for a set of possibly correlated input patterns, and such that the teacher and student networks are of the same type. Our main concern is the effect of statistical correlations among the input patterns on learning performance. For this purpose, we extend to the teacher-student scenario a methodology for analyzing randomly labeled patterns recently developed in Shinzato and Kabashima 2008 J. Phys. A: Math. Theor. 41 324013. This methodology is used for analyzing situations in which orthogonality of the input patterns is enhanced in order to optimize the learning performance.

  12. Learning from correlated patterns by simple perceptrons

    International Nuclear Information System (INIS)

    Shinzato, Takashi; Kabashima, Yoshiyuki

    2009-01-01

    Learning behavior of simple perceptrons is analyzed for a teacher-student scenario in which output labels are provided by a teacher network for a set of possibly correlated input patterns, and such that the teacher and student networks are of the same type. Our main concern is the effect of statistical correlations among the input patterns on learning performance. For this purpose, we extend to the teacher-student scenario a methodology for analyzing randomly labeled patterns recently developed in Shinzato and Kabashima 2008 J. Phys. A: Math. Theor. 41 324013. This methodology is used for analyzing situations in which orthogonality of the input patterns is enhanced in order to optimize the learning performance

  13. Simple model of the arms race

    International Nuclear Information System (INIS)

    Zane, L.I.

    1982-01-01

    A simple model of a two-party arms race is developed based on the principle that the race will continue so long as either side can unleash an effective first strike against the other side. The model is used to examine how secrecy, the ABM, MIRV-ing, and an MX system affect the arms race
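The first-strike stopping criterion lends itself to a toy sketch. The numbers and the build-up rule below are purely illustrative; the paper's treatment of secrecy, the ABM, MIRV-ing, and MX is not modelled here.

```python
def arms_race(a, b, kill_prob=0.5, build=10, max_rounds=10_000):
    """Both sides keep building while either can mount an effective
    first strike, i.e. while one side's arsenal times its per-warhead
    kill probability still covers the other side's arsenal."""
    rounds = 0
    while (a * kill_prob >= b or b * kill_prob >= a) and rounds < max_rounds:
        a += build
        b += build
        rounds += 1
    return a, b, rounds
```

Because equal build-ups shrink the relative asymmetry, a race starting at (100, 40) stalls after three rounds, while a symmetric start never races at all.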

  14. A Simple Model for Complex Fabrication of MEMS based Pressure Sensor: A Challenging Approach

    Directory of Open Access Journals (Sweden)

    Himani SHARMA

    2010-08-01

    Full Text Available In this paper we present a simple model for the complex fabrication of a MEMS-based absolute micro pressure sensor. This kind of modeling is extremely useful for determining the complexity of the fabrication steps and provides complete information about the process sequence to be followed during manufacturing. Therefore, the need for test iterations decreases, and cost and time can be reduced significantly. Using the DevEdit tool (part of the SILVACO tool suite), a behavioral model of the pressure sensor has been presented and implemented.

  15. Swarming behavior of simple model squirmers

    International Nuclear Information System (INIS)

    Thutupalli, Shashi; Seemann, Ralf; Herminghaus, Stephan

    2011-01-01

    We have studied experimentally the collective behavior of self-propelling liquid droplets, which closely mimic the locomotion of some protozoal organisms, the so-called squirmers. For the sake of simplicity, we concentrate on quasi-two-dimensional (2D) settings, although our swimmers provide a fully 3D propulsion scheme. At an areal density of 0.46, we find strong polar correlation of the locomotion velocities of neighboring droplets, which decays over less than one droplet diameter. When the areal density is increased to 0.78, distinct peaks show up in the angular correlation function, which point to the formation of ordered rafts. This shows that pronounced textures, beyond what has been seen in simulations so far, may show up in crowds of simple model squirmers, despite the simplicity of their (purely physical) mutual interaction.

  16. Swarming behavior of simple model squirmers

    Energy Technology Data Exchange (ETDEWEB)

    Thutupalli, Shashi; Seemann, Ralf; Herminghaus, Stephan, E-mail: shashi.thutupalli@ds.mpg.de, E-mail: stephan.herminghaus@ds.mpg.de [Max Planck Institute for Dynamics and Self-Organization, Bunsenstrasse 10, 37073 Goettingen (Germany)

    2011-07-15

    We have studied experimentally the collective behavior of self-propelling liquid droplets, which closely mimic the locomotion of some protozoal organisms, the so-called squirmers. For the sake of simplicity, we concentrate on quasi-two-dimensional (2D) settings, although our swimmers provide a fully 3D propulsion scheme. At an areal density of 0.46, we find strong polar correlation of the locomotion velocities of neighboring droplets, which decays over less than one droplet diameter. When the areal density is increased to 0.78, distinct peaks show up in the angular correlation function, which point to the formation of ordered rafts. This shows that pronounced textures, beyond what has been seen in simulations so far, may show up in crowds of simple model squirmers, despite the simplicity of their (purely physical) mutual interaction.

  17. Security of statistical data bases: invasion of privacy through attribute correlational modeling

    Energy Technology Data Exchange (ETDEWEB)

    Palley, M.A.

    1985-01-01

    This study develops, defines, and applies a statistical technique for the compromise of confidential information in a statistical data base. Attribute Correlational Modeling (ACM) recognizes that the information contained in a statistical data base represents real world statistical phenomena. As such, ACM assumes correlational behavior among the database attributes. ACM proceeds to compromise confidential information through creation of a regression model, where the confidential attribute is treated as the dependent variable. The typical statistical data base may preclude the direct application of regression. In this scenario, the research introduces the notion of a synthetic data base, created through legitimate queries of the actual data base, and through proportional random variation of responses to these queries. The synthetic data base is constructed to resemble the actual data base as closely as possible in a statistical sense. ACM then applies regression analysis to the synthetic data base, and utilizes the derived model to estimate confidential information in the actual database.

  18. Data Fusion of Gridded Snow Products Enhanced with Terrain Covariates and a Simple Snow Model

    Science.gov (United States)

    Snauffer, A. M.; Hsieh, W. W.; Cannon, A. J.

    2017-12-01

    Hydrologic planning requires accurate estimates of regional snow water equivalent (SWE), particularly for areas with hydrologic regimes dominated by spring melt. While numerous gridded data products provide such estimates, accurate representations are particularly challenging under conditions of mountainous terrain, heavy forest cover and large snow accumulations, contexts which in many ways define the province of British Columbia (BC), Canada. One promising avenue for improving SWE estimates is a data fusion approach which combines field observations with gridded SWE products and relevant covariates. A base artificial neural network (ANN) was constructed using three of the best performing gridded SWE products over BC (ERA-Interim/Land, MERRA and GLDAS-2) and simple location and time covariates. This base ANN was then enhanced to include terrain covariates (slope, aspect and Terrain Roughness Index, TRI) as well as a simple 1-layer energy balance snow model driven by gridded bias-corrected ANUSPLIN temperature and precipitation values. The ANN enhanced with all aforementioned covariates performed better than the base ANN, but most of the skill improvement was attributable to the snow model, with very little contribution from the terrain covariates. The enhanced ANN improved station mean absolute error (MAE) by an average of 53% relative to the composing gridded products over the province. The interannual peak SWE correlation coefficient was found to be 0.78, an improvement of 0.05 to 0.18 over the composing products. This nonlinear approach outperformed a comparable multiple linear regression (MLR) model by 22% in MAE and 0.04 in interannual correlation. The enhanced ANN has also been shown to estimate SWE better than the Variable Infiltration Capacity (VIC) hydrologic model calibrated and run for four BC watersheds, improving MAE by 22% and correlation by 0.05.
The performance improvements of the enhanced ANN are statistically significant at the 5% level across the province and

  19. Correlators in tensor models from character calculus

    Directory of Open Access Journals (Sweden)

    A. Mironov

    2017-11-01

    Full Text Available We explain how the calculations of [20], which provided the first evidence for non-trivial structures of Gaussian correlators in tensor models, are efficiently performed with the help of the (Hurwitz) character calculus. This emphasizes a close similarity between technical methods in matrix and tensor models and supports a hope to understand the emerging structures in very similar terms. We claim that the 2m-fold Gaussian correlators of rank-r tensors are given by r-linear combinations of dimensions with the Young diagrams of size m. The coefficients are made from the characters of the symmetric group Sm, and their exact form depends on the choice of the correlator and on the symmetries of the model. As the simplest application of this new knowledge, we provide simple expressions for correlators in the Aristotelian tensor model as tri-linear combinations of dimensions.

  20. Correlation between the model accuracy and model-based SOC estimation

    International Nuclear Information System (INIS)

    Wang, Qianqian; Wang, Jiao; Zhao, Pengju; Kang, Jianqiang; Yan, Few; Du, Changqing

    2017-01-01

    State-of-charge (SOC) estimation is a core technology for battery management systems. Considerable progress has been achieved in the study of SOC estimation algorithms, especially algorithms based on the Kalman filter, to meet the increasing demand for model-based battery management systems. The Kalman filter weakens the influence of white noise and initial error during SOC estimation but cannot eliminate the inherent error of the battery model itself. As such, the accuracy of SOC estimation is directly related to the accuracy of the battery model. Thus far, the quantitative relationship between model accuracy and model-based SOC estimation remains unknown. This study summarizes three equivalent-circuit lithium-ion battery models, namely, the Thevenin, PNGV, and DP models. The model parameters are identified through a hybrid pulse power characterization test. The three models are evaluated, and SOC estimation conducted by the EKF-Ah method under three operating conditions is quantitatively studied. The regression and correlation of the standard deviation and normalized RMSE are studied and compared between the model error and the SOC estimation error. These parameters exhibit a strong linear relationship. Results indicate that the model accuracy affects the SOC estimation accuracy mainly in two ways: the dispersion of the frequency distribution of the error and the overall level of the error. On the basis of the relationship between model error and SOC estimation error, our study provides a strategy for selecting a suitable cell model to meet the requirements of SOC precision using the Kalman filter.
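The Thevenin model referred to above has a simple discrete-time form that a Kalman filter can use as its state equation. The parameter values in the example are hypothetical; the paper's identified parameters are not reproduced.

```python
import math

def thevenin_step(soc, u_rc, current, dt, capacity_ah, r0, r1, c1, ocv):
    """One step of a first-order Thevenin battery model.
    current > 0 denotes discharge; ocv maps SOC to open-circuit voltage.
    Returns (soc, u_rc, terminal_voltage)."""
    decay = math.exp(-dt / (r1 * c1))
    u_rc = decay * u_rc + r1 * (1.0 - decay) * current  # RC polarization
    soc = soc - current * dt / (capacity_ah * 3600.0)   # coulomb counting
    v_terminal = ocv(soc) - u_rc - r0 * current
    return soc, u_rc, v_terminal
```

An EKF-Ah scheme wraps this state update with a measurement update on the terminal voltage; the abstract's point is that the residual error of the circuit model, not the filter, bounds the achievable SOC accuracy.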

  1. Combinatorial structures to modeling simple games and applications

    Science.gov (United States)

    Molinero, Xavier

    2017-09-01

    We connect three different topics: combinatorial structures, game theory and chemistry. In particular, we establish the bases to represent some simple games, defined as influence games, and molecules, defined from atoms, by using combinatorial structures. First, we characterize simple games as influence games using influence graphs. This lets us model simple games as combinatorial structures (from the viewpoint of structures or graphs). Second, we formally define molecules as combinations of atoms. This lets us model molecules as combinatorial structures (from the viewpoint of combinations). It remains open to generate such combinatorial structures using specific techniques such as genetic algorithms, (meta-)heuristic algorithms and parallel programming, among others.

  2. Pseudoclassical approach to electron and ion density correlations in simple liquid metals

    International Nuclear Information System (INIS)

    Vericat, F.; Tosi, M.P.; Pastore, G.

    1986-04-01

    Electron-electron and electron-ion structural correlations in simple liquid metals are treated by using effective pair potentials to incorporate quantal effects into a pseudoclassical description of the electron fluid. An effective pair potential between simultaneous electron density fluctuations is first constructed from known properties of the degenerate jellium model, which are the plasmon sum rule, the Kimball-Niklasson relation and Yasuhara's values of the electron pair distribution function at contact. An analytic expression is thereby obtained in the Debye-Hueckel approximation for the electronic structure factor in jellium over a range of density appropriate to metals, with results which compare favourably with those of fully quantal evaluations. A simple pseudoclassical model is then set up for a liquid metal: this involves a model of charged hard spheres for the ion-ion potential and an empty core model for the electron-ion potential, the Coulombic tails being scaled as required by the relation between the long-wavelength partial structure factors and the isothermal compressibility of the metal. The model is solved analytically by a pseudoclassical linear response treatment of the electron-ion coupling and numerical results are reported for partial structure factors in liquid sodium and liquid beryllium. Contact is made for the latter system with data on the electron-electron structure factor in the crystal from inelastic X-ray scattering experiments of Eisenberger, Marra and Brown. (author)

  3. A simple dynamic rising nuclear cloud based model of ground radioactive fallout for atmospheric nuclear explosion

    International Nuclear Information System (INIS)

    Zheng Yi

    2008-01-01

    A simple model based on a dynamically rising nuclear cloud was presented for predicting radioactive fallout from an atmospheric nuclear explosion. The deposition of particles and the change of the initial cloud radius with time before cloud stabilization are considered. Large-scale relative diffusion theory is used after cloud stabilization. The model is considered reasonable and dependable in comparison with four U.S. nuclear test cases and with DELFIC model results. (authors)

  4. Modeling Surface Climate in US Cities Using Simple Biosphere Model Sib2

    Science.gov (United States)

    Zhang, Ping; Bounoua, Lahouari; Thome, Kurtis; Wolfe, Robert; Imhoff, Marc

    2015-01-01

    We combine Landsat- and Moderate Resolution Imaging Spectroradiometer (MODIS)-based products in the Simple Biosphere model (SiB2) to assess the effects of urbanized land on the continental US (CONUS) surface climate. Using the National Land Cover Database (NLCD) Impervious Surface Area (ISA), we define more than 300 urban settlements and their surrounding suburban and rural areas over the CONUS. The SiB2-modeled Gross Primary Production (GPP) over the CONUS of 7.10 PgC (1 PgC = 10¹⁵ grams of carbon) is comparable to the MODIS improved GPP of 6.29 PgC. At the state level, SiB2 GPP is highly correlated with MODIS GPP, with a correlation coefficient of 0.94. An increasing horizontal GPP gradient is shown from the urban core out to the rural area, with rural areas fixing, on average, 30% more GPP than urban areas. Cities built in forested biomes have a stronger urban heat island (UHI) magnitude than those built in short vegetation with low biomass. Mediterranean-climate cities have a stronger UHI in the wet season than in the dry season. Our results also show that for urban areas built within forests, 39% of the precipitation is discharged as surface runoff during summer, versus 23% in rural areas.

  5. SimpleBox 4.0: Improving the model while keeping it simple….

    Science.gov (United States)

    Hollander, Anne; Schoorl, Marian; van de Meent, Dik

    2016-04-01

    Chemical behavior in the environment is often modeled with multimedia fate models. SimpleBox is one frequently used multimedia fate model, first developed in 1986. Since then, two updated versions have been published. Based on recent scientific developments and experience with SimpleBox 3.0, a new version of SimpleBox was developed and is made public here: SimpleBox 4.0. In this new model, eight major changes were implemented: removal of the local scale and of the vegetation compartments, addition of lake compartments and deep ocean compartments (including the thermohaline circulation), implementation of intermittent rain instead of drizzle and of depth-dependent soil concentrations, and adjustment of the partitioning behavior of organic acids and bases as well as of the value for the enthalpy of vaporization. In this paper, the effects of the model changes in SimpleBox 4.0 on the predicted steady-state concentrations of chemical substances were explored for different substance groups (neutral organic substances, acids, bases, metals) in a standard emission scenario. In general, the largest differences between the predicted concentrations in the new and the old model are caused by the implementation of layered ocean compartments. The vegetation compartments and the local scale, which caused undesirably high model complexity, were removed to increase the simplicity and user-friendliness of the model. Copyright © 2016 Elsevier Ltd. All rights reserved.

  6. Deep Correlated Holistic Metric Learning for Sketch-Based 3D Shape Retrieval.

    Science.gov (United States)

    Dai, Guoxian; Xie, Jin; Fang, Yi

    2018-07-01

    How to effectively retrieve desired 3D models with simple queries is a long-standing problem in the computer vision community. The model-based approach is quite straightforward but nontrivial, since people do not always have the desired 3D query model at hand. Recently, wide-screen electronic devices have become prevalent in our daily lives, which makes sketch-based 3D shape retrieval a promising candidate due to its simplicity and efficiency. The main challenge of the sketch-based approach is the huge modality gap between sketch and 3D shape. In this paper, we propose a novel deep correlated holistic metric learning (DCHML) method to mitigate the discrepancy between the sketch and 3D shape domains. The proposed DCHML trains two distinct deep neural networks (one for each domain) jointly, learning two deep nonlinear transformations that map features from both domains into a new feature space. The proposed loss, comprising a discriminative loss and a correlation loss, aims to increase the discrimination of features within each domain as well as the correlation between different domains. In the new feature space, the discriminative loss minimizes the intra-class distance of the deep transformed features and maximizes the inter-class distance of the deep transformed features to a large margin within each domain, while the correlation loss focuses on mitigating the distribution discrepancy across different domains. Different from existing deep metric learning methods with loss only at the output layer, the proposed DCHML is trained with loss at both the hidden layer and the output layer to further improve performance by encouraging features in the hidden layer to also have the desired properties. Our proposed method is evaluated on three benchmarks, including the 3D Shape Retrieval Contest 2013, 2014, and 2016 benchmarks, and the experimental results demonstrate its superiority over state-of-the-art methods.

  7. Locally Simple Models Construction: Methodology and Practice

    Directory of Open Access Journals (Sweden)

    I. A. Kazakov

    2017-12-01

    Full Text Available One of the most notable trends associated with the Fourth Industrial Revolution is a significant strengthening of the role played by semantic methods. They are engaged in artificial intelligence, knowledge mining in huge flows of big data, robotization, and the internet of things. Smart contracts can also be mentioned here, although the 'intelligence' of smart contracts still needs to be seriously elaborated. These trends should inevitably lead to an increased role for logical methods working with semantics, and significantly expand the scope of their application in practice. However, there are a number of problems that hinder this process. We are developing an approach which makes the application of logical modeling efficient in some important areas. The approach is based on the concept of locally simple models and is primarily focused on solving tasks in the management of enterprises, organizations, and governing bodies. The most important feature of locally simple models is their ability to replace software systems. Replacing programming by modeling gives huge advantages; for instance, it dramatically reduces development and support costs. Modeling, unlike programming, preserves the explicit semantics of models, allowing integration with artificial intelligence and robots. In addition, models are much more understandable to non-specialists than programs. In this paper we propose an implementation of the concept of locally simple modeling on the basis of so-called document models, which we have developed earlier. It is shown that locally simple modeling is realized through document models with finite submodel coverages. In the second part of the paper, an example of using document models to solve a management problem of real complexity is demonstrated.

  8. Pair correlation function decay in models of simple fluids that contain dispersion interactions.

    Science.gov (United States)

    Evans, R; Henderson, J R

    2009-11-25

    We investigate the intermediate- and longest-range decay of the total pair correlation function h(r) in model fluids where the inter-particle potential decays as -r⁻⁶, as is appropriate to real fluids in which dispersion forces govern the attraction between particles. It is well known that such interactions give rise to a term in q³ in the expansion of ĉ(q), the Fourier transform of the direct correlation function. Here we show that the presence of the r⁻⁶ tail changes significantly the analytic structure of ĉ(q) from that found in models where the inter-particle potential is short ranged. In particular, the pure imaginary pole at q = iα₀, which generates monotonic-exponential decay of rh(r) in the short-ranged case, is replaced by a complex (pseudo-exponential) pole at q = iα₀ + α₁ whose real part α₁ is negative and generally very small in magnitude. Near the critical point α₁ ∼ -α₀², and we show how classical Ornstein-Zernike behaviour of the pair correlation function is recovered on approaching the mean-field critical point. Explicit calculations, based on the random phase approximation, enable us to demonstrate the accuracy of asymptotic formulae for h(r) in all regions of the phase diagram and to determine a pseudo-Fisher-Widom (pFW) line. On the high-density side of this line, intermediate-range decay of rh(r) is exponentially damped-oscillatory and the ultimate long-range decay is power-law, proportional to r⁻⁶, whereas on the low-density side this damped-oscillatory decay is sub-dominant to both monotonic-exponential and power-law decay. Earlier analyses did not identify the pseudo-exponential pole and therefore the existence of the pFW line. Our results enable us to write down the generic wetting potential for a 'real' fluid exhibiting both short-ranged and dispersion interactions. The monotonic-exponential decay of correlations associated with the pseudo-exponential pole introduces additional terms into
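The competing asymptotic decay modes discussed above can be summarised schematically, with amplitudes and phases left unspecified:

```latex
h(r) \sim
\begin{cases}
  \dfrac{A}{r}\, e^{-\alpha_0 r}, & \text{monotonic exponential (pure imaginary pole)},\\[4pt]
  \dfrac{\tilde{A}}{r}\, e^{-\tilde{\alpha}_0 r}\cos(\tilde{\alpha}_1 r - \theta), & \text{exponentially damped oscillatory (complex pole)},\\[4pt]
  B\, r^{-6}, & \text{power law from the dispersion tail}.
\end{cases}
```

Which mode dominates at intermediate range depends on which side of the pseudo-Fisher-Widom line the state point lies; the power law always wins at sufficiently large r.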

  9. Comparison of co-expression measures: mutual information, correlation, and model based indices.

    Science.gov (United States)

    Song, Lin; Langfelder, Peter; Horvath, Steve

    2012-12-09

    Co-expression measures are often used to define networks among genes. Mutual information (MI) is often used as a generalized correlation measure. It is not clear how much MI adds beyond standard (robust) correlation measures or regression model based association measures. Further, it is important to assess what transformations of these and other co-expression measures lead to biologically meaningful modules (clusters of genes). We provide a comprehensive comparison between mutual information and several correlation measures in 8 empirical data sets and in simulations. We also study different approaches for transforming an adjacency matrix, e.g. using the topological overlap measure. Overall, we confirm close relationships between MI and correlation in all data sets which reflects the fact that most gene pairs satisfy linear or monotonic relationships. We discuss rare situations when the two measures disagree. We also compare correlation and MI based approaches when it comes to defining co-expression network modules. We show that a robust measure of correlation (the biweight midcorrelation transformed via the topological overlap transformation) leads to modules that are superior to MI based modules and maximal information coefficient (MIC) based modules in terms of gene ontology enrichment. We present a function that relates correlation to mutual information which can be used to approximate the mutual information from the corresponding correlation coefficient. We propose the use of polynomial or spline regression models as an alternative to MI for capturing non-linear relationships between quantitative variables. The biweight midcorrelation outperforms MI in terms of elucidating gene pairwise relationships. Coupled with the topological overlap matrix transformation, it often leads to more significantly enriched co-expression modules. Spline and polynomial networks form attractive alternatives to MI in case of non-linear relationships. Our results indicate that MI
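The abstract mentions a function relating correlation to mutual information. For jointly Gaussian variables this relationship is exact and closed-form; the paper's empirical approximation function for general expression data is not reproduced here.

```python
import math

def gaussian_mutual_information(rho):
    """Mutual information (in nats) between two jointly Gaussian
    variables with Pearson correlation rho: I = -(1/2) ln(1 - rho^2)."""
    if not -1.0 < rho < 1.0:
        raise ValueError("rho must lie strictly in (-1, 1)")
    return -0.5 * math.log(1.0 - rho * rho)
```

MI grows monotonically with |ρ| and diverges as |ρ| → 1, which is why, for the near-linear gene pair relationships that dominate these data sets, MI ranks pairs much as correlation does.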

  10. Complex versus simple models: ion-channel cardiac toxicity prediction.

    Science.gov (United States)

    Mistry, Hitesh B

    2018-01-01

    There is growing interest in applying detailed mathematical models of the heart to ion-channel-related cardiac toxicity prediction. However, there is debate as to whether such complex models are required. Here the predictive performance of two established large-scale biophysical cardiac models was assessed against that of a simple linear model, Bnet. Three ion-channel data-sets were extracted from the literature. Each compound was assigned a cardiac risk category using two different classification schemes based on information within CredibleMeds. The predictive performance of each model within each data-set for each classification scheme was assessed via leave-one-out cross validation. Overall, the Bnet model performed as well as the leading cardiac models on two of the data-sets and outperformed both cardiac models on the remaining one. These results highlight the importance of benchmarking complex against simple models, and also encourage the development of simple models.
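The abstract does not give the Bnet formula; the sketch below assumes a commonly cited form (block of the repolarising outward current minus block of inward currents) and uses entirely made-up compound data, purely to illustrate a leave-one-out evaluation of a one-dimensional linear score:

```python
def bnet(block_inward, block_outward):
    """Net block of repolarising current: outward-current block minus
    inward-current block (assumed form of the Bnet idea, not the paper's
    exact equation)."""
    return block_outward - block_inward

# hypothetical compounds: (outward/hERG block, inward block, risk label)
data = [
    (0.60, 0.05, 1),   # high-risk example (made up)
    (0.55, 0.10, 1),
    (0.20, 0.25, 0),   # low-risk example (made up)
    (0.10, 0.30, 0),
]

# leave-one-out cross validation: fit a threshold on the training folds,
# then classify the held-out compound by its Bnet score
correct = 0
for i, (out_b, in_b, label) in enumerate(data):
    train = [d for j, d in enumerate(data) if j != i]
    s0 = [bnet(ib, ob) for ob, ib, lab in train if lab == 0]
    s1 = [bnet(ib, ob) for ob, ib, lab in train if lab == 1]
    threshold = (max(s0) + min(s1)) / 2.0   # midpoint between the classes
    pred = int(bnet(in_b, out_b) > threshold)
    correct += int(pred == label)
print(f"LOO accuracy: {correct}/{len(data)}")
```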

  11. Complex versus simple models: ion-channel cardiac toxicity prediction

    Directory of Open Access Journals (Sweden)

    Hitesh B. Mistry

    2018-02-01

    There is growing interest in applying detailed mathematical models of the heart to ion-channel-related cardiac toxicity prediction. However, there is debate as to whether such complex models are required. Here the predictive performance of two established large-scale biophysical cardiac models was assessed against that of a simple linear model, Bnet. Three ion-channel data-sets were extracted from the literature. Each compound was assigned a cardiac risk category using two different classification schemes based on information within CredibleMeds. The predictive performance of each model within each data-set for each classification scheme was assessed via leave-one-out cross validation. Overall, the Bnet model performed as well as the leading cardiac models on two of the data-sets and outperformed both cardiac models on the remaining one. These results highlight the importance of benchmarking complex against simple models, and also encourage the development of simple models.

  12. On the Entropy Based Associative Memory Model with Higher-Order Correlations

    Directory of Open Access Journals (Sweden)

    Masahiro Nakagawa

    2010-01-01

    In this paper, an entropy-based associative memory model is proposed and applied to memory retrieval with an orthogonal learning model, for comparison with the conventional model based on the quadratic Lyapunov functional that is minimized during the retrieval process. In the present approach, the updating dynamics are constructed on the basis of an entropy minimization strategy, which reduces asymptotically to the above-mentioned conventional dynamics as a special case when higher-order correlations are ignored. Through the introduction of the entropy functional, one may incorporate higher-order correlation effects between neurons in a self-contained manner, without the heuristic coupling coefficients required in the conventional approach. In fact, we show that such higher-order coupling tensors are uniquely determined in the framework of the entropy-based approach. Numerical results show that the proposed approach realizes a much larger memory capacity than the quadratic-Lyapunov-functional approach (e.g., the Associatron).

  13. RSMASS: A simple model for estimating reactor and shield masses

    International Nuclear Information System (INIS)

    Marshall, A.C.; Aragon, J.; Gallup, D.

    1987-01-01

    A simple mathematical model (RSMASS) has been developed to provide rapid estimates of reactor and shield masses for space-based reactor power systems. Approximations are used rather than correlations or detailed calculations to estimate the reactor fuel mass and the masses of the moderator, structure, reflector, pressure vessel, miscellaneous components, and the reactor shield. The fuel mass is determined either by neutronics limits, thermal/hydraulic limits, or fuel damage limits, whichever yields the largest mass. RSMASS requires the reactor power and energy, 24 reactor parameters, and 20 shield parameters to be specified. This parametric approach should be applicable to a very broad range of reactor types. Reactor and shield masses calculated by RSMASS were found to be in good agreement with the masses obtained from detailed calculations
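The limit-driven sizing logic described above can be sketched as follows; the numbers and the flat overhead factor are placeholders, not the actual RSMASS correlations:

```python
def reactor_fuel_and_total_mass(m_neutronics, m_thermal, m_damage,
                                overhead_factor=3.0):
    """Fuel mass is set by whichever limit (neutronics, thermal-hydraulic,
    fuel damage) demands the most fuel; the total adds moderator, structure,
    reflector, etc. via a flat factor (placeholder, not an RSMASS value)."""
    m_fuel = max(m_neutronics, m_thermal, m_damage)
    return m_fuel, m_fuel * overhead_factor

# illustrative limit-derived fuel masses in kg
m_fuel, m_total = reactor_fuel_and_total_mass(120.0, 180.0, 95.0)
print(m_fuel, m_total)   # fuel mass set by the thermal-hydraulic limit here
```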

  14. Complexity-aware simple modeling.

    Science.gov (United States)

    Gómez-Schiavon, Mariana; El-Samad, Hana

    2018-02-26

    Mathematical models continue to be essential for deepening our understanding of biology. On one extreme, simple or small-scale models help delineate general biological principles. However, the parsimony of detail in these models as well as their assumption of modularity and insulation make them inaccurate for describing quantitative features. On the other extreme, large-scale and detailed models can quantitatively recapitulate a phenotype of interest, but have to rely on many unknown parameters, making them often difficult to parse mechanistically and to use for extracting general principles. We discuss some examples of a new approach, complexity-aware simple modeling, that can bridge the gap between the small-scale and large-scale approaches. Copyright © 2018 Elsevier Ltd. All rights reserved.

  15. Quantum spin correction scheme based on spin-correlation functional for Kohn-Sham spin density functional theory

    International Nuclear Information System (INIS)

    Yamanaka, Shusuke; Takeda, Ryo; Nakata, Kazuto; Takada, Toshikazu; Shoji, Mitsuo; Kitagawa, Yasutaka; Yamaguchi, Kizashi

    2007-01-01

    We present a simple quantum correction scheme for ab initio Kohn-Sham spin density functional theory (KS-SDFT). This scheme is based on a mapping from ab initio results to a Heisenberg model Hamiltonian. The effective exchange integral is estimated by using energies and spin correlation functionals calculated by ab initio KS-SDFT. The quantum-corrected spin-correlation functional can be designed to capture specific quantum spin fluctuations. In this article, we present a simple correction for dinuclear compounds having multiple bonds. The computational results are discussed in relation to multireference (MR) DFT, by which we treat the quantum many-body effects explicitly

  16. Maintenance of algal endosymbionts in Paramecium bursaria: a simple model based on population dynamics.

    Science.gov (United States)

    Iwai, Sosuke; Fujiwara, Kenji; Tamura, Takuro

    2016-09-01

    Algal endosymbiosis is widely distributed in eukaryotes including many protists and metazoans, and plays important roles in aquatic ecosystems, combining phagotrophy and phototrophy. To maintain a stable symbiotic relationship, the endosymbiont population size in the host must be properly regulated and maintained at a constant level; however, the mechanisms underlying the maintenance of algal endosymbionts are still largely unknown. Here we investigate the population dynamics of the unicellular ciliate Paramecium bursaria and its Chlorella-like algal endosymbiont under various experimental conditions in a simple culture system. Our results suggest that endosymbiont population size in P. bursaria was not regulated by active processes such as cell division coupling between the two organisms, or partitioning of the endosymbionts at host cell division. Regardless, endosymbiont population size was eventually adjusted to a nearly constant level once cells were grown with light and nutrients. To explain this apparent regulation of population size, we propose a simple mechanism based on the different growth properties (specifically the nutrient requirements) of the two organisms, and from this develop a mathematical model to describe the population dynamics of host and endosymbiont. The proposed mechanism and model may provide a basis for understanding the maintenance of algal endosymbionts. © 2015 Society for Applied Microbiology and John Wiley & Sons Ltd.
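A minimal sketch of the proposed mechanism (my construction from the abstract, not the authors' equations): host and endosymbiont each grow logistically, with the symbiont carrying capacity tied to host abundance, so the symbiont-per-host ratio settles to a constant without any active partitioning or cell-cycle coupling:

```python
def simulate(h0=10.0, s0=50.0, r_h=0.5, r_s=1.0, k_h=1e4,
             per_host=300.0, dt=0.01, steps=5000):
    """Euler-integrate logistic host growth (capacity k_h) and symbiont
    growth whose carrying capacity is per_host * (current host abundance).
    All parameter values are illustrative."""
    h, s = h0, s0
    for _ in range(steps):
        h += dt * r_h * h * (1.0 - h / k_h)
        s += dt * r_s * s * (1.0 - s / (per_host * h))
    return h, s

h, s = simulate()
print(f"symbionts per host = {s / h:.1f}")   # relaxes toward per_host
```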

  17. Simple Tidal Prism Models Revisited

    Science.gov (United States)

    Luketina, D.

    1998-01-01

    Simple tidal prism models for well-mixed estuaries have been in use for some time and are discussed in most textbooks on estuaries. The appeal of this model is its simplicity. However, there are several flaws in the logic behind the model. These flaws are pointed out and a more theoretically correct simple tidal prism model is derived. In doing so, it is made clear which effects can, in theory, be neglected and which cannot.
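For reference, the textbook estimate that such models revisit is the flushing time: estuary volume over tidal prism, in tidal cycles, under a complete-mixing-per-cycle assumption. The numbers below are illustrative:

```python
def flushing_time_hours(volume_m3, prism_m3, tidal_period_h=12.42):
    """Cycles needed to flush the estuary (V/P) times the semidiurnal tidal
    period; assumes complete mixing each cycle and no return flow."""
    return (volume_m3 / prism_m3) * tidal_period_h

# e.g. a 5e7 m^3 estuary with a 1e7 m^3 tidal prism
print(f"{flushing_time_hours(5e7, 1e7):.1f} h")
```

The complete-mixing and no-return-flow assumptions are exactly the kind of logical shortcuts the paper critiques.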

  18. Correlation of spacecraft thermal mathematical models to reference data

    Science.gov (United States)

    Torralbo, Ignacio; Perez-Grande, Isabel; Sanz-Andres, Angel; Piqueras, Javier

    2018-03-01

    Model-to-test correlation is a frequent problem in spacecraft-thermal control design. The idea is to determine the values of the parameters of the thermal mathematical model (TMM) that allow a good fit between the TMM results and test data, in order to reduce the uncertainty of the mathematical model. Quite often, this task is performed manually, mainly because good engineering knowledge and experience are needed to reach a successful compromise, but the use of a mathematical tool could facilitate this work. The correlation process can be considered as the minimization of the error of the model results with respect to the reference data. In this paper, a simple method is presented that is suitable for solving the TMM-to-test correlation problem, using a Jacobian matrix formulation and the Moore-Penrose pseudo-inverse, generalized to include several load cases. In addition, in simple cases, this method allows analytical solutions to be obtained, which helps to analyze some problems that appear when the Jacobian matrix is singular. To show the implementation of the method, two problems have been considered, one more academic, and the other the TMM of an electronic box of the PHI instrument of the ESA Solar Orbiter mission, to be flown in 2019. The use of singular value decomposition of the Jacobian matrix to analyze and reduce these models is also shown. The error in parameter space is used to assess the quality of the correlation results in both models.
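The pseudo-inverse correlation step can be sketched as a Gauss-Newton-style iteration: update the parameters so the model temperatures approach the test reference temperatures. The three-node "model" below is a toy linear stand-in, not a real TMM:

```python
import numpy as np

def model_temperatures(p):
    """Toy stand-in for a TMM: node temperatures as a function of two
    parameters (e.g. a coupling conductance and a dissipation scale)."""
    g, c = p
    return np.array([300 + 5 * g - 2 * c,
                     310 + 3 * g + 1 * c,
                     295 - 1 * g + 4 * c])

def correlate(p0, t_ref, n_iter=20, eps=1e-6):
    """Iteratively update p with the Moore-Penrose pseudo-inverse of the
    finite-difference Jacobian dT/dp, minimizing the residual to t_ref."""
    p = np.asarray(p0, dtype=float)
    for _ in range(n_iter):
        r = t_ref - model_temperatures(p)
        jac = np.column_stack([
            (model_temperatures(p + eps * e) - model_temperatures(p)) / eps
            for e in np.eye(len(p))])
        p = p + np.linalg.pinv(jac) @ r
    return p

t_ref = model_temperatures([2.0, 1.5])    # synthetic "test data"
p_fit = correlate([0.0, 0.0], t_ref)
print(np.round(p_fit, 3))
```

With more nodes than parameters, `pinv` returns the least-squares update, which is the behavior the paper exploits; singular Jacobians are where its SVD analysis comes in.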

  19. Neural correlates of the difference between working memory speed and simple sensorimotor speed: an fMRI study.

    Directory of Open Access Journals (Sweden)

    Hikaru Takeuchi

    The difference between the speed of simple cognitive processes and the speed of complex cognitive processes has various psychological correlates. However, the neural correlates of this difference have not yet been investigated. In this study, we focused on working memory (WM) for typical complex cognitive processes. Functional magnetic resonance imaging data were acquired during the performance of an N-back task, which is a measure of WM for typical complex cognitive processes. In our N-back task, task speed and memory load were varied to identify the neural correlates responsible for the difference between the speed of simple cognitive processes (estimated from the 0-back task) and the speed of WM. Our findings showed that this difference was characterized by increased activation in the right dorsolateral prefrontal cortex (DLPFC) and increased functional interaction between the right DLPFC and the right superior parietal lobe. Furthermore, the local gray matter volume of the right DLPFC was correlated with participants' accuracy during fast WM tasks, which in turn correlated with a psychometric measure of participants' intelligence. Our findings indicate that the right DLPFC and its related network are responsible for the execution of the fast cognitive processes involved in WM. The identified neural bases may underlie the psychometric differences between the speed with which subjects perform simple cognitive tasks and the speed with which they perform more complex cognitive tasks, and explain the previous traditional psychological findings.

  20. Utilising temperature differences as constraints for estimating parameters in a simple climate model

    International Nuclear Information System (INIS)

    Bodman, Roger W; Karoly, David J; Enting, Ian G

    2010-01-01

    Simple climate models can be used to estimate the global temperature response to increasing greenhouse gases. Changes in the energy balance of the global climate system are represented by equations that necessitate the use of uncertain parameters. The values of these parameters can be estimated from historical observations, model testing, and tuning to more complex models. Efforts have been made at estimating the possible ranges for these parameters. This study continues this process, but demonstrates two new constraints. Previous studies have shown that land-ocean temperature differences are only weakly correlated with global mean temperature for natural internal climate variations. Hence, these temperature differences provide additional information that can be used to help constrain model parameters. In addition, an ocean heat content ratio can also provide a further constraint. A pulse response technique was used to identify relative parameter sensitivity which confirmed the importance of climate sensitivity and ocean vertical diffusivity, but the land-ocean warming ratio and the land-ocean heat exchange coefficient were also found to be important. Experiments demonstrate the utility of the land-ocean temperature difference and ocean heat content ratio for setting parameter values. This work is based on investigations with MAGICC (Model for the Assessment of Greenhouse-gas Induced Climate Change) as the simple climate model.

  1. An accurate and simple large signal model of HEMT

    DEFF Research Database (Denmark)

    Liu, Qing

    1989-01-01

    A large-signal model of discrete HEMTs (high-electron-mobility transistors) has been developed. It is simple and suitable for SPICE simulation of hybrid digital ICs. The model parameters are extracted by using computer programs and data provided by the manufacturer. Based on this model, a hybrid...

  2. A simple geometrical model describing shapes of soap films suspended on two rings

    Science.gov (United States)

    Herrmann, Felix J.; Kilvington, Charles D.; Wildenberg, Rebekah L.; Camacho, Franco E.; Walecki, Wojciech J.; Walecki, Peter S.; Walecki, Eve S.

    2016-09-01

    We measured and analysed the stability of two types of soap films suspended on two rings using a simple conical-frusta-based model, where we use the common definition of a conical frustum as the portion of a cone that lies between two parallel planes cutting it. Using the frusta-based model we reproduced the well-known results for catenoid surfaces with and without a central disk. We present for the first time a simple conical-frusta-based spreadsheet model of the soap surface. This very simple, elementary, geometrical model produces results that match the experimental data and known exact analytical solutions surprisingly well. The experiment and the spreadsheet model can be used as a powerful teaching tool for pre-calculus and geometry students.
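A minimal version of the frusta idea: sample the known catenoid profile between two equal rings, replace it by a stack of conical frusta, and check that the summed lateral frustum area reproduces the analytic catenoid area. The ring geometry is chosen so the stable catenoid solution exists; the spreadsheet in the paper additionally minimizes the area rather than sampling a known profile:

```python
import math

def catenoid_parameter(R, h):
    """Stable root a of a*cosh(h/a) = R, by bisection on [R/2, R]
    (this bracket is valid for the ring geometry used below)."""
    lo, hi = 0.5 * R, R
    for _ in range(80):
        mid = 0.5 * (lo + hi)
        if mid * math.cosh(h / mid) > R:
            hi = mid
        else:
            lo = mid
    return 0.5 * (lo + hi)

def frusta_area(R, h, n=200):
    """Total lateral area of n conical frusta stacked along the catenoid
    profile; each frustum contributes pi*(r1 + r2)*slant."""
    a = catenoid_parameter(R, h)
    zs = [-h + 2.0 * h * i / n for i in range(n + 1)]
    rs = [a * math.cosh(z / a) for z in zs]
    return sum(math.pi * (rs[i] + rs[i + 1])
               * math.hypot(rs[i + 1] - rs[i], zs[i + 1] - zs[i])
               for i in range(n))

def exact_area(R, h):
    """Analytic catenoid area between z = -h and z = +h."""
    a = catenoid_parameter(R, h)
    return 2.0 * math.pi * a * (h + 0.5 * a * math.sinh(2.0 * h / a))

R, h = 1.0, 0.5   # ring radius and half-separation (a stable film exists)
print(frusta_area(R, h), exact_area(R, h))
```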

  3. A Simple Model of Global Aerosol Indirect Effects

    Science.gov (United States)

    Ghan, Steven J.; Smith, Steven J.; Wang, Minghuai; Zhang, Kai; Pringle, Kirsty; Carslaw, Kenneth; Pierce, Jeffrey; Bauer, Susanne; Adams, Peter

    2013-01-01

    Most estimates of the global mean indirect effect of anthropogenic aerosol on the Earth's energy balance are from simulations by global models of the aerosol lifecycle coupled with global models of clouds and the hydrologic cycle. Extremely simple models have been developed for integrated assessment models, but lack the flexibility to distinguish between primary and secondary sources of aerosol. Here a simple but more physically based model expresses the aerosol indirect effect (AIE) using analytic representations of cloud and aerosol distributions and processes. Although the simple model is able to produce estimates of AIEs that are comparable to those from some global aerosol models using the same global mean aerosol properties, the estimates by the simple model are sensitive to preindustrial cloud condensation nuclei concentration, preindustrial accumulation mode radius, width of the accumulation mode, size of primary particles, cloud thickness, primary and secondary anthropogenic emissions, the fraction of the secondary anthropogenic emissions that accumulates on the coarse mode, the fraction of the secondary mass that forms new particles, and the sensitivity of liquid water path to droplet number concentration. Estimates of present-day AIEs as strong as -5 W/sq m and as weak as -0.3 W/sq m are obtained for plausible sets of parameter values. Estimates are surprisingly linear in emissions. The estimates depend on parameter values in ways that are consistent with results from detailed global aerosol-climate simulation models, which adds to understanding of the dependence of AIE uncertainty on uncertainty in parameter values.

  4. A simple model for retrieving bare soil moisture from radar-scattering coefficients

    International Nuclear Information System (INIS)

    Chen, K.S.; Yen, S.K.; Huang, W.P.

    1995-01-01

    A simple algorithm based on a rough surface scattering model was developed to invert the bare soil moisture content from active microwave remote sensing data. In the algorithm development, a frequency mixing model was used to relate soil moisture to the dielectric constant. In particular, the Integral Equation Model (IEM) was used over a wide range of surface roughness and radar frequencies. To derive the algorithm, a sensitivity analysis was performed using a Monte Carlo simulation to study the effects of surface parameters, including height variance, correlation length, and dielectric constant. Because radar return is inherently dependent on both moisture content and surface roughness, the purpose of the sensitivity testing was to select the proper radar parameters so as to optimally decouple these two factors, in an attempt to minimize the effects of one while the other was observed. As a result, the optimal radar parameter ranges can be chosen for the purpose of soil moisture content inversion. One thousand samples were then generated with the IEM model followed by multivariate linear regression analysis to obtain an empirical soil moisture model. Numerical comparisons were made to illustrate the inversion performance using experimental measurements. Results indicate that the present algorithm is simple and accurate, and can be a useful tool for the remote sensing of bare soil surfaces. (author)

  5. Correlation-mediated superconductivity in a 'high-Tc' model

    International Nuclear Information System (INIS)

    Long, M.W.

    1987-08-01

    A simple model is presented to account for the high-Tc perovskite superconductors. The superconducting mechanism is purely electronic and comes from local Hubbard correlations. The model comprises a Hubbard model for the copper sites with a single-particle oxygen band between the two copper Hubbard bands. The electrons move only between nearest-neighbour atoms of different types. Using two very different approximation schemes, one related to 'slave-boson' mean field theory and the other based on an exact local fermion transformation, the possibility of copper-oxygen or a mixture of copper-oxygen and oxygen-oxygen pairing is shown. The author believes that the most promising situation for superconductivity is with the oxygen band over half-filled and closer in energy to the lower Hubbard band. (author)

  6. Volatility and correlation-based systemic risk measures in the US market

    Science.gov (United States)

    Civitarese, Jamil

    2016-10-01

    This paper deals with the problem of how to use simple systemic risk measures to assess portfolio risk characteristics. Using three simple examples taken from the previous literature, one based on raw and partial correlations, another based on the eigenvalue decomposition of the covariance matrix, and the last based on an eigenvalue entropy, a Granger-causation analysis revealed that some of them are not always good measures of risk in the S&P 500 and in the VIX. The measures selected do not Granger-cause the VIX index in all windows selected; therefore, in the sense of risk as volatility, the indicators are not always suitable. Nevertheless, their results with respect to returns are similar to those of previous works that accept them. A deeper analysis showed that any symmetric measure based on the eigenvalue decomposition of correlation matrices, however, is not useful as a measure of "correlation" risk. The empirical counterpart of this proposition is that negative correlations are usually small and, therefore, do not heavily distort the behavior of the indicator.

  7. Statistical analysis of the Straits Times Index and a simple model for trend and trend reversal

    Science.gov (United States)

    Chen, Kan; Jayaprakash, C.

    2003-06-01

    We analyze the daily closing prices of the Straits Times Index (STI) as well as the individual stocks traded in Singapore's stock market from 1988 to 2001. We find that the Hurst exponent is approximately 0.6 for both the STI and individual stocks, while the normal correlation functions show the random-walk exponent of 0.5. We also investigate the conditional average of the price change in an interval of length T given the price change in the previous interval. We find strong correlations for price changes larger than a threshold value proportional to T; this indicates that there is no uniform crossover to Gaussian behavior. A simple model based on short-time trend and trend reversal is constructed. We show that the model exhibits statistical properties and market swings similar to those of the real market.
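The Hurst-exponent estimate described above can be sketched from the scaling std(x_{t+T} - x_t) ~ T^H; for the synthetic random walk below the fit should return H close to 0.5, the "normal correlation" value quoted in the abstract (real STI data would be needed to see 0.6):

```python
import numpy as np

# a plain Gaussian random walk as a stand-in for a log-price series
rng = np.random.default_rng(1)
x = np.cumsum(rng.standard_normal(100_000))

# std of T-step changes for a range of lags, then a log-log slope fit
lags = np.array([1, 2, 4, 8, 16, 32, 64])
stds = np.array([np.std(x[lag:] - x[:-lag]) for lag in lags])
H = np.polyfit(np.log(lags), np.log(stds), 1)[0]
print(f"H = {H:.2f}")
```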

  8. A simple method to approximate liver size on cross-sectional images using living liver models

    International Nuclear Information System (INIS)

    Muggli, D.; Mueller, M.A.; Karlo, C.; Fornaro, J.; Marincek, B.; Frauenfelder, T.

    2009-01-01

    Aim: To assess whether a simple, diameter-based formula applicable to cross-sectional images can be used to calculate the total liver volume. Materials and methods: On 119 cross-sectional examinations (62 computed tomography and 57 magnetic resonance imaging) a simple, formula-based method to approximate the liver volume was evaluated. The total liver volume was approximated by measuring the largest craniocaudal (cc), ventrodorsal (vd), and coronal (cor) diameters by two readers and implementing the equation: Vol_estimated = cc × vd × cor × 0.31. Inter-rater reliability, agreement, and correlation between the liver volume calculation and virtual liver volumetry were analysed. Results: No significant disagreement between the two readers was found. The formula correlated significantly with the volumetric data (r > 0.85, p < 0.0001). In 81% of cases the error of the approximated volume was <10% and in 92% of cases <15% compared to the volumetric data. Conclusion: Total liver volume can be accurately estimated on cross-sectional images using a simple, diameter-based equation.
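The estimate itself is one line; the assumption below that the diameters are measured in centimetres (giving a volume in cm^3) is mine, as the abstract does not state units:

```python
def liver_volume(cc_cm, vd_cm, cor_cm):
    """Paper's diameter-based estimate: V = cc * vd * cor * 0.31, with the
    three largest orthogonal diameters (assumed here to be in cm)."""
    return cc_cm * vd_cm * cor_cm * 0.31

# e.g. 18 cm craniocaudal, 14 cm ventrodorsal, 20 cm coronal
print(f"{liver_volume(18.0, 14.0, 20.0):.0f} cm^3")
```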

  9. HD CAG-correlated gene expression changes support a simple dominant gain of function

    Science.gov (United States)

    Jacobsen, Jessie C.; Gregory, Gillian C.; Woda, Juliana M.; Thompson, Morgan N.; Coser, Kathryn R.; Murthy, Vidya; Kohane, Isaac S.; Gusella, James F.; Seong, Ihn Sik; MacDonald, Marcy E.; Shioda, Toshi; Lee, Jong-Min

    2011-01-01

    Huntington's disease is initiated by the expression of a CAG repeat-encoded polyglutamine region in full-length huntingtin, with dominant effects that vary continuously with CAG size. The mechanism could involve a simple gain of function or a more complex gain of function coupled to a loss of function (e.g. dominant negative-graded loss of function). To distinguish these alternatives, we compared genome-wide gene expression changes correlated with CAG size across an allelic series of heterozygous CAG knock-in mouse embryonic stem (ES) cell lines (HdhQ20/7, HdhQ50/7, HdhQ91/7, HdhQ111/7), to genes differentially expressed between Hdhex4/5/ex4/5 huntingtin null and wild-type (HdhQ7/7) parental ES cells. The set of 73 genes whose expression varied continuously with CAG length had minimal overlap with the 754-member huntingtin-null gene set but the two were not completely unconnected. Rather, the 172 CAG length-correlated pathways and 238 huntingtin-null significant pathways clustered into 13 shared categories at the network level. A closer examination of the energy metabolism and the lipid/sterol/lipoprotein metabolism categories revealed that CAG length-correlated genes and huntingtin-null-altered genes either were different members of the same pathways or were in unique, but interconnected pathways. Thus, varying the polyglutamine size in full-length huntingtin produced gene expression changes that were distinct from, but related to, the effects of lack of huntingtin. These findings support a simple gain-of-function mechanism acting through a property of the full-length huntingtin protein and point to CAG-correlative approaches to discover its effects. Moreover, for therapeutic strategies based on huntingtin suppression, our data highlight processes that may be more sensitive to the disease trigger than to decreased huntingtin levels. PMID:21536587

  10. Elucidation of spin echo small angle neutron scattering correlation functions through model studies.

    Science.gov (United States)

    Shew, Chwen-Yang; Chen, Wei-Ren

    2012-02-14

    Several single-modal Debye correlation functions to approximate part of the overall Debye correlation function of liquids are closely examined to elucidate their behavior in the corresponding spin echo small angle neutron scattering (SESANS) correlation functions. We find that the maximum length scale of a Debye correlation function is identical to that of its SESANS correlation function. For discrete Debye correlation functions, the peak of the SESANS correlation function emerges at their first discrete point, whereas for continuous Debye correlation functions with greater width, the peak position shifts to a greater value. In both cases, the intensity and shape of the peak of the SESANS correlation function are determined by the width of the Debye correlation functions. Furthermore, we mimic the intramolecular and intermolecular Debye correlation functions of liquids composed of interacting particles based on a simple model to elucidate their competition in the SESANS correlation function. Our calculations show that the first local minimum of a SESANS correlation function can be negative or positive. By adjusting the spatial distribution of the intermolecular Debye function in the model, the calculated SESANS spectra exhibit profiles consistent with those of hard-sphere and sticky-hard-sphere liquids predicted by more sophisticated liquid state theory and computer simulation. © 2012 American Institute of Physics

  11. What Is a Simple Liquid?

    Directory of Open Access Journals (Sweden)

    Trond S. Ingebrigtsen

    2012-03-01

    This paper is an attempt to identify the real essence of simplicity of liquids in John Locke's understanding of the term. Simple liquids are traditionally defined as many-body systems of classical particles interacting via radially symmetric pair potentials. We suggest that a simple liquid should be defined instead by the property of having strong correlations between virial and potential-energy equilibrium fluctuations in the NVT ensemble. There is considerable overlap between the two definitions, but also some notable differences. For instance, in the new definition simplicity is not a direct property of the intermolecular potential because a liquid is usually only strongly correlating in part of its phase diagram. Moreover, not all simple liquids are atomic (i.e., with radially symmetric pair potentials) and not all atomic liquids are simple. The main part of the paper motivates the new definition of liquid simplicity by presenting evidence that a liquid is strongly correlating if and only if its intermolecular interactions may be ignored beyond the first coordination shell (FCS). This is demonstrated by NVT simulations of the structure and dynamics of several atomic and three molecular model liquids with a shifted-forces cutoff placed at the first minimum of the radial distribution function. The liquids studied are inverse power-law systems (r^{-n} pair potentials with n=18, 6, 4), Lennard-Jones (LJ) models (the standard LJ model, two generalized Kob-Andersen binary LJ mixtures, and the Wahnstrom binary LJ mixture), the Buckingham model, the Dzugutov model, the LJ Gaussian model, the Gaussian core model, the Hansen-McDonald molten salt model, the Lewis-Wahnstrom ortho-terphenyl model, the asymmetric dumbbell model, and the single-point charge water model. The final part of the paper summarizes properties of strongly correlating liquids, emphasizing that these are simpler than liquids in general. Simple liquids, as defined here, may be
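The shifted-forces cutoff used in the paper's first-coordination-shell test can be sketched for the standard LJ pair force; placing the cutoff at r_c = 1.5 sigma as an approximation to the first minimum of g(r) is an assumption here:

```python
def lj_force(r, eps=1.0, sigma=1.0):
    """Magnitude of the LJ pair force, F = -dU/dr for
    U = 4*eps*((sigma/r)**12 - (sigma/r)**6)."""
    return 24.0 * eps * (2.0 * (sigma / r) ** 12 - (sigma / r) ** 6) / r

def shifted_force(r, rc=1.5, eps=1.0, sigma=1.0):
    """Shifted-forces scheme: subtract the force at the cutoff so the pair
    force goes continuously to zero at r = rc, and is zero beyond it."""
    if r >= rc:
        return 0.0
    return lj_force(r, eps, sigma) - lj_force(rc, eps, sigma)

print(shifted_force(1.5), shifted_force(2.0))  # zero at and beyond the cutoff
```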

  12. A simple rule based model for scheduling farm management operations in SWAT

    Science.gov (United States)

    Schürz, Christoph; Mehdi, Bano; Schulz, Karsten

    2016-04-01

    For many interdisciplinary questions at the watershed scale, the Soil and Water Assessment Tool (SWAT; Arnold et al., 1998) has become an accepted and widely used tool. Despite its flexibility, the model is highly demanding when it comes to input data. At SWAT's core, the water balance and the modeled nutrient cycles are plant-growth driven (implemented with the EPIC crop growth model). Therefore, land use and crop data with high spatial and thematic resolution, as well as detailed information on cultivation and farm management practices, are required. For many applications of the model, however, these data are unavailable. In order to meet these requirements, SWAT offers the option to trigger scheduled farm management operations by applying the Potential Heat Unit (PHU) concept. The PHU concept takes into account only the accumulation of daily mean temperature for management scheduling. Hence, it contradicts several farming strategies that take place in reality, such as: i) planting and harvesting dates are set much too early or too late, as the PHU concept is strongly sensitive to inter-annual temperature fluctuations; ii) fertilizer application in SWAT often occurs simultaneously on the same date in each field; iii) and it can also coincide with precipitation events. Particularly the latter two can lead to strong peaks in modeled nutrient loads. To cope with these shortcomings we propose a simple rule based model (RBM) to schedule management operations according to realistic farmer management practices in SWAT. The RBM involves simple strategies requiring only data that are input into the SWAT model initially, such as temperature and precipitation data. The user provides boundaries of time periods for operation schedules to take place for all crops in the model. These data are readily available from the literature or from crop variety trials. The RBM applies the dates by complying with the following rules: i) Operations scheduled in the
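The rule idea can be sketched as a first-feasible-day search inside a user-supplied window, using only temperature and precipitation; this is my paraphrase of the abstract, not the authors' exact rule set:

```python
def schedule_operation(days, window, t_min, max_precip=0.0):
    """Return the first day-of-year within `window` (start_doy, end_doy)
    that is warm enough (temp >= t_min, deg C) and dry enough
    (precip <= max_precip, mm), or None if no day qualifies.
    `days` is a list of (doy, temp_C, precip_mm) records."""
    start, end = window
    for doy, temp, precip in days:
        if start <= doy <= end and temp >= t_min and precip <= max_precip:
            return doy
    return None

# illustrative weather series: day 100 is too cold, day 101 is wet
weather = [(100, 8.0, 0.0), (101, 12.0, 4.0), (102, 13.0, 0.0)]
print(schedule_operation(weather, (100, 120), t_min=10.0))
```

Unlike a pure PHU trigger, this avoids scheduling fertilizer application on a rainy day and keeps dates inside an agronomically plausible window.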

  13. Atmospheric greenhouse effect - simple model; Atmosfaerens drivhuseffekt - enkel modell

    Energy Technology Data Exchange (ETDEWEB)

    Kanestroem, Ingolf; Henriksen, Thormod

    2011-07-01

    The article presents a simple model of the atmospheric greenhouse effect based on treating both the sun and the earth as 'black bodies', so that the physical laws that apply to them may be used. Furthermore, it explains why some gases are greenhouse gases while other gases in the atmosphere have no greenhouse effect. But first, some important concepts and physical laws encountered in the article are reviewed. (AG)
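The black-body balance behind such a model: S(1-a)/4 = sigma*T^4 gives the effective (no-atmosphere) temperature, and a single perfectly absorbing atmospheric layer raises the surface temperature by a factor 2^(1/4). Whether the article uses exactly the one-layer variant is an assumption here:

```python
SIGMA = 5.670e-8        # Stefan-Boltzmann constant, W m^-2 K^-4
S, albedo = 1361.0, 0.30  # solar constant and planetary albedo

# effective temperature from S*(1 - a)/4 = SIGMA * T^4
t_eff = ((S * (1.0 - albedo)) / (4.0 * SIGMA)) ** 0.25

# one fully absorbing atmospheric layer: T_surface = 2^(1/4) * T_effective
t_surf = 2.0 ** 0.25 * t_eff
print(f"T_eff = {t_eff:.0f} K, one-layer T_surf = {t_surf:.0f} K")
```

The roughly 255 K effective temperature versus a ~303 K one-layer surface temperature brackets the observed ~288 K mean, which is the pedagogical point of such models.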

  14. A Simple theoretical model for 63Ni betavoltaic battery

    International Nuclear Information System (INIS)

    ZUO, Guoping; ZHOU, Jianliang; KE, Guotu

    2013-01-01

    A numerical simulation of the energy deposition distribution in semiconductors is performed for 63Ni beta particles. Results show that the energy deposition distribution follows an approximately exponential decay law. A simple theoretical model of a 63Ni betavoltaic battery is developed based on this distribution characteristic. The correctness of the model is validated against two experiments from the literature. Results show that the theoretical short-circuit current agrees well with the experimental results, while the open-circuit voltage deviates from the experimental results owing to the influence of PN-junction defects and the simplification of the source. The theoretical model can be applied to 63Ni and 147Pm betavoltaic batteries. - Highlights: • The energy deposition distribution follows an approximately exponential decay law when beta particles emitted from 63Ni pass through a semiconductor. • A simple theoretical model for a 63Ni betavoltaic battery is constructed based on this exponential decay law. • The theoretical model can be applied to betavoltaic batteries whose radioactive source has an energy spectrum similar to that of 63Ni, such as 147Pm
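    The reported exponential decay law lends itself to a one-line cumulative form; the attenuation length used here is a hypothetical illustrative value, not a parameter fitted in the study.

```python
import math

def deposited_fraction(depth_um, attenuation_um):
    """Cumulative fraction of beta energy deposited within depth_um of
    the surface, under the approximately exponential deposition law the
    abstract reports."""
    return 1.0 - math.exp(-depth_um / attenuation_um)

# With a hypothetical 3 um attenuation length, ~63% of the energy is
# deposited in the first 3 um and ~95% within 9 um, which is why a
# shallow junction can collect most of the generated carriers.
frac_3um = deposited_fraction(3.0, 3.0)
frac_9um = deposited_fraction(9.0, 3.0)
print(round(frac_3um, 3), round(frac_9um, 3))  # -> 0.632 0.95
```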

  15. Simple spherical ablative-implosion model

    International Nuclear Information System (INIS)

    Mayer, F.J.; Steele, J.T.; Larsen, J.T.

    1980-01-01

    A simple model of the ablative implosion of a high-aspect-ratio (shell radius to shell thickness ratio) spherical shell is described. The model is similar in spirit to Rosenbluth's snowplow model. The scaling of the implosion time was determined in terms of the ablation pressure and the shell parameters such as diameter, wall thickness, and shell density, and compared to complete hydrodynamic code calculations. The energy transfer from ablation pressure to shell implosion kinetic energy was examined and found to be very efficient. It may be possible to attach a simple heat-transport calculation to our implosion model to describe the laser-driven ablation-implosion process. The model may be useful for determining other energy-driven (e.g., ion-beam) implosion scaling

  16. Specific heat of the simple-cubic Ising model

    NARCIS (Netherlands)

    Feng, X.; Blöte, H.W.J.

    2010-01-01

    We provide an expression quantitatively describing the specific heat of the Ising model on the simple-cubic lattice in the critical region. This expression is based on finite-size scaling of numerical results obtained by means of a Monte Carlo method. It agrees satisfactorily with series expansions

  17. A Simple Model of Wings in Heavy-Ion Collisions

    CERN Document Server

    Parikh, Aditya

    2015-01-01

    We create a simple model of heavy-ion collisions, independent of any event generators, as a way of investigating a possible source of the wings seen in data. As a first test, we reproduce a standard correlations plot to verify the integrity of the model. We then test whether an η-dependent v2 could be a source of the wings, taking projections along multiple Δφ intervals and comparing with data. Other variations of the model are tested by having dN/dφ and v2 depend on η, as well as by including pions and protons in the model to make it more realistic. Comparisons with data seem to indicate that an η-dependent v2 is not the main source of the wings.

  18. Simple membrane-based model of the Min oscillator

    International Nuclear Information System (INIS)

    Petrášek, Zdeněk; Schwille, Petra

    2015-01-01

    Min proteins in E. coli bacteria organize into a dynamic pattern oscillating between the two cell poles. This process identifies the middle of the cell and enables symmetric cell division. In an experimental model system consisting of a flat membrane with effectively infinite supply of proteins and energy source, the Min proteins assemble into travelling waves. Here we propose a simple one-dimensional model of the Min dynamics that, unlike the existing models, reproduces the sharp decrease of Min concentration when the majority of protein detaches from the membrane, and even the narrow MinE maximum immediately preceding the detachment. The proposed model thus provides a possible mechanism for the formation of the MinE ring known from cells. The model is restricted to one dimension, with protein interactions described by chemical kinetics allowing at most bimolecular reactions, and explicitly considering only three, membrane-bound, species. The bulk solution above the membrane is approximated as being well-mixed, with constant concentrations of all species. Unlike other models, our proposal does not require autocatalytic binding of MinD to the membrane. Instead, it is assumed that two MinE molecules are necessary to induce the dissociation of the MinD dimer and its subsequent detachment from the membrane. We investigate which reaction schemes lead to unstable homogeneous steady states and limit cycle oscillations, and how diffusion affects their stability. The suggested model qualitatively describes the shape of the Min waves observed on flat membranes, and agrees with the experimental dependence of the wave period on the MinE concentration. These results highlight the importance of MinE presence on the membrane without being bound to MinD, and of the reactions of Min proteins on the membrane. (paper)

  19. A simple model based magnet sorting algorithm for planar hybrid undulators

    International Nuclear Information System (INIS)

    Rakowsky, G.

    2010-01-01

    Various magnet sorting strategies have been used to optimize undulator performance, ranging from intuitive pairing of high- and low-strength magnets, to full 3D FEM simulation with 3-axis Helmholtz coil magnet data. In the extreme, swapping magnets in a full field model to minimize trajectory wander and rms phase error can be time consuming. This paper presents a simpler approach, extending the field error signature concept to obtain trajectory displacement, kick angle and phase error signatures for each component of magnetization error from a Radia model of a short hybrid-PM undulator. We demonstrate that steering errors and phase errors are essentially decoupled and scalable from measured X, Y and Z components of magnetization. Then, for any given sequence of magnets, rms trajectory and phase errors are obtained from simple cumulative sums of the scaled displacements and phase errors. The cost function (a weighted sum of these errors) is then minimized by swapping magnets, using one's favorite optimization algorithm. This approach was applied recently at NSLS to a short in-vacuum undulator, which required no subsequent trajectory or phase shimming. Trajectory and phase signatures are also obtained for some mechanical errors, to guide 'virtual shimming' and specifying mechanical tolerances. Some simple inhomogeneities are modeled to assess their error contributions.

  20. Applying 3-PG, a simple process-based model designed to produce practical results, to data from loblolly pine experiments

    Science.gov (United States)

    Joe J. Landsberg; Kurt H. Johnsen; Timothy J. Albaugh; H. Lee Allen; Steven E. McKeand

    2001-01-01

    3-PG is a simple process-based model that requires few parameter values and only readily available input data. We tested the structure of the model by calibrating it against loblolly pine data from the control treatment of the SETRES experiment in Scotland County, NC, then altered the fertility rating to simulate the effects of fertilization. There was excellent...

  1. A 'simple' hybrid model for power derivatives

    International Nuclear Information System (INIS)

    Lyle, Matthew R.; Elliott, Robert J.

    2009-01-01

    This paper presents a method for valuing power derivatives using a supply-demand approach. Our method extends work in the field by incorporating randomness into the base load portion of the supply stack function and equating it with a noisy demand process. We obtain closed form solutions for European option prices written on average spot prices considering two different supply models: a mean-reverting model and a Markov chain model. The results are extensions of the classic Black-Scholes equation. The model provides a relatively simple approach to describe the complicated price behaviour observed in electricity spot markets and also allows for computationally efficient derivatives pricing. (author)

  2. Transverse momentum correlations of quarks in recursive jet models

    Science.gov (United States)

    Artru, X.; Belghobsi, Z.; Redouane-Salah, E.

    2016-08-01

    In the symmetric string fragmentation recipe adopted by PYTHIA for jet simulations, the transverse momenta of successive quarks are uncorrelated. This simplification has no theoretical basis. Transverse momentum correlations are naturally expected, for instance, in a covariant multiperipheral model of quark hadronization. We propose a simple recipe of string fragmentation which leads to such correlations. The definition of the jet axis and its relation to the primordial transverse momentum of the quark are also discussed.

  3. Simulation of green roof runoff under different substrate depths and vegetation covers by coupling a simple conceptual and a physically based hydrological model.

    Science.gov (United States)

    Soulis, Konstantinos X; Valiantzas, John D; Ntoulas, Nikolaos; Kargas, George; Nektarios, Panayiotis A

    2017-09-15

    In spite of the well-known green roof benefits, their widespread adoption in the management practices of urban drainage systems requires the use of adequate analytical and modelling tools. In the current study, green roof runoff modeling was accomplished by developing, testing, and jointly using a simple conceptual model and a physically based numerical simulation model utilizing HYDRUS-1D software. The use of such an approach combines the advantages of the conceptual model, namely simplicity, low computational requirements, and ability to be easily integrated in decision support tools with the capacity of the physically based simulation model to be easily transferred in conditions and locations other than those used for calibrating and validating it. The proposed approach was evaluated with an experimental dataset that included various green roof covers (either succulent plants - Sedum sediforme, or xerophytic plants - Origanum onites, or bare substrate without any vegetation) and two substrate depths (either 8 cm or 16 cm). Both the physically based and the conceptual models matched very closely the observed hydrographs. In general, the conceptual model performed better than the physically based simulation model but the overall performance of both models was sufficient in most cases as it is revealed by the Nash-Sutcliffe Efficiency index which was generally greater than 0.70. Finally, it was showcased how a physically based and a simple conceptual model can be jointly used to allow the use of the simple conceptual model for a wider set of conditions than the available experimental data and in order to support green roof design. Copyright © 2017 Elsevier Ltd. All rights reserved.
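    The Nash-Sutcliffe Efficiency used above to judge both models has a simple closed form; a minimal sketch with invented runoff values for illustration:

```python
def nash_sutcliffe(observed, simulated):
    """Nash-Sutcliffe Efficiency: 1 - (sum of squared errors) /
    (variance sum of the observations).  1.0 is a perfect fit; values
    <= 0 mean the simulation is no better than predicting the observed
    mean."""
    mean_obs = sum(observed) / len(observed)
    sse = sum((o - s) ** 2 for o, s in zip(observed, simulated))
    sst = sum((o - mean_obs) ** 2 for o in observed)
    return 1.0 - sse / sst

# Invented hydrograph values (mm/h), for illustration only:
obs = [0.0, 1.0, 4.0, 2.0, 1.0]
sim = [0.1, 0.9, 3.8, 2.2, 1.1]
print(round(nash_sutcliffe(obs, sim), 3))  # -> 0.988
```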

  4. The Effect of Error Correlation on Interfactor Correlation in Psychometric Measurement

    Science.gov (United States)

    Westfall, Peter H.; Henning, Kevin S. S.; Howell, Roy D.

    2012-01-01

    This article shows how interfactor correlation is affected by error correlations. Theoretical and practical justifications for error correlations are given, and a new equivalence class of models is presented to explain the relationship between interfactor correlation and error correlations. The class allows simple, parsimonious modeling of error…

  5. Comparison between the SIMPLE and ENERGY mixing models

    International Nuclear Information System (INIS)

    Burns, K.J.; Todreas, N.E.

    1980-07-01

    The SIMPLE and ENERGY mixing models were compared in order to investigate the limitations of SIMPLE's analytically formulated mixing parameter, relative to the experimentally calibrated ENERGY mixing parameters. For interior subchannels, it was shown that when the SIMPLE and ENERGY parameters are reduced to a common form, there is good agreement between the two models for a typical fuel geometry. However, large discrepancies exist for typical blanket (lower P/D) geometries. Furthermore, the discrepancies between the mixing parameters result in significant differences in terms of the temperature profiles generated by the ENERGY code utilizing these mixing parameters as input. For edge subchannels, the assumptions made in the development of the SIMPLE model were extended to the rectangular edge subchannel geometry used in ENERGY. The resulting effective eddy diffusivities (used by the ENERGY code) associated with the SIMPLE model are again closest to those of the ENERGY model for the fuel assembly geometry. Finally, the SIMPLE model's neglect of a net swirl effect in the edge region is most limiting for assemblies exhibiting relatively large radial power skews

  6. A Subpath-based Logit Model to Capture the Correlation of Routes

    Directory of Open Access Journals (Sweden)

    Xinjun Lai

    2016-06-01

    Full Text Available A subpath-based methodology is proposed to capture travellers’ route choice behaviours and their perceptual correlation of routes, because the original link-based style may not be suitable in application: (1) travellers do not process road network information and construct the chosen route link by link; (2) observations from questionnaires and GPS data, however, are not always link-specific. Subpaths are defined as important portions of the route, such as major roads and landmarks. The cross-nested logit (CNL) structure is used for its tractable closed form and its capability to explicitly capture the correlation of routes. Nests represent subpaths rather than links, so the number of nests is significantly reduced. Moreover, the proposed method simplifies the original link-based CNL model and therefore alleviates the estimation and computation difficulties. The estimation and forecast validation with real data are presented, and the results suggest that the new method is practical.
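    A minimal sketch of cross-nested logit choice probabilities with nests playing the role of shared subpaths. Normalization conventions for the inclusion coefficients vary across the CNL literature; this follows one common closed form, and all numbers in the example are invented.

```python
import math

def cnl_probabilities(V, alpha, mu):
    """Cross-nested logit route probabilities.
    V[i]: utility of route i; alpha[m][i]: degree to which route i
    belongs to nest m; mu[m]: scale of nest m (>= 1)."""
    n_routes, n_nests = len(V), len(mu)
    nest_sums = [sum((alpha[m][j] * math.exp(V[j])) ** mu[m]
                     for j in range(n_routes)) for m in range(n_nests)]
    denom = sum(s ** (1.0 / mu[m]) for m, s in enumerate(nest_sums))
    probs = []
    for i in range(n_routes):
        p = 0.0
        for m in range(n_nests):
            if nest_sums[m] > 0.0:
                y = (alpha[m][i] * math.exp(V[i])) ** mu[m]
                p += (y / nest_sums[m]) * nest_sums[m] ** (1.0 / mu[m]) / denom
        probs.append(p)
    return probs

# Three routes; routes 1-2 share one subpath (nest 1), routes 2-3 another:
p = cnl_probabilities([1.0, 1.2, 0.8],
                      [[1.0, 0.5, 0.0], [0.0, 0.5, 1.0]],
                      [2.0, 1.5])
print(sum(p))  # probabilities sum to 1 by construction
```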

  7. Surface Transient Binding-Based Fluorescence Correlation Spectroscopy (STB-FCS), a Simple and Easy-to-Implement Method to Extend the Upper Limit of the Time Window to Seconds.

    Science.gov (United States)

    Peng, Sijia; Wang, Wenjuan; Chen, Chunlai

    2018-05-10

    Fluorescence correlation spectroscopy is a powerful single-molecule tool that is able to capture kinetic processes occurring at the nanosecond time scale. However, the upper limit of its time window is restricted by the dwell time of the molecule of interest in the confocal detection volume, which is usually around submilliseconds for a freely diffusing biomolecule. Here, we present a simple and easy-to-implement method, named surface transient binding-based fluorescence correlation spectroscopy (STB-FCS), which extends the upper limit of the time window to seconds. We further demonstrated that STB-FCS enables capture of both intramolecular and intermolecular kinetic processes whose time scales cross several orders of magnitude.
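    The quantity any FCS variant estimates is the normalized intensity autocorrelation G(tau); a plain estimator sketch (of the generic estimator, not of the STB-FCS protocol itself):

```python
def autocorrelation(intensity, max_lag):
    """Estimate the normalized fluorescence correlation curve
    G(tau) = <dF(t) dF(t+tau)> / <F>^2 from an intensity trace."""
    n = len(intensity)
    mean = sum(intensity) / n
    g = []
    for lag in range(1, max_lag + 1):
        pairs = n - lag
        cov = sum((intensity[t] - mean) * (intensity[t + lag] - mean)
                  for t in range(pairs)) / pairs
        g.append(cov / mean ** 2)
    return g

# Slow dynamics keep G positive at short lags; fast alternation makes
# it negative:
print(autocorrelation([10.0] * 20 + [14.0] * 20, 1)[0] > 0)  # -> True
print(autocorrelation([10.0, 14.0] * 20, 1)[0] < 0)          # -> True
```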

  8. A simple model for binary star evolution

    International Nuclear Information System (INIS)

    Whyte, C.A.; Eggleton, P.P.

    1985-01-01

    A simple model for calculating the evolution of binary stars is presented. Detailed stellar evolution calculations of stars undergoing mass and energy transfer at various rates are reported and used to identify the dominant physical processes which determine the type of evolution. These detailed calculations are used to calibrate the simple model and a comparison of calculations using the detailed stellar evolution equations and the simple model is made. Results of the evolution of a few binary systems are reported and compared with previously published calculations using normal stellar evolution programs. (author)

  9. Simple models of equilibrium and nonequilibrium phenomena

    International Nuclear Information System (INIS)

    Lebowitz, J.L.

    1987-01-01

    This volume consists of two chapters of particular interest to researchers in the field of statistical mechanics. The first chapter is based on the premise that the best way to understand the qualitative properties that characterize many-body (i.e. macroscopic) systems is to study 'a number of the more significant model systems which, at least in principle, are susceptible of complete analysis'. The second chapter deals exclusively with nonequilibrium phenomena. It reviews the theory of fluctuations in open systems, to which the authors have made important contributions. Simple but interesting model examples are emphasised

  10. Compressed Sensing with Linear Correlation Between Signal and Measurement Noise

    DEFF Research Database (Denmark)

    Arildsen, Thomas; Larsen, Torben

    2014-01-01

Existing convex relaxation-based approaches to reconstruction in compressed sensing assume that noise in the measurements is independent of the signal of interest. We consider the case of noise being linearly correlated with the signal and introduce a simple technique for improving compressed sensing reconstruction from such measurements. The technique is based on a linear model of the correlation of additive noise with the signal. The modification of the reconstruction algorithm based on this model is very simple and has negligible additional computational cost compared to standard reconstruction algorithms, but is not known in existing literature. The proposed technique reduces reconstruction error considerably in the case of linearly correlated measurements and noise. Numerical experiments confirm the efficacy of the technique. The technique is demonstrated with application to low...
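    As a toy illustration (not the paper's reconstruction algorithm): a noise component linearly correlated with the signal, n = alpha*x + w, acts as a systematic gain error, which a reconstruction that models alpha removes. All values below are invented.

```python
# Noise model: n = alpha * x + w, so the measurement behaves as
# y = (1 + alpha) * x + w.  Modeling alpha removes a signal-dependent bias.
alpha = 0.2                                # assumed known correlation slope
x = [1.0, -2.0, 0.5, 3.0]                  # true signal (invented)
w = [0.01, -0.02, 0.015, -0.01]            # small signal-independent noise
y = [(1 + alpha) * xi + wi for xi, wi in zip(x, w)]

x_naive = y                                # model ignoring the correlation
x_corrected = [yi / (1 + alpha) for yi in y]

def err(est):
    return sum((e - t) ** 2 for e, t in zip(est, x)) ** 0.5

print(err(x_naive) > err(x_corrected))  # -> True: modeling alpha helps
```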

  11. Linking Simple Economic Theory Models and the Cointegrated Vector AutoRegressive Model

    DEFF Research Database (Denmark)

    Møller, Niels Framroze

    This paper attempts to clarify the connection between simple economic theory models and the approach of the Cointegrated Vector-Auto-Regressive model (CVAR). By considering (stylized) examples of simple static equilibrium models, it is illustrated in detail how the theoretical model and its structure... Moreover, it is demonstrated how other controversial hypotheses such as Rational Expectations can be formulated directly as restrictions on the CVAR parameters. A simple example of a "Neoclassical synthetic" AS-AD model is also formulated. Finally, the partial-general equilibrium distinction is related to the CVAR as well. Further fundamental extensions and advances to more sophisticated theory models, such as those related to dynamics and expectations (in the structural relations), are left for future papers

  12. Modeling reproductive decisions with simple heuristics

    Directory of Open Access Journals (Sweden)

    Peter Todd

    2013-10-01

    Full Text Available BACKGROUND Many of the reproductive decisions that humans make happen without much planning or forethought, arising instead through the use of simple choice rules or heuristics that involve relatively little information and processing. Nonetheless, these heuristic-guided decisions are typically beneficial, owing to humans' ecological rationality - the evolved fit between our constrained decision mechanisms and the adaptive problems we face. OBJECTIVE This paper reviews research on the ecological rationality of human decision making in the domain of reproduction, showing how fertility-related decisions are commonly made using various simple heuristics matched to the structure of the environment in which they are applied, rather than being made with information-hungry mechanisms based on optimization or rational economic choice. METHODS First, heuristics for sequential mate search are covered; these heuristics determine when to stop the process of mate search by deciding that a good-enough mate who is also mutually interested has been found, using a process of aspiration-level setting and assessing. These models are tested via computer simulation and comparison to demographic age-at-first-marriage data. Next, a heuristic process of feature-based mate comparison and choice is discussed, in which mate choices are determined by a simple process of feature-matching with relaxing standards over time. Parental investment heuristics used to divide resources among offspring are summarized. Finally, methods for testing the use of such mate choice heuristics in a specific population over time are then described.
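    The aspiration-level search heuristic reviewed above can be sketched as a satisficing rule. The candidate-pool size, assessment length, and uniform quality distribution below are illustrative assumptions, not parameters from the reviewed models.

```python
import random

def satisficing_search(qualities, n_assess):
    """Aspiration-level mate-search heuristic (a sketch, not the exact
    model from the paper): assess the first n_assess candidates without
    choosing, set the aspiration level to the best quality seen, then
    accept the first later candidate exceeding it (falling back to the
    last candidate if none does)."""
    aspiration = max(qualities[:n_assess])
    for q in qualities[n_assess:]:
        if q > aspiration:
            return q
    return qualities[-1]

# Even a short assessment phase yields good choices on average -- the
# 'ecological rationality' point: little information, good outcomes.
random.seed(1)
trials = [[random.random() for _ in range(100)] for _ in range(2000)]
mean_chosen = sum(satisficing_search(t, 12) for t in trials) / len(trials)
print(mean_chosen)
```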

  13. [Correlation coefficient-based classification method of hydrological dependence variability: With auto-regression model as example].

    Science.gov (United States)

    Zhao, Yu Xi; Xie, Ping; Sang, Yan Fang; Wu, Zi Yi

    2018-04-01

    Hydrological process evaluation is temporally dependent. Hydrological time series that include dependence components do not meet the data-consistency assumption of hydrological computation. Both factors cause great difficulty for water research. Given the existence of hydrological dependence variability, we proposed a correlation-coefficient-based method for significance evaluation of hydrological dependence based on an auto-regression model. By calculating the correlation coefficient between the original series and its dependence component, and selecting reasonable thresholds of the correlation coefficient, this method divides the significance degree of dependence into no variability, weak variability, mid variability, strong variability, and drastic variability. By deducing the relationship between the correlation coefficient and the auto-correlation coefficients of each order of the series, we found that the correlation coefficient is mainly determined by the magnitudes of the auto-correlation coefficients from the first order to the p-th order, which clarifies the theoretical basis of this method. With the first-order and second-order auto-regression models as examples, the reasonability of the deduced formula was verified through Monte-Carlo experiments classifying the relationship between the correlation coefficient and the auto-correlation coefficients. The method was used to analyze three observed hydrological time series. The results indicated the coexistence of stochastic and dependence characteristics in hydrological processes.
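    The deduced link between the correlation coefficient and the autocorrelation coefficients can be checked numerically for an AR(1) model: the correlation between the series and its dependence component phi*x_{t-1} estimates phi itself. This is a sketch of the idea only, not of the paper's classification thresholds.

```python
import math
import random

def ar1_series(phi, n, seed=0):
    """Stationary AR(1) model: x_t = phi * x_{t-1} + eps_t."""
    rng = random.Random(seed)
    x = [rng.gauss(0.0, 1.0)]
    for _ in range(n - 1):
        x.append(phi * x[-1] + rng.gauss(0.0, 1.0))
    return x

def corrcoef(a, b):
    """Pearson correlation coefficient of two equal-length sequences."""
    ma, mb = sum(a) / len(a), sum(b) / len(b)
    cov = sum((u - ma) * (v - mb) for u, v in zip(a, b))
    va = sum((u - ma) ** 2 for u in a)
    vb = sum((v - mb) ** 2 for v in b)
    return cov / math.sqrt(va * vb)

# For AR(1), the dependence component of x_t is phi * x_{t-1}, so the
# correlation between the series and that component is the lag-1
# autocorrelation, i.e. approximately phi:
phi = 0.7
x = ar1_series(phi, 20000)
r = corrcoef(x[1:], [phi * v for v in x[:-1]])
print(abs(r - phi) < 0.05)  # -> True
```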

  14. The curvature calculation mechanism based on simple cell model.

    Science.gov (United States)

    Yu, Haiyang; Fan, Xingyu; Song, Aiqi

    2017-07-20

    A conclusion has not yet been reached on how exactly the human visual system detects curvature. This paper demonstrates how orientation-selective simple cells can be used to construct curvature-detecting neural units. Through fixed arrangements, multiple plurality cells were constructed to simulate curvature cells with an output proportional to their curvature. In addition, this paper offers a solution to the problem of narrow detection range under fixed resolution by selecting an output value across multiple resolutions. Curvature cells can be treated as concrete models of an end-stopped mechanism, and they can be used to further understand "curvature-selective" characteristics and to explain basic psychophysical findings and perceptual phenomena in current studies.

  15. Chiral and color-superconducting phase transitions with vector interaction in a simple model

    International Nuclear Information System (INIS)

    Kitazawa, Masakiyo; Koide, Tomoi; Kunihiro, Teiji; Nemoto, Yukio

    2002-01-01

    We investigate effects of the vector interaction on chiral and color superconducting (CSC) phase transitions at finite density and temperature in a simple Nambu-Jona-Lasinio model. It is shown that the repulsive density-density interaction coming from the vector term, which is present in the effective chiral models but has been omitted, enhances the competition between the chiral symmetry breaking (χSB) and CSC phase transition, and thereby makes the thermodynamic potential have a shallow minimum over a wide range of values of the correlated chiral and CSC order parameters. We find that when the vector coupling is increased, the first order transition between the χSB and CSC phases becomes weaker, and the coexisting phase in which both the chiral and color-gauge symmetry are dynamically broken comes to exist over a wider range of the density and temperature. We also show that there can exist two endpoints, which are tricritical points in the chiral limit, along the critical line of the first order transition in some range of values of the vector coupling. Although our analysis is based on a simple model, the nontrivial interplay between the χSB and CSC phases induced by the vector interaction is expected to be a universal phenomenon and might give a clue to understanding results obtained with two-color QCD on the lattice. (author)

  16. Combining Correlation-Based and Reward-Based Learning in Neural Control for Policy Improvement

    DEFF Research Database (Denmark)

    Manoonpong, Poramate; Kolodziejski, Christoph; Wörgötter, Florentin

    2013-01-01

    Classical conditioning (conventionally modeled as correlation-based learning) and operant conditioning (conventionally modeled as reinforcement learning or reward-based learning) have been found in biological systems. Evidence shows that these two mechanisms strongly involve learning about associations. Based on these biological findings, we propose a new learning model to achieve successful control policies for artificial systems. This model combines correlation-based learning using input correlation learning (ICO learning) and reward-based learning using continuous actor–critic reinforcement learning (RL), thereby working as a dual learner system. The model performance is evaluated by simulations of a cart-pole system as a dynamic motion control problem and a mobile robot system as a goal-directed behavior control problem. Results show that the model can strongly improve pole balancing control
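    The ICO learning rule named in the abstract correlates a predictive input with the derivative of the reflex input. A minimal sketch of that rule alone, under the stated update dw = mu * u_pred(t) * d(u_reflex)/dt; the actor-critic (reward-based) half of the dual learner is omitted, and the signals are invented.

```python
def ico_train(pred, reflex, mu=0.1):
    """Input correlation (ICO) learning: the weight of the predictive
    input changes as dw = mu * u_pred(t) * d(u_reflex)/dt, so inputs
    that reliably precede the reflex signal gain influence."""
    w = 0.0
    for t in range(1, len(reflex)):
        d_reflex = reflex[t] - reflex[t - 1]
        w += mu * pred[t] * d_reflex
    return w

# A cue that switches on before the reflex onset acquires positive weight:
pred = [0, 0, 1, 1, 1, 0, 0]
reflex = [0, 0, 0, 0, 1, 1, 0]
print(ico_train(pred, reflex))  # -> 0.1
```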

  17. Simple models with ALICE fluxes

    CERN Document Server

    Striet, J

    2000-01-01

    We introduce two simple models which feature an Alice electrodynamics phase. In a well-defined sense, the Alice flux solutions we obtain in these models obey first-order equations similar to those of the Nielsen-Olesen flux tube in the Abelian Higgs model in the Bogomol'nyi limit. Some numerical solutions are presented as well.

  18. A model for the two-point velocity correlation function in turbulent channel flow

    International Nuclear Information System (INIS)

    Sahay, A.; Sreenivasan, K.R.

    1996-01-01

    A relatively simple analytical expression is presented to approximate the equal-time, two-point, double-velocity correlation function in turbulent channel flow. To assess the accuracy of the model, we perform the spectral decomposition of the integral operator having the model correlation function as its kernel. Comparisons of the empirical eigenvalues and eigenfunctions with those constructed from direct numerical simulations data show good agreement. copyright 1996 American Institute of Physics
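    The spectral decomposition mentioned above can be illustrated with a generic exponential kernel (a stand-in, not the paper's model function) and plain power iteration for the most energetic mode:

```python
import math

def correlation_matrix(n, L):
    """Discretize a model two-point correlation kernel
    R(y, y') = exp(-|y - y'|/L) on n points of a unit interval."""
    ys = [i / (n - 1) for i in range(n)]
    return [[math.exp(-abs(a - b) / L) for b in ys] for a in ys]

def leading_mode(R, iters=200):
    """Power iteration for the leading eigenvalue/eigenvector of the
    (symmetric, positive) kernel matrix -- the most energetic mode of
    the spectral decomposition of the integral operator."""
    n = len(R)
    v = [1.0] * n
    lam = 0.0
    for _ in range(iters):
        w = [sum(R[i][j] * v[j] for j in range(n)) for i in range(n)]
        lam = math.sqrt(sum(x * x for x in w))
        v = [x / lam for x in w]
    return lam, v

lam, mode = leading_mode(correlation_matrix(50, 0.2))
print(lam)
```

By the Perron-Frobenius theorem the leading eigenfunction of a positive kernel is sign-definite, which the assertion on `mode` checks.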

  19. Simulation of a directed random-walk model: the effect of pseudo-random-number correlations

    OpenAIRE

    Shchur, L. N.; Heringa, J. R.; Blöte, H. W. J.

    1996-01-01

    We investigate the mechanism that leads to systematic deviations in cluster Monte Carlo simulations when correlated pseudo-random numbers are used. We present a simple model, which enables an analysis of the effects due to correlations in several types of pseudo-random-number sequences. This model provides qualitative understanding of the bias mechanism in a class of cluster Monte Carlo algorithms.

  20. Exploring performance and power properties of modern multicore chips via simple machine models

    OpenAIRE

    Hager, Georg; Treibig, Jan; Habich, Johannes; Wellein, Gerhard

    2012-01-01

    Modern multicore chips show complex behavior with respect to performance and power. Starting with the Intel Sandy Bridge processor, it has become possible to directly measure the power dissipation of a CPU chip and correlate this data with the performance properties of the running code. Going beyond a simple bottleneck analysis, we employ the recently published Execution-Cache-Memory (ECM) model to describe the single- and multi-core performance of streaming kernels. The model refines the wel...

  1. Reconstructing Organophosphorus Pesticide Doses Using the Reversed Dosimetry Approach in a Simple Physiologically-Based Pharmacokinetic Model

    Directory of Open Access Journals (Sweden)

    Chensheng Lu

    2012-01-01

    Full Text Available We illustrated the development of a simple pharmacokinetic (SPK) model aiming to estimate absorbed chlorpyrifos doses using urinary biomarker data, 3,5,6-trichlorpyridinol, as the model input. The effectiveness of the SPK model in pesticide risk assessment was evaluated by comparing dose estimates using different urinary composite data. The dose estimates resulting from the first morning voids appeared to be lower than, but not significantly different from, those using before-bedtime, lunch, or dinner voids. We found a similar trend for dose estimates using three different urinary composite data. However, the dose estimates using the SPK model for individual children were significantly higher than those from conventional physiologically based pharmacokinetic (PBPK) modeling using aggregate environmental measurements of chlorpyrifos as the model inputs. The use of urinary data in the SPK model intuitively provides a plausible alternative to the conventional PBPK model in reconstructing the absorbed chlorpyrifos dose.
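    The reversed-dosimetry step reduces to a mass-balance back-calculation from urinary metabolite to absorbed parent dose. Every parameter value below (excreted fraction, molecular weights, body weight) is an illustrative placeholder, not a value fitted in the study; the molecular weights are only approximate for chlorpyrifos and its pyridinol metabolite.

```python
def reconstruct_dose(urine_metabolite_ug, excreted_fraction,
                     mw_parent=350.6, mw_metabolite=198.4,
                     body_weight_kg=20.0):
    """Reversed-dosimetry sketch: convert the mass of urinary metabolite
    excreted over 24 h into an absorbed parent-pesticide dose
    (ug/kg/day), assuming steady state and that `excreted_fraction` of
    the absorbed dose appears in urine as the metabolite."""
    parent_ug = urine_metabolite_ug * (mw_parent / mw_metabolite)
    absorbed_ug = parent_ug / excreted_fraction
    return absorbed_ug / body_weight_kg

# 10 ug of metabolite in 24-h urine, 70% of dose excreted as metabolite:
print(reconstruct_dose(10.0, 0.7))
```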

  2. A phenomenological model for the structure-composition relationship of the high Tc cuprates based on simple chemical principles

    International Nuclear Information System (INIS)

    Alarco, J.A.; Talbot, P.C.

    2012-01-01

    A simple phenomenological model for the relationship between structure and composition of the high-Tc cuprates is presented. The model is based on two simple crystal-chemistry principles: unit cell doping and charge balance within unit cells. These principles are inspired by key experimental observations of how the materials accommodate large deviations from stoichiometry. Consistent explanations for significant HTSC properties can be given without any additional assumptions, while retaining valuable insight for geometric interpretation. Combining these two chemical principles with a review of Crystal Field Theory (CFT) or Ligand Field Theory (LFT), it becomes clear that the two oxidation states in the conduction planes (typically d8 and d9) belong to the most strongly divergent d-levels as a function of deformation from regular octahedral coordination. This observation offers a link to a range of coupling effects relating vibrations and spin waves through application of Hund’s rules. An indication of this model’s capacity to predict physical properties of HTSCs is provided and will be elaborated in subsequent publications. Simple criteria for the relationship between structure and composition in HTSC systems may guide chemical syntheses within new material systems.

  3. Simple Models for Model-based Portfolio Load Balancing Controller Synthesis

    DEFF Research Database (Denmark)

    Edlund, Kristian Skjoldborg; Mølbak, Tommy; Bendtsen, Jan Dimon

    2010-01-01

    of generation units existing in an electrical power supply network, for instance in model-based predictive control or declarative control schemes. We focus on the effectuators found in the Danish power system. In particular, the paper presents models for boiler load, district heating, condensate throttling...

  4. A model-based radiography restoration method based on simple scatter-degradation scheme for improving image visibility

    Science.gov (United States)

    Kim, K.; Kang, S.; Cho, H.; Kang, W.; Seo, C.; Park, C.; Lee, D.; Lim, H.; Lee, H.; Kim, G.; Park, S.; Park, J.; Kim, W.; Jeon, D.; Woo, T.; Oh, J.

    2018-02-01

    In conventional planar radiography, image visibility is often limited, mainly due to the superimposition of the object structure under investigation and the artifacts caused by scattered x-rays and noise. Several methods, including computed tomography (CT) as a multiplanar imaging modality, air-gap and grid techniques for the reduction of scatter, and phase-contrast imaging as an alternative image-contrast modality, have been investigated extensively in an attempt to overcome these difficulties. However, those methods typically require higher x-ray doses or special equipment. In this work, as another approach, we propose a new model-based radiography restoration method based on a simple scatter-degradation scheme, in which the intensity of scattered x-rays and the transmission function of a given object are estimated from a single x-ray image to restore the degraded image. We implemented the proposed algorithm and performed an experiment to demonstrate its viability. Our results indicate that the degradation of image characteristics by scattered x-rays and noise was effectively recovered by the proposed method, which improves image visibility in radiography considerably.
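
    The general idea can be illustrated with a toy degradation model I = I0·T + S, where the scatter field S is approximated by a scaled, heavily blurred copy of the measured image. Both the scatter estimator and the parameter values below are our assumptions for illustration, not the authors' scheme:

```python
import numpy as np
from scipy.ndimage import uniform_filter

def restore_transmission(image, i0, scatter_fraction=0.3, kernel=25):
    """Restore the object transmission function from a single
    scatter-degraded radiograph.

    Toy degradation model (not the paper's exact scheme):
        I(x, y) = I0 * T(x, y) + S(x, y)
    with S approximated as a smooth, low-frequency field proportional to
    a blurred copy of the measured image.
    """
    # Estimate the scatter field as a smooth fraction of the measured image.
    scatter = scatter_fraction * uniform_filter(image.astype(float), size=kernel)
    # Subtract the scatter estimate and normalise by the open-beam intensity I0.
    return np.clip((image - scatter) / i0, 0.0, 1.0)
```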

  5. Crises and Collective Socio-Economic Phenomena: Simple Models and Challenges

    Science.gov (United States)

    Bouchaud, Jean-Philippe

    2013-05-01

    Financial and economic history is strewn with bubbles and crashes, booms and busts, crises and upheavals of all sorts. Understanding the origin of these events is arguably one of the most important problems in economic theory. In this paper, we review recent efforts to include heterogeneities and interactions in models of decision. We argue that the so-called Random Field Ising model (RFIM) provides a unifying framework to account for many collective socio-economic phenomena that lead to sudden ruptures and crises. We discuss different models that can capture potentially destabilizing self-referential feedback loops, induced either by herding, i.e. reference to peers, or trending, i.e. reference to the past, and that account for some of the phenomenology missing in the standard models. We discuss some empirically testable predictions of these models, for example robust signatures of RFIM-like herding effects, or the logarithmic decay of spatial correlations of voting patterns. One of the most striking results, inspired by statistical physics methods, is that Adam Smith's invisible hand can fail badly at solving simple coordination problems. We also insist on the issue of time-scales, which can be extremely long in some cases and prevent socially optimal equilibria from being reached. As a theoretical challenge, the study of so-called "detailed-balance"-violating decision rules is needed to decide whether conclusions based on current models (which all assume detailed balance) are indeed robust and generic.
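
    A minimal mean-field sketch of the RFIM mechanism makes the sudden-rupture phenomenology concrete: when the herding strength is large relative to the spread of private preferences, the average opinion jumps almost discontinuously as the common incentive is swept. Parameter names and values are ours, chosen for illustration:

```python
import numpy as np

def rfim_equilibrium(fields, coupling, external, n_iter=200):
    """Mean-field Random Field Ising Model of binary decisions.

    Each agent picks s_i = +1 or -1 to align with its private preference
    f_i, a common external incentive, and the average choice of its
    peers (herding strength `coupling`). Returns the equilibrium
    average opinion.
    """
    s = np.where(fields + external >= 0, 1.0, -1.0)  # start from private optima
    for _ in range(n_iter):
        m = s.mean()  # average opinion, felt as peer pressure
        s = np.where(fields + external + coupling * m >= 0, 1.0, -1.0)
    return s.mean()

# Sweep the external incentive upwards: with strong herding the average
# opinion switches abruptly from near -1 to near +1, a sudden "crisis".
rng = np.random.default_rng(0)
private = rng.normal(0.0, 0.5, size=10_000)
sweep = [rfim_equilibrium(private, coupling=1.5, external=h)
         for h in np.linspace(-1.0, 1.0, 21)]
```

    Sweeping the incentive back down would trace a different branch (hysteresis), which is the RFIM signature discussed in the text.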

  6. Observations and models of simple nocturnal slope flows

    International Nuclear Information System (INIS)

    Doran, J.C.; Horst, J.W.

    1983-01-01

    Measurements of simple nocturnal slope winds were taken on Rattlesnake Mountain, a nearly ideal two-dimensional ridge. Tower and tethered balloon instrumentation allowed the determination of the wind and temperature characteristics of the katabatic layer as well as the ambient conditions. Two cases were chosen for study; these were marked by well-defined surface-based temperature inversions and a low-level maximum in the downslope wind component. The downslope development of the slope flow could be determined from the tower measurements, and showed a progressive strengthening of the katabatic layer. Hydraulic models developed by Manins and Sawford (1979a) and Briggs (1981) gave useful estimates of drainage layer depths, but were not otherwise applicable. A simple numerical model that relates the eddy diffusivity to the local turbulent kinetic energy was found to give good agreement with the observed wind and temperature profiles of the slope flows.

  7. Full waveform inversion using envelope-based global correlation norm

    Science.gov (United States)

    Oh, Ju-Won; Alkhalifah, Tariq

    2018-05-01

    To increase the feasibility of full waveform inversion on real data, we suggest a new objective function, which is defined as the global correlation of the envelopes of modelled and observed data. The envelope-based global correlation norm has the advantage of the envelope inversion that generates artificial low-frequency information, which provides the possibility to recover long-wavelength structure in an early stage. In addition, the envelope-based global correlation norm maintains the advantage of the global correlation norm, which reduces the sensitivity of the misfit to amplitude errors so that the performance of inversion on real data can be enhanced when the exact source wavelet is not available and more complex physics are ignored. Through the synthetic example for 2-D SEG/EAGE overthrust model with inaccurate source wavelet, we compare the performance of four different approaches, which are the least-squares waveform inversion, least-squares envelope inversion, global correlation norm and envelope-based global correlation norm. Finally, we apply the envelope-based global correlation norm on the 3-D Ocean Bottom Cable (OBC) data from the North Sea. The envelope-based global correlation norm captures the strong reflections from the high-velocity caprock and generates artificial low-frequency reflection energy that helps us recover long-wavelength structure of the model domain in the early stages. From this long-wavelength model, the conventional global correlation norm is sequentially applied to invert for higher-resolution features of the model.
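
    The objective function described above can be written compactly as the negative inner product of the normalised envelopes; the sign and normalisation conventions in this sketch are assumptions:

```python
import numpy as np
from scipy.signal import hilbert

def envelope(trace):
    """Envelope of a trace via the magnitude of its analytic signal."""
    return np.abs(hilbert(trace))

def envelope_global_correlation(modelled, observed):
    """Envelope-based global correlation misfit (to be minimised):
        J = -< E_mod / ||E_mod|| , E_obs / ||E_obs|| >.
    Normalising both envelopes removes sensitivity to amplitude errors,
    while the envelope itself carries the artificial low-frequency
    content used to recover long-wavelength structure."""
    e_mod = envelope(modelled)
    e_obs = envelope(observed)
    return -np.dot(e_mod / np.linalg.norm(e_mod),
                   e_obs / np.linalg.norm(e_obs))
```

    J reaches its minimum of -1 for envelopes that match up to a scale factor, which is exactly the amplitude insensitivity mentioned above.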

  8. A simple three-dimensional macroscopic root water uptake model based on the hydraulic architecture approach

    Directory of Open Access Journals (Sweden)

    V. Couvreur

    2012-08-01

    Full Text Available Many hydrological models including root water uptake (RWU) do not consider the dimension of root system hydraulic architecture (HA) because explicitly solving water flow in such a complex system is too time consuming. However, they may lack process understanding when basing RWU and plant water stress predictions on functions of variables such as the root length density distribution. On the basis of analytical solutions of water flow in a simple HA, we developed an "implicit" model of the root system HA for simulation of RWU distribution (the sink term of Richards' equation) and plant water stress in three-dimensional soil water flow models. The new model has three macroscopic parameters defined at the soil element scale, or at the plant scale, rather than for each segment of the root system architecture: the standard sink fraction distribution SSF, the root system equivalent conductance Krs and the compensatory RWU conductance Kcomp. It clearly decouples the process of water stress from compensatory RWU, and its structure is appropriate for hydraulic lift simulation. Compared to a model explicitly solving water flow in a realistic maize root system HA, the implicit model proved accurate in predicting RWU distribution and plant collar water potential, with a single set of parameters, in dissimilar water dynamics scenarios. For these scenarios, the computing time of the implicit model was a factor of 28 to 214 shorter than that of the explicit one. We also provide a new expression for the effective soil water potential sensed by plants in soils with a heterogeneous water potential distribution, which emerged from the implicit model equations. The proposed implicit model of the root system HA introduces new concepts that open avenues towards simple and mechanistic RWU models and water stress functions operational for field-scale water dynamics simulation.

  9. Accounting for correlated observations in an age-based state-space stock assessment model

    DEFF Research Database (Denmark)

    Berg, Casper Willestofte; Nielsen, Anders

    2016-01-01

    Fish stock assessment models often rely on size- or age-specific observations that are assumed to be statistically independent of each other. In reality, these observations are not raw observations, but rather they are estimates from a catch-standardization model or similar summary statistics base...... the independence assumption is rejected. Less fluctuating estimates of the fishing mortality are obtained due to a reduced process error. The improved model does not suffer from correlated residuals, unlike the independent model, and the variance of forecasts is decreased.

  10. H3+-WZNW correlators from Liouville theory

    International Nuclear Information System (INIS)

    Ribault, Sylvain; Teschner, Joerg

    2005-01-01

    We prove that arbitrary correlation functions of the H3+-WZNW model on a sphere have a simple expression in terms of Liouville theory correlation functions. This is based on the correspondence between the KZ and BPZ equations, and on relations between the structure constants of Liouville theory and the H3+-WZNW model. In the critical level limit, these results imply a direct link between eigenvectors of the Gaudin Hamiltonians and the problem of uniformization of Riemann surfaces. We also present an expression for correlation functions of the SL(2)/U(1) gauged WZNW model in terms of correlation functions in Liouville theory.

  11. A simple approach to quantitative analysis using three-dimensional spectra based on selected Zernike moments.

    Science.gov (United States)

    Zhai, Hong Lin; Zhai, Yue Yuan; Li, Pei Zhen; Tian, Yue Li

    2013-01-21

    A very simple approach to quantitative analysis is proposed based on the technology of digital image processing using three-dimensional (3D) spectra obtained by high-performance liquid chromatography coupled with a diode array detector (HPLC-DAD). As the region-based shape features of a grayscale image, Zernike moments with their inherent invariance properties were employed to establish the linear quantitative models. This approach was applied to the quantitative analysis of three compounds in mixed samples using 3D HPLC-DAD spectra, and three linear models were obtained, respectively. The correlation coefficients (R²) for training and test sets were more than 0.999, and the statistical parameters and strict validation supported the reliability of the established models. The analytical results suggest that the Zernike moments selected by stepwise regression can be used in the quantitative analysis of target compounds. Our study provides a new idea for quantitative analysis using 3D spectra, which can be extended to the analysis of other 3D spectra obtained by different methods or instruments.

  12. Modeling conditional correlations of asset returns

    DEFF Research Database (Denmark)

    Silvennoinen, Annastiina; Teräsvirta, Timo

    2015-01-01

    In this paper we propose a new multivariate GARCH model with a time-varying conditional correlation structure. The time-varying conditional correlations change smoothly between two extreme states of constant correlations according to a predetermined or exogenous transition variable. An LM-test is derived to test the constancy of correlations, and LM- and Wald tests to test the hypothesis of partially constant correlations. Analytical expressions for the test statistics and the required derivatives are provided to make computations feasible. An empirical example based on daily return series of five...
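
    The correlation part of such a smooth-transition model can be sketched directly: the conditional correlation matrix is a convex combination of two extreme states, weighted by a logistic function of the transition variable. This is a sketch of the correlation dynamics only, not the full GARCH specification, and the parameter values are illustrative:

```python
import numpy as np

def logistic_transition(s, gamma, c):
    """G(s) in (0, 1): smooth transition driven by the variable s,
    with slope gamma and location c."""
    return 1.0 / (1.0 + np.exp(-gamma * (s - c)))

def smooth_transition_correlation(r_low, r_high, s, gamma=5.0, c=0.0):
    """Conditional correlation matrix moving smoothly between two extreme
    constant-correlation states as the transition variable s evolves."""
    g = logistic_transition(s, gamma, c)
    return (1.0 - g) * r_low + g * r_high
```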

  13. A Simple Physics-Based Model Predicts Oil Production from Thousands of Horizontal Wells in Shales

    KAUST Repository

    Patzek, Tadeusz; Saputra, Wardana; Kirati, Wissem

    2017-01-01

    and ultimate recovery in shale wells. Here we introduce a simple model of producing oil and solution gas from the horizontal hydrofractured wells. This model is consistent with the basic physics and geometry of the extraction process. We then apply our model

  14. A simple statistical model for geomagnetic reversals

    Science.gov (United States)

    Constable, Catherine

    1990-01-01

    The diversity of paleomagnetic records of geomagnetic reversals now available indicate that the field configuration during transitions cannot be adequately described by simple zonal or standing field models. A new model described here is based on statistical properties inferred from the present field and is capable of simulating field transitions like those observed. Some insight is obtained into what one can hope to learn from paleomagnetic records. In particular, it is crucial that the effects of smoothing in the remanence acquisition process be separated from true geomagnetic field behavior. This might enable us to determine the time constants associated with the dominant field configuration during a reversal.

  15. Foreshock and aftershocks in simple earthquake models.

    Science.gov (United States)

    Kazemian, J; Tiampo, K F; Klein, W; Dominguez, R

    2015-02-27

    Many models of earthquake faults have been introduced that connect Gutenberg-Richter (GR) scaling to triggering processes. However, natural earthquake fault systems are composed of a variety of different geometries and materials and the associated heterogeneity in physical properties can cause a variety of spatial and temporal behaviors. This raises the question of how the triggering process and the structure interact to produce the observed phenomena. Here we present a simple earthquake fault model based on the Olami-Feder-Christensen and Rundle-Jackson-Brown cellular automata models with long-range interactions that incorporates a fixed percentage of stronger sites, or asperity cells, into the lattice. These asperity cells are significantly stronger than the surrounding lattice sites but eventually rupture when the applied stress reaches their higher threshold stress. The introduction of these spatial heterogeneities results in temporal clustering in the model that mimics that seen in natural fault systems along with GR scaling. In addition, we observe sequences of activity that start with a gradually accelerating number of larger events (foreshocks) prior to a main shock that is followed by a tail of decreasing activity (aftershocks). This work provides further evidence that the spatial and temporal patterns observed in natural seismicity are strongly influenced by the underlying physical properties and are not solely the result of a simple cascade mechanism.
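
    The mechanism can be illustrated with a small OFC-style cellular automaton in which a fixed fraction of asperity cells carries a higher failure threshold. For brevity this sketch uses nearest-neighbour stress transfer rather than the long-range interactions of the paper, and all parameter values are illustrative:

```python
import numpy as np

def ofc_step(stress, thresholds, alpha=0.2, drive=1e-3):
    """One driving step of a simplified Olami-Feder-Christensen automaton.

    All cells are loaded uniformly; any cell at or above its threshold
    topples, resetting to zero and passing a fraction alpha of its stress
    to each nearest neighbour (open boundaries dissipate). Returns the
    avalanche size, i.e. the number of topplings triggered.
    """
    stress += drive
    size = 0
    unstable = np.argwhere(stress >= thresholds)
    while unstable.size:
        for i, j in unstable:
            s = stress[i, j]
            stress[i, j] = 0.0
            for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                ni, nj = i + di, j + dj
                if 0 <= ni < stress.shape[0] and 0 <= nj < stress.shape[1]:
                    stress[ni, nj] += alpha * s
            size += 1
        unstable = np.argwhere(stress >= thresholds)
    return size

# Lattice with 5% asperity cells at double the normal threshold.
rng = np.random.default_rng(1)
n = 32
thresholds = np.ones((n, n))
thresholds[rng.random((n, n)) < 0.05] = 2.0
stress = 0.5 * rng.random((n, n))
sizes = [ofc_step(stress, thresholds) for _ in range(2000)]
```

    The asperity cells store stress across many small events and release it in larger ones, which is the ingredient the abstract links to foreshock, main shock and aftershock sequences.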

  16. Correlation Models for Temperature Fields

    KAUST Repository

    North, Gerald R.

    2011-05-16

    This paper presents derivations of some analytical forms for spatial correlations of evolving random fields governed by a white-noise-driven damped diffusion equation that is the analog of autoregressive order 1 in time and autoregressive order 2 in space. The study considers the two-dimensional plane and the surface of a sphere, both of which have been studied before, but here time is introduced to the problem. Such models have a finite characteristic length (roughly the separation at which the autocorrelation falls to 1/e) and a relaxation time scale. In particular, the characteristic length of a particular temporal Fourier component of the field increases to a finite value as the frequency of the particular component decreases. Some near-analytical formulas are provided for the results. A potential application is to the correlation structure of surface temperature fields and to the estimation of large area averages, depending on how the original datastream is filtered into a distribution of Fourier frequencies (e.g., moving average, low pass, or narrow band). The form of the governing equation is just that of the simple energy balance climate models, which have a long history in climate studies. The physical motivation provided by the derivation from a climate model provides some heuristic appeal to the approach and suggests extensions of the work to nonuniform cases.
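
    At zero temporal frequency, such a white-noise-driven damped diffusion field on the plane has a Whittle-Matérn spatial autocorrelation c(r) = (r/L) K₁(r/L). The sketch below evaluates this special case; the frequency dependence of the characteristic length discussed above is omitted:

```python
import numpy as np
from scipy.special import kv  # modified Bessel function of the second kind

def damped_diffusion_correlation(r, length):
    """Spatial autocorrelation c(r) = (r/L) * K_1(r/L) of the zero-frequency
    component of a white-noise-driven damped diffusion field on the plane
    (Whittle-Matern form, normalised so that c -> 1 as r -> 0)."""
    x = np.asarray(r, dtype=float) / length
    safe = np.where(x > 0.0, x, 1.0)       # avoid evaluating K_1 at 0
    return np.where(x > 0.0, x * kv(1, safe), 1.0)
```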


  18. Selection of an appropriately simple storm runoff model

    Directory of Open Access Journals (Sweden)

    A. I. J. M. van Dijk

    2010-03-01

    Full Text Available An appropriately simple event runoff model for catchment hydrological studies was derived. The model was selected from several variants as having the optimum balance between simplicity and the ability to explain daily observations of streamflow from 260 Australian catchments (23–1902 km²). Event rainfall and runoff were estimated from the observations through a combination of baseflow separation and storm flow recession analysis, producing a storm flow recession coefficient (kQF). Various model structures with up to six free parameters were investigated, covering most of the equations applied in existing lumped catchment models. The performance of alternative structures and free parameters was expressed in Akaike's Final Prediction Error Criterion (FPEC) and corresponding Nash-Sutcliffe model efficiencies (NSME) for event runoff totals. For each model variant, the number of free parameters was reduced in steps based on calculated parameter sensitivity. The resulting optimal model structure had two or three free parameters; the first describing the non-linear relationship between event rainfall and runoff (Smax), the second relating runoff to antecedent groundwater storage (CSg), and a third describing initial rainfall losses (Li), which could be set at 8 mm without affecting model performance too much. The best three-parameter model produced a median NSME of 0.64 and outperformed, for example, the Soil Conservation Service Curve Number technique (median NSME 0.30–0.41). Parameter estimation in ungauged catchments is likely to be challenging: 64% of the variance in kQF among stations could be explained by catchment climate indicators and spatial correlation, but the corresponding numbers were a modest 45% for CSg, 21% for Smax and none for Li. In gauged catchments, better
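
    The abstract names the three parameters but not the equations, so the sketch below uses placeholder functional forms in the spirit of curve-number-type models (a saturating non-linearity for Smax, a multiplicative antecedent-wetness term for CSg, a fixed initial loss Li); it is not the authors' model:

```python
def event_runoff(p_mm, groundwater_index, smax=100.0, csg=0.5, li=8.0):
    """Illustrative three-parameter event runoff sketch (units: mm).

    p_mm: event rainfall; groundwater_index: dimensionless antecedent
    groundwater storage indicator. The functional forms are placeholders;
    only the roles of Smax, CSg and Li follow the abstract.
    """
    pe = max(p_mm - li, 0.0)        # effective rainfall after initial loss Li
    q = pe * pe / (pe + smax)       # saturating non-linearity controlled by Smax
    return q * (1.0 + csg * groundwater_index)  # wetter catchment, more runoff
```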

  19. Early warning model based on correlated networks in global crude oil markets

    Science.gov (United States)

    Yu, Jia-Wei; Xie, Wen-Jie; Jiang, Zhi-Qiang

    2018-01-01

    Applying network tools to predicting and warning of systemic risks provides a novel avenue to manage risks in financial markets. Here, we construct a series of global crude oil correlated networks based on the historical prices of 57 crude oils covering the period from 1993 to 2012. Two systemic risk indicators are constructed based on the density and modularity of the correlated networks. The local maxima of the risk indicators are found to have the ability to predict the trends of oil prices. In our sample periods, the indicator based on the network density sends five signals and the indicator based on the modularity index sends four signals. The four signals sent by both indicators are able to warn of the drop of future oil prices, and the signal sent only by the network density is followed by a sharp rise in oil prices. Our results deepen the application of network measures in building early warning models of systemic risks and can be applied to predict the trends of future prices in financial markets.
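
    The two indicators can be sketched with standard network tools: build a graph whose edges link strongly correlated price series, then read off its density and the modularity of a partition. The 0.5 correlation cutoff and the use of connected components as the partition are simplifications for illustration, not the paper's construction:

```python
import numpy as np
import networkx as nx

def correlation_network(returns, threshold=0.5):
    """Link two price series when the correlation of their returns
    exceeds the threshold (cutoff value illustrative)."""
    corr = np.corrcoef(returns)
    n = corr.shape[0]
    g = nx.Graph()
    g.add_nodes_from(range(n))
    for i in range(n):
        for j in range(i + 1, n):
            if corr[i, j] > threshold:
                g.add_edge(i, j)
    return g

def risk_indicators(g):
    """Density and modularity, the two systemic-risk indicators named in
    the abstract. Here the partition comes from connected components;
    any community-detection algorithm could supply it instead."""
    parts = list(nx.connected_components(g))
    return nx.density(g), nx.algorithms.community.modularity(g, parts)
```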

  20. Simple implementation of general dark energy models

    International Nuclear Information System (INIS)

    Bloomfield, Jolyon K.; Pearson, Jonathan A.

    2014-01-01

    We present a formalism for the numerical implementation of general theories of dark energy, combining the computational simplicity of the equation of state for perturbations approach with the generality of the effective field theory approach. An effective fluid description is employed, based on a general action describing single-scalar field models. The formalism is developed from first principles, and constructed keeping the goal of a simple implementation into CAMB in mind. Benefits of this approach include its straightforward implementation, the generality of the underlying theory, the fact that the evolved variables are physical quantities, and that model-independent phenomenological descriptions may be straightforwardly investigated. We hope this formulation will provide a powerful tool for the comparison of theoretical models of dark energy with observational data

  1. Copula-based modeling of degree-correlated networks

    International Nuclear Information System (INIS)

    Raschke, Mathias; Schläpfer, Markus; Trantopoulos, Konstantinos

    2014-01-01

    Dynamical processes on complex networks such as information exchange, innovation diffusion, cascades in financial networks or epidemic spreading are highly affected by their underlying topologies as characterized by, for instance, degree–degree correlations. Here, we introduce the concept of copulas in order to generate random networks with an arbitrary degree distribution and a rich a priori degree–degree correlation (or ‘association’) structure. The accuracy of the proposed formalism and corresponding algorithm is numerically confirmed, while the method is tested on a real-world network of yeast protein–protein interactions. The derived network ensembles can be systematically deployed as proper null models, in order to unfold the complex interplay between the topology of real-world networks and the dynamics on top of them. (paper)
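
    The core construction can be sketched with a Gaussian copula: correlated uniforms are generated from a bivariate normal and pushed through the inverse CDF of an arbitrary degree distribution, giving joint degrees (k, k') at the two ends of an edge with tunable association. This illustrates the idea, not the paper's algorithm:

```python
import numpy as np
from scipy.stats import norm

def gaussian_copula_degrees(n_edges, degree_values, degree_probs, rho, seed=0):
    """Sample joint degree pairs (k, k') with an arbitrary marginal degree
    distribution and association controlled by the copula parameter rho."""
    rng = np.random.default_rng(seed)
    # Correlated standard normals -> correlated uniforms via the normal CDF.
    z = rng.multivariate_normal([0.0, 0.0], [[1.0, rho], [rho, 1.0]],
                                size=n_edges)
    u = norm.cdf(z)
    # Inverse-CDF transform to the target degree distribution per margin.
    cdf = np.cumsum(degree_probs)
    k = degree_values[np.searchsorted(cdf, u[:, 0])]
    k_prime = degree_values[np.searchsorted(cdf, u[:, 1])]
    return k, k_prime
```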

  2. A simple model of bedform migration

    DEFF Research Database (Denmark)

    Bartholdy, Jesper; Ernstsen, Verner Brandbyge; Flemming, Burg W

    2010-01-01

    A model linking subaqueous dune migration to the effective (grain related) shear stress is calibrated by means of flume data for bedform dimensions and migration rates. The effective shear stress is calculated on the basis of a new method assuming a near-bed layer above the mean bed level in which the current velocity accelerates towards the bedform crest. As a consequence, the effective bed shear stress corresponds to the shear stress acting directly on top of the bedform. The model operates with the critical Shields stress as a function of grain size, and predicts the deposition (volume per unit time and width) of naturally-packed bed material on the bedform lee side, qb(crest). The model is simple, built on a rational description of simplified sediment mechanics, and its calibration constant can be explained in accordance with estimated values of the physical constants on which it is based. Predicted...

  3. Correlation-based Transition Modeling for External Aerodynamic Flows

    Science.gov (United States)

    Medida, Shivaji

    Conventional turbulence models calibrated for fully turbulent boundary layers often over-predict drag and heat transfer on aerodynamic surfaces with partially laminar boundary layers. A robust correlation-based model is developed for use in Reynolds-Averaged Navier-Stokes simulations to predict laminar-to-turbulent transition onset of boundary layers on external aerodynamic surfaces. The new model is derived from an existing transition model for the two-equation k-omega Shear Stress Transport (SST) turbulence model, and is coupled with the one-equation Spalart-Allmaras (SA) turbulence model. The transition model solves two transport equations, for intermittency and transition momentum thickness Reynolds number. Experimental correlations and local mean flow quantities are used in the model to account for the effects of freestream turbulence level and pressure gradients on transition onset location. Transition onset is triggered by activating intermittency production using a vorticity Reynolds number criterion. In the new model, the production and destruction terms of the intermittency equation are modified to improve consistency in the fully turbulent boundary layer post-transition onset, as well as to ensure insensitivity to the freestream eddy viscosity value specified in the SA model. In the original model, intermittency was used to control both production and destruction of turbulent kinetic energy, whereas in the new model only the production of eddy viscosity in the SA model is controlled, and the destruction term is not altered. Unlike the original model, the new model does not use an additional correction to intermittency for separation-induced transition. The accuracy of drag predictions is improved significantly with the use of the transition model for several two-dimensional single- and multi-element airfoil cases over a wide range of Reynolds numbers.
The new model is able to predict the formation of stable and long laminar separation bubbles on low-Reynolds number airfoils that

  4. Chiral correlators in Landau-Ginsburg theories and N=2 superconformal models

    International Nuclear Information System (INIS)

    Howe, P.S.; West, P.C.

    1989-01-01

    Chiral correlation functions are computed in N=2 Landau-Ginsburg models using the ε-expansion and the superconformal Ward identities for the Landau-Ginsburg effective action. They are also computed directly using superconformal model techniques. The same results are obtained yielding further confirmation of the identification of superconformal minimal models with Landau-Ginsburg models evaluated at their fixed points. The formulae for the chiral commutators that we compute are extremely simple when expressed in terms of effective actions. (orig.)

  5. Mean spherical model for hard ions and dipoles: Thermodynamics and correlation functions

    International Nuclear Information System (INIS)

    Vericat, F.; Blum, L.

    1980-01-01

    The solution of the mean spherical model of a mixture of equal-size hard ions and dipoles is reinvestigated. Simple expressions for the coefficients of the Laplace transform of the pair correlation function and the other thermodynamic properties are given

  6. Effects of correlated parameters and uncertainty in electronic-structure-based chemical kinetic modelling

    Science.gov (United States)

    Sutton, Jonathan E.; Guo, Wei; Katsoulakis, Markos A.; Vlachos, Dionisios G.

    2016-04-01

    Kinetic models based on first principles are becoming commonplace in heterogeneous catalysis because of their ability to interpret experimental data, identify the rate-controlling step, guide experiments and predict novel materials. To overcome the tremendous computational cost of estimating parameters of complex networks on metal catalysts, approximate quantum mechanical calculations are employed that render models potentially inaccurate. Here, by introducing correlative global sensitivity analysis and uncertainty quantification, we show that neglecting correlations in the energies of species and reactions can lead to an incorrect identification of influential parameters and key reaction intermediates and reactions. We rationalize why models often underpredict reaction rates and show that, despite the uncertainty being large, the method can, in conjunction with experimental data, identify influential missing reaction pathways and provide insights into the catalyst active site and the kinetic reliability of a model. The method is demonstrated in ethanol steam reforming for hydrogen production for fuel cells.

  7. Simple model of inhibition of chain-branching combustion processes

    Science.gov (United States)

    Babushok, Valeri I.; Gubernov, Vladimir V.; Minaev, Sergei S.; Miroshnichenko, Taisia P.

    2017-11-01

    A simple kinetic model has been suggested to describe the inhibition and extinction of flame propagation in reaction systems with chain-branching reactions typical of hydrocarbon systems. The model is based on the generalised model of the combustion process with a chain-branching reaction, combined with the one-stage reaction describing the thermal mode of flame propagation, with the addition of inhibition reaction steps. Inhibitor addition suppresses the radical overshoot in the flame and changes the reaction mode from the chain-branching reaction to a thermal mode of flame propagation. With increasing inhibitor concentration, a transition from the chain-branching mode of reaction to a straight-chain (non-branching) reaction is observed. The inhibition part of the model comprises a block of three reactions describing the influence of the inhibitor. Heat losses are incorporated into the model via Newton cooling. Flame extinction results from the decreased heat release of the inhibited reaction processes and the suppression of the radical overshoot, with the reaction rate further reduced by the temperature decrease and mixture dilution. A comparison of the results of modelling laminar premixed methane/air flames inhibited by potassium bicarbonate (gas-phase model, detailed kinetic model) with the results obtained using the suggested simple model is presented. The calculations with the detailed kinetic model demonstrate the following modes of the combustion process: (1) flame propagation with chain-branching reaction (with radical overshoot; inhibitor addition decreases the radical overshoot down to the equilibrium level); (2) saturation of the chemical influence of the inhibitor; and (3) transition to the thermal mode of flame propagation (non-branching chain mode of reaction). The suggested simple kinetic model qualitatively reproduces the modes of flame propagation with the addition of the inhibitor observed using detailed kinetic models.

  8. Real-Time Corrected Traffic Correlation Model for Traffic Flow Forecasting

    Directory of Open Access Journals (Sweden)

    Hua-pu Lu

    2015-01-01

    Full Text Available This paper focuses on the problem of short-term traffic flow forecasting. The main goal is to put forward a traffic correlation model and a real-time correction algorithm for traffic flow forecasting. The traffic correlation model is established based on the temporal-spatial-historical correlation characteristics of traffic big data. In order to simplify the traffic correlation model, this paper presents a correction-coefficient optimization algorithm. Considering the multistate characteristic of traffic big data, a dynamic part is added to the traffic correlation model. A real-time correction algorithm based on a fuzzy neural network is presented to overcome the nonlinear mapping problem. A case study based on a real-world road network in Beijing, China, is implemented to test the efficiency and applicability of the proposed modeling methods.
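As a rough sketch of the idea described above (not the paper's fuzzy-neural-network algorithm), a base forecast can be formed as a weighted combination of temporal, spatial and historical correlates and then shifted by recent forecast errors; all weights and values below are invented for illustration:

```python
import statistics

def base_prediction(temporal, spatial, historical, w=(0.5, 0.3, 0.2)):
    """Weighted combination of temporal, spatial and historical
    correlates; the weights are hypothetical stand-ins for the
    paper's optimized correction coefficients."""
    return w[0] * temporal + w[1] * spatial + w[2] * historical

def corrected_prediction(pred, recent_errors):
    """Real-time correction sketch: shift the base prediction by the
    mean of recent forecast errors (a crude stand-in for the paper's
    fuzzy-neural-network correction)."""
    if not recent_errors:
        return pred
    return pred + statistics.mean(recent_errors)

# Hypothetical flows in veh/h; recent errors show under-prediction
pred = base_prediction(430.0, 410.0, 415.0)
adjusted = corrected_prediction(pred, [12.0, 8.0, 10.0])
```

The correction term plays the role of the "dynamic part": it pulls the static correlation-based forecast toward current conditions.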

  9. Improving stability of prediction models based on correlated omics data by using network approaches.

    Directory of Open Access Journals (Sweden)

    Renaud Tissier

    Full Text Available Building prediction models based on complex omics datasets such as transcriptomics, proteomics, and metabolomics remains a challenge in bioinformatics and biostatistics. Regularized regression techniques are typically used to deal with the high dimensionality of these datasets. However, due to the presence of correlation in the datasets, it is difficult to select the best model, and application of these methods yields unstable results. We propose a novel strategy for model selection in which the obtained models also perform well in terms of overall predictability. Several three-step approaches are considered, where the steps are (1) network construction, (2) clustering to empirically derive modules or pathways, and (3) building a prediction model incorporating the information on the modules. For the first step, we use weighted correlation networks and Gaussian graphical modelling. Identification of groups of features is performed by hierarchical clustering. The grouping information is included in the prediction model by using group-based variable selection or group-specific penalization. We compare the performance of our new approaches with standard regularized regression via simulations. Based on these results we provide recommendations for selecting a strategy for building a prediction model given the specific goal of the analysis and the sizes of the datasets. Finally, we illustrate the advantages of our approach by applying the methodology to two problems, namely prediction of body mass index in the DIetary, Lifestyle, and Genetic determinants of Obesity and Metabolic syndrome study (DILGOM) and prediction of the response of each breast cancer cell line to treatment with specific drugs using a breast cancer cell lines pharmacogenomics dataset.

  10. A new interpretation and validation of variance based importance measures for models with correlated inputs

    Science.gov (United States)

    Hao, Wenrui; Lu, Zhenzhou; Li, Luyi

    2013-05-01

    In order to explore the contributions by correlated input variables to the variance of the output, a novel interpretation framework of importance measure indices is proposed for a model with correlated inputs, which includes the indices of the total correlated contribution and the total uncorrelated contribution. The proposed indices accurately describe the connotations of the contributions by the correlated input to the variance of output, and they can be viewed as the complement and correction of the interpretation about the contributions by the correlated inputs presented in "Estimation of global sensitivity indices for models with dependent variables, Computer Physics Communications, 183 (2012) 937-946". Both of them contain the independent contribution by an individual input. Taking the general form of quadratic polynomial as an illustration, the total correlated contribution and the independent contribution by an individual input are derived analytically, from which the components and their origins of both contributions of correlated input can be clarified without any ambiguity. In the special case that no square term is included in the quadratic polynomial model, the total correlated contribution by the input can be further decomposed into the variance contribution related to the correlation of the input with other inputs and the independent contribution by the input itself, and the total uncorrelated contribution can be further decomposed into the independent part by interaction between the input and others and the independent part by the input itself. Numerical examples are employed and their results demonstrate that the derived analytical expressions of the variance-based importance measure are correct, and the clarification of the correlated input contribution to model output by the analytical derivation is very important for expanding the theory and solutions of uncorrelated input to those of the correlated one.

  11. Measurement and correlation of antifungal drugs solubility in pure supercritical CO{sub 2} using semiempirical models

    Energy Technology Data Exchange (ETDEWEB)

    Yamini, Yadollah, E-mail: yyamini@modares.ac.ir [Department of Chemistry, Faculty of Sciences, Tarbiat Modares University, P.O. Box 14115-175, Tehran (Iran, Islamic Republic of); Moradi, Morteza [Department of Chemistry, Faculty of Sciences, Tarbiat Modares University, P.O. Box 14115-175, Tehran (Iran, Islamic Republic of)

    2011-07-15

    Highlights: > Ketoconazole (KZ) and clotrimazole (CZ) are two antifungal drugs. > The solubilities of KZ and CZ were measured in supercritical CO{sub 2}. > The experimental results were correlated using five density-based models. > The heats of drug-CO{sub 2} solvation and drug vaporization were estimated. - Abstract: In the present study the solubilities of two antifungal drugs, ketoconazole and clotrimazole, in supercritical carbon dioxide were measured using a simple static method. The experimental data were measured at (308 to 348) K, over the pressure range of (12.2 to 35.5) MPa. The mole fraction solubilities ranged from 0.2 · 10{sup -6} to 17.45 · 10{sup -5}. Five density-based models were used to calculate the solubility of the drugs in supercritical carbon dioxide: Chrastil, modified Chrastil, Bartle, modified Bartle, and Mendez-Santiago and Teja (M-T). Interaction parameters for the studied models were obtained and the percentage of average absolute relative deviation (AARD%) in each calculation is reported. The correlation results showed good agreement with the experimental data. A comparison among the five models revealed that the Bartle model and its modification gave much better correlations of the solubility data, with AARD% ranging from 4.8% to 6.2% and from 4.5% to 6.3% for ketoconazole and clotrimazole, respectively. Using the correlation results, the heats of drug-CO{sub 2} solvation and of drug vaporization were separately estimated in the ranges of (-22.1 to -26.4) and (88.3 to 125.9) kJ · mol{sup -1}.
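The Chrastil model named in the abstract relates solubility S to solvent density ρ and temperature T through ln S = k ln ρ + a/T + b, which is linear in the parameters (k, a, b). A minimal sketch of fitting these interaction parameters by least squares, using synthetic data generated from assumed parameter values rather than the paper's measurements:

```python
import math

def chrastil_ln_solubility(rho, T, k, a, b):
    # Chrastil form: ln S = k*ln(rho) + a/T + b
    return k * math.log(rho) + a / T + b

def fit_chrastil(data):
    """Least-squares fit of (k, a, b) from (rho, T, ln S) triples by
    solving the 3x3 normal equations with Gaussian elimination."""
    A, y = [], []
    for rho, T, lnS in data:
        A.append([math.log(rho), 1.0 / T, 1.0])  # design-matrix row
        y.append(lnS)
    n = 3
    # Normal equations: (A^T A) x = A^T y
    M = [[sum(A[r][i] * A[r][j] for r in range(len(A))) for j in range(n)]
         for i in range(n)]
    v = [sum(A[r][i] * y[r] for r in range(len(A))) for i in range(n)]
    # Gaussian elimination with partial pivoting
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        v[col], v[piv] = v[piv], v[col]
        for r in range(col + 1, n):
            f = M[r][col] / M[col][col]
            for c in range(col, n):
                M[r][c] -= f * M[col][c]
            v[r] -= f * v[col]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (v[r] - sum(M[r][c] * x[c] for c in range(r + 1, n))) / M[r][r]
    return x  # [k, a, b]

# Synthetic data from assumed parameters, then recovered by the fit;
# densities in kg/m^3 and temperatures spanning the study's range
true_k, true_a, true_b = 5.2, -3800.0, -20.0
data = [(rho, T, chrastil_ln_solubility(rho, T, true_k, true_a, true_b))
        for rho in (400.0, 550.0, 700.0, 850.0)
        for T in (308.0, 328.0, 348.0)]
k, a, b = fit_chrastil(data)
```

With real data the same fit would be run per drug, and AARD% computed from the residuals between measured and back-calculated solubilities.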

  12. Simple Models for the Dynamic Modeling of Rotating Tires

    Directory of Open Access Journals (Sweden)

    J.C. Delamotte

    2008-01-01

    Full Text Available Large Finite Element (FE models of tires are currently used to predict low frequency behavior and to obtain dynamic model coefficients used in multi-body models for riding and comfort. However, to predict higher frequency behavior, which may explain irregular wear, critical rotating speeds and noise radiation, FE models are not practical. Detailed FE models are not adequate for optimization and uncertainty predictions either, as in such applications the dynamic solution must be computed a number of times. Therefore, there is a need for simpler models that can capture the physics of the tire and be used to compute the dynamic response with a low computational cost. In this paper, the spectral (or continuous element approach is used to derive such a model. A circular beam spectral element that takes into account the string effect is derived, and a method to simulate the response to a rotating force is implemented in the frequency domain. The behavior of a circular ring under different internal pressures is investigated using modal and frequency/wavenumber representations. Experimental results obtained with a real untreaded truck tire are presented and qualitatively compared with the simple model predictions with good agreement. No attempt is made to obtain equivalent parameters for the simple model from the real tire results. On the other hand, the simple model fails to represent the correct variation of the quotient of the natural frequency by the number of circumferential wavelengths with the mode count. Nevertheless, some important features of the real tire dynamic behavior, such as the generation of standing waves and part of the frequency/wavenumber behavior, can be investigated using the proposed simplified model.

  13. A simple model for yield prediction of rice based on vegetation index derived from satellite and AMeDAS data during ripening period

    International Nuclear Information System (INIS)

    Wakiyama, Y.; Inoue, K.; Nakazono, K.

    2003-01-01

    The present study presents a simple model for predicting rice yield using a vegetation index (NDVI) derived from satellite data together with meteorological data. In a field experiment, the relationship between the vegetation index and radiation absorbed by the rice canopy was investigated from transplanting to maturity, and a close correlation was found, showing that the vegetation index can be used as a measure of the absorptance of solar radiation by the rice canopy. NDVI multiplied by daily solar radiation (SR) was accumulated (Σ(SR·NDVI)) over the field experiment and plotted against above-ground dry matter, revealing a strong relationship. Rice yield largely depends on solar radiation and air temperature during the ripening period, with air temperature affecting dry matter production. The relationship between Y·SR⁻¹ (Y: rice yield, SR: solar radiation) and mean air temperature was investigated from meteorological data and statistical data on rice yield. There was an optimum air temperature for ripening, 21.3°C; when the mean temperature during the ripening period was near 21.3°C, rice yield was higher. Based on these results, a simple model for rice yield prediction was proposed, composed of SR·NDVI and the optimum air temperature. The vegetation index was derived from 3 years of LANDSAT TM data at heading in Toyama, Ishikawa, Fukui and Nagano prefectures, and the meteorological data were taken from AMeDAS. The model is: Y = 0.728 SR·NDVI − 2.04(T − 21.3)² + 282 (r² = 0.65, n = 43), where Y is rice yield (kg 10a⁻¹), SR is solar radiation (MJ m⁻²) during the ripening period (from 10 days before heading to 30 days after heading), and T is mean air temperature (°C) during the ripening period. RMSE was 33.7 kg 10a⁻¹, indicating good precision of the model. (author)
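The regression above is straightforward to evaluate; a minimal sketch (the input values below are invented for illustration, not taken from the study):

```python
def rice_yield(sr_ndvi, temp_c):
    """Rice yield (kg 10a^-1) from the abstract's regression:
    Y = 0.728*(SR*NDVI) - 2.04*(T - 21.3)**2 + 282."""
    return 0.728 * sr_ndvi - 2.04 * (temp_c - 21.3) ** 2 + 282.0

# Hypothetical ripening-period inputs: SR*NDVI = 400 MJ m^-2 at the
# optimum temperature vs. a ripening period 3 K warmer
y_optimal = rice_yield(400.0, 21.3)
y_warmer = rice_yield(400.0, 24.3)
```

The quadratic temperature term penalises any departure from the 21.3°C optimum, in either direction, so the warmer scenario predicts a lower yield.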

  14. Unified theory for stochastic modelling of hydroclimatic processes: Preserving marginal distributions, correlation structures, and intermittency

    Science.gov (United States)

    Papalexiou, Simon Michael

    2018-05-01

    Hydroclimatic processes come in all "shapes and sizes". They are characterized by different spatiotemporal correlation structures and probability distributions that can be continuous, mixed-type, discrete or even binary. Simulating such processes by reproducing precisely their marginal distribution and linear correlation structure, including features like intermittency, can greatly improve hydrological analysis and design. Traditionally, modelling schemes are case specific and typically attempt to preserve few statistical moments providing inadequate and potentially risky distribution approximations. Here, a single framework is proposed that unifies, extends, and improves a general-purpose modelling strategy, based on the assumption that any process can emerge by transforming a specific "parent" Gaussian process. A novel mathematical representation of this scheme, introducing parametric correlation transformation functions, enables straightforward estimation of the parent-Gaussian process yielding the target process after the marginal back transformation, while it provides a general description that supersedes previous specific parameterizations, offering a simple, fast and efficient simulation procedure for every stationary process at any spatiotemporal scale. This framework, also applicable for cyclostationary and multivariate modelling, is augmented with flexible parametric correlation structures that parsimoniously describe observed correlations. Real-world simulations of various hydroclimatic processes with different correlation structures and marginals, such as precipitation, river discharge, wind speed, humidity, extreme events per year, etc., as well as a multivariate example, highlight the flexibility, advantages, and complete generality of the method.
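A minimal sketch of the "parent-Gaussian" strategy the abstract describes: simulate a Gaussian AR(1) process, map each value through the normal CDF, and apply the inverse CDF of the target marginal. The exponential marginal, AR(1) structure, and parameter values here are illustrative choices, not the paper's parameterization:

```python
import math
import random

def phi(z):
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

def simulate_target(n, rho=0.7, rate=2.0, seed=1):
    """Simulate a correlated process with an exponential marginal by
    transforming a Gaussian AR(1) 'parent' process (rho and rate are
    hypothetical parameters)."""
    random.seed(seed)
    z = random.gauss(0.0, 1.0)
    out = []
    for _ in range(n):
        # AR(1) step keeps the parent process stationary with unit variance
        z = rho * z + math.sqrt(1.0 - rho * rho) * random.gauss(0.0, 1.0)
        u = phi(z)                             # uniform(0,1) marginal
        out.append(-math.log(1.0 - u) / rate)  # exponential quantile
    return out

x = simulate_target(1000)
```

The point of the framework is that the correlation of the parent Gaussian can be chosen (via the correlation transformation functions) so that the transformed process matches a target correlation structure, not just the target marginal.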

  15. Memory-Based Simple Heuristics as Attribute Substitution: Competitive Tests of Binary Choice Inference Models

    Science.gov (United States)

    Honda, Hidehito; Matsuka, Toshihiko; Ueda, Kazuhiro

    2017-01-01

    Some researchers on binary choice inference have argued that people make inferences based on simple heuristics, such as recognition, fluency, or familiarity. Others have argued that people make inferences based on available knowledge. To examine the boundary between heuristic and knowledge usage, we examine binary choice inference processes in…

  16. Validation of the replica trick for simple models

    Science.gov (United States)

    Shinzato, Takashi

    2018-04-01

    We discuss the replica analytic continuation using several simple models in order to prove mathematically the validity of the replica analysis, which is used in a wide range of fields related to large-scale complex systems. While replica analysis consists of two analytical techniques—the replica trick (or replica analytic continuation) and the thermodynamical limit (and/or order parameter expansion)—we focus our study on replica analytic continuation, which is the mathematical basis of the replica trick. We apply replica analysis to solve a variety of analytical models, and examine the properties of replica analytic continuation. Based on the positive results for these models we propose that replica analytic continuation is a robust procedure in replica analysis.

  17. A Study of Simple Diffraction Models

    DEFF Research Database (Denmark)

    Agerkvist, Finn

    In this paper two simple methods for modelling cabinet edge diffraction are examined. Calculations with both models are compared with more sophisticated theoretical models and with measured data. The parameters involved are studied and their importance for normal loudspeaker box designs is examined.

  18. A Simple Probabilistic Combat Model

    Science.gov (United States)

    2016-06-13

    The Lanchester combat model is a simple way to assess the effects of quantity and quality of opposing forces. The report covers a deterministic Lanchester model and a simple probabilistic combat model.
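The deterministic Lanchester square-law model referred to above can be sketched with forward-Euler integration; the force sizes and effectiveness coefficients below are invented:

```python
def lanchester_square(x0, y0, a, b, dt=0.001, t_max=50.0):
    """Euler integration of the deterministic Lanchester square-law
    equations dx/dt = -b*y, dy/dt = -a*x, stopping when one side
    is annihilated. a and b are per-unit effectiveness rates."""
    x, y, t = float(x0), float(y0), 0.0
    while x > 0.0 and y > 0.0 and t < t_max:
        # simultaneous update: both derivatives use the old state
        x, y = x - b * y * dt, y - a * x * dt
        t += dt
    return max(x, 0.0), max(y, 0.0)

# Square law: the larger force can win even against higher quality.
# Here X has 100 units vs. Y's 70, but Y's units are 1.5x as effective.
x_left, y_left = lanchester_square(100, 70, a=1.0, b=1.5)
```

The square-law invariant a·x² − b·y² (here 10000 − 7350 > 0) predicts the numerically superior side survives, which the integration reproduces.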

  19. Process correlation analysis model for process improvement identification.

    Science.gov (United States)

    Choi, Su-jin; Kim, Dae-Kyoo; Park, Sooyong

    2014-01-01

    Software process improvement aims at improving the development process of software systems. It is initiated by process assessment identifying strengths and weaknesses and based on the findings, improvement plans are developed. In general, a process reference model (e.g., CMMI) is used throughout the process of software process improvement as the base. CMMI defines a set of process areas involved in software development and what to be carried out in process areas in terms of goals and practices. Process areas and their elements (goals and practices) are often correlated due to the iterative nature of software development process. However, in the current practice, correlations of process elements are often overlooked in the development of an improvement plan, which diminishes the efficiency of the plan. This is mainly attributed to significant efforts and the lack of required expertise. In this paper, we present a process correlation analysis model that helps identify correlations of process elements from the results of process assessment. This model is defined based on CMMI and empirical data of improvement practices. We evaluate the model using industrial data.

  20. A Simple Model of Self-Assessments

    NARCIS (Netherlands)

    S. Dominguez Martinez (Silvia); O.H. Swank (Otto)

    2006-01-01

    textabstractWe develop a simple model that describes individuals' self-assessments of their abilities. We assume that individuals learn about their abilities from appraisals of others and experience. Our model predicts that if communication is imperfect, then (i) appraisals of others tend to be too

  1. A prediction model for spontaneous regression of cervical intraepithelial neoplasia grade 2, based on simple clinical parameters.

    Science.gov (United States)

    Koeneman, Margot M; van Lint, Freyja H M; van Kuijk, Sander M J; Smits, Luc J M; Kooreman, Loes F S; Kruitwagen, Roy F P M; Kruse, Arnold J

    2017-01-01

    This study aims to develop a prediction model for spontaneous regression of cervical intraepithelial neoplasia grade 2 (CIN 2) lesions based on simple clinicopathological parameters. The study was conducted at Maastricht University Medical Center, the Netherlands. The prediction model was developed in a retrospective cohort of 129 women with a histologic diagnosis of CIN 2 who were managed by watchful waiting for 6 to 24 months. Five potential predictors of spontaneous regression were selected based on the literature and expert opinion and were analyzed in a multivariable logistic regression model, followed by backward stepwise deletion based on the Wald test. The prediction model was internally validated by the bootstrapping method. Discriminative capacity and accuracy were tested by assessing the area under the receiver operating characteristic curve (AUC) and a calibration plot. Disease regression within 24 months was seen in 91 (71%) of 129 patients. A prediction model was developed including the following variables, which were predictive of disease regression: smoking, Papanicolaou test outcome before the CIN 2 diagnosis, concomitant CIN 1 diagnosis in the same biopsy, and more than 1 biopsy containing CIN 2. The AUC was 69.2% (95% confidence interval, 58.5%-79.9%), indicating a moderate discriminative ability of the model. The calibration plot indicated good calibration of the predicted probabilities. This prediction model for spontaneous regression of CIN 2 may aid physicians in the personalized management of these lesions. Copyright © 2016 Elsevier Inc. All rights reserved.

  2. A simple model of self-assessment

    NARCIS (Netherlands)

    Dominguez-Martinez, S.; Swank, O.H.

    2009-01-01

    We develop a simple model that describes individuals' self-assessments of their abilities. We assume that individuals learn about their abilities from appraisals of others and experience. Our model predicts that if communication is imperfect, then (i) appraisals of others tend to be too positive and

  3. Intruder states in the cadmium isotopes and a simple schematic calculation

    International Nuclear Information System (INIS)

    Aprahamian, A.; Brenner, D.S.; Casten, R.F.; Heyde, K.

    1984-01-01

    Angular correlation studies of ¹¹⁸,¹²⁰Cd at TRISTAN have allowed the discovery and identification in each nucleus of two new 0⁺ states and their respective E2 decay properties. The results have been interpreted in terms of a simple schematic model based on the mixing of normal vibration-like and intruder rotation-like 2p-4h configurations

  4. A simple tentative model of the losses caused by the Senegalese grasshopper, Oedaleus senegalensis (Krauss 1877) to millet in the Sahel

    DEFF Research Database (Denmark)

    Bal, Amadou Bocar; Ouambama, Zakaria; Dieng, Ibnou

    2015-01-01

    Oedaleus senegalensis is a serious pest of millet in the Sahel, but its relationship to crop loss remains largely unknown. Therefore, the relationship between densities of O. senegalensis, defoliation level and millet yields was investigated in Niger in 2008, and a simple model to forecast the yield…

  5. Simple Model for Simulating Characteristics of River Flow Velocity in Large Scale

    Directory of Open Access Journals (Sweden)

    Husin Alatas

    2015-01-01

    Full Text Available We propose a simple computer-based phenomenological model to simulate the characteristics of river flow velocity on a large scale. We use a Shuttle Radar Topography Mission based digital elevation model in grid form to define the terrain of the catchment area. The model relies on the mass-momentum conservation law and a modified equation of motion of a body falling along an inclined plane. We assume that an inelastic collision occurs at every junction of two river branches to describe the dynamics of the merged flow velocity.
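The inelastic-collision assumption at junctions can be sketched as momentum conservation of the merging flows, here reading the discharges as the "masses"; this is an illustrative reading, not the paper's exact formulation:

```python
def merged_velocity(q1, v1, q2, v2):
    """Inelastic-collision merge at a river junction: the combined
    flow keeps the total momentum, v = (q1*v1 + q2*v2) / (q1 + q2),
    with discharges q1, q2 playing the role of the masses."""
    return (q1 * v1 + q2 * v2) / (q1 + q2)

# Hypothetical main stem (30 m^3/s at 1.2 m/s) joined by a slower
# tributary (10 m^3/s at 0.8 m/s)
v = merged_velocity(30.0, 1.2, 10.0, 0.8)
```

As with any inelastic collision, the merged velocity lies between the two incoming velocities, weighted toward the larger flow.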

  6. A modeling paradigm for interdisciplinary water resources modeling: Simple Script Wrappers (SSW)

    Science.gov (United States)

    Steward, David R.; Bulatewicz, Tom; Aistrup, Joseph A.; Andresen, Daniel; Bernard, Eric A.; Kulcsar, Laszlo; Peterson, Jeffrey M.; Staggenborg, Scott A.; Welch, Stephen M.

    2014-05-01

    Holistic understanding of a water resources system requires tools capable of model integration. This team has developed an adaptation of the OpenMI (Open Modelling Interface) that allows easy interactions across the data passed between models. Capabilities have been developed to allow programs written in common languages such as matlab, python and scilab to share their data with other programs and accept other program's data. We call this interface the Simple Script Wrapper (SSW). An implementation of SSW is shown that integrates groundwater, economic, and agricultural models in the High Plains region of Kansas. Output from these models illustrates the interdisciplinary discovery facilitated through use of SSW implemented models. Reference: Bulatewicz, T., A. Allen, J.M. Peterson, S. Staggenborg, S.M. Welch, and D.R. Steward, The Simple Script Wrapper for OpenMI: Enabling interdisciplinary modeling studies, Environmental Modelling & Software, 39, 283-294, 2013. http://dx.doi.org/10.1016/j.envsoft.2012.07.006 http://code.google.com/p/simple-script-wrapper/

  7. A simple model for cell type recognition using 2D-correlation analysis of FTIR images from breast cancer tissue

    Science.gov (United States)

    Ali, Mohamed H.; Rakib, Fazle; Al-Saad, Khalid; Al-Saady, Rafif; Lyng, Fiona M.; Goormaghtigh, Erik

    2018-07-01

    Breast cancer is the second most common cancer after lung cancer. So far, in clinical practice, most cancer parameters originating from histopathology rely on the visualization by a pathologist of microscopic structures observed in stained tissue sections, including immunohistochemistry markers. Fourier transform infrared (FTIR) spectroscopy provides a biochemical fingerprint of a biopsy sample and, together with advanced data analysis techniques, can accurately classify cell types. Yet, one of the challenges when dealing with FTIR imaging is the slow recording of the data: a one-cm² tissue section requires several hours of image recording. We show in the present paper that 2D covariance analysis singles out only a few wavenumbers at which both variance and covariance are large. Simple models could be built using 4 wavenumbers to identify the 4 main cell types present in breast cancer tissue sections. Decision trees provide particularly simple models for discriminating between the 4 cell types. The robustness of these simple decision-tree models was challenged with FTIR spectral data obtained under different recording conditions. One test set was recorded by transflection on tissue sections in the presence of paraffin, while the training set was obtained on dewaxed tissue sections by transmission. Furthermore, the test set was collected with a different brand of FTIR microscope and a different pixel size. Despite the different recording conditions, separating extracellular matrix (ECM) from carcinoma spectra was 100% successful, underlining the robustness of this univariate model and the utility of covariance analysis for revealing efficient wavenumbers. We suggest that 2D covariance maps using the full spectral range could be most useful for selecting the interesting wavenumbers and achieving very fast data acquisition on quantum cascade laser infrared imaging microscopes.
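A hand-built decision tree over a handful of band intensities illustrates how simple such few-wavenumber models can be; the band names, thresholds, and class labels below are invented placeholders, not the paper's selected wavenumbers:

```python
def classify_pixel(spectrum, ecm_cut=0.40, carcinoma_cut=0.55, lymph_cut=0.30):
    """Toy decision tree on per-pixel band intensities. All band
    names and thresholds are hypothetical; a real model would use
    the wavenumbers singled out by the 2D covariance analysis."""
    if spectrum["band_a"] > ecm_cut:        # e.g. a collagen-associated band
        return "ECM"
    if spectrum["band_b"] > carcinoma_cut:  # e.g. an amide-associated band
        return "carcinoma"
    if spectrum["band_c"] > lymph_cut:
        return "lymphocyte"
    return "other"

label_ecm = classify_pixel({"band_a": 0.6, "band_b": 0.2, "band_c": 0.1})
label_car = classify_pixel({"band_a": 0.2, "band_b": 0.7, "band_c": 0.1})
```

A tree of this depth needs only a few intensity readings per pixel, which is what makes fast acquisition at a handful of discrete wavenumbers attractive.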

  8. Simple interphase drag model for numerical two-fluid modeling of two-phase flow systems

    International Nuclear Information System (INIS)

    Chow, H.; Ransom, V.H.

    1984-01-01

    The interphase drag model that has been developed for RELAP5/MOD2 is based on a simple formulation having flow regime maps for both horizontal and vertical flows. The model is based on a conventional semi-empirical formulation that includes the product of drag coefficient, interfacial area, and relative dynamic pressure. The interphase drag model is implemented in the RELAP5/MOD2 light water reactor transient analysis code and has been used to simulate a variety of separate effects experiments to assess the model accuracy. The results from three of these simulations, the General Electric Company small vessel blowdown experiment, Dukler and Smith's counter-current flow experiment, and a Westinghouse Electric Company FLECHT-SEASET forced reflood experiment, are presented and discussed

  9. The Development of Web-based Graphical User Interface for Unified Modeling Data with Multi (Correlated) Responses

    Science.gov (United States)

    Made Tirta, I.; Anggraeni, Dian

    2018-04-01

    Statistical models have developed rapidly in various directions to accommodate various types of data. Data collected from longitudinal, repeated-measures, or clustered designs (whether continuous, binary, count, or ordinal) are likely to be correlated. Therefore statistical models for independent responses, such as the Generalized Linear Model (GLM) and Generalized Additive Model (GAM), are not appropriate. Several models are available for correlated responses, including GEEs (Generalized Estimating Equations) for marginal models and various mixed-effect models such as GLMM (Generalized Linear Mixed Models) and HGLM (Hierarchical Generalized Linear Models) for subject-specific models. These models are available in the free open-source software R, but they can only be accessed through a command-line interface (using scripts). On the other hand, most practical researchers rely heavily on menu-based Graphical User Interfaces (GUI). We develop, using the Shiny framework, a standard pull-down-menu Web-GUI that unifies most models for correlated responses. The Web-GUI accommodates almost all needed features. It enables users to perform and compare various models for repeated-measures data (GEE, GLMM, HGLM, GEE for nominal responses) much more easily through online menus. This paper discusses the features of the Web-GUI and illustrates their use. In general, we find that GEE, GLMM and HGLM gave very close results.

  10. 3D CFD computations of transitional flows using DES and a correlation based transition model; Wind turbines

    Energy Technology Data Exchange (ETDEWEB)

    Soerensen, Niels N.

    2009-07-15

    The report describes the application of the correlation-based transition model of Menter et al. [1, 2] to the cylinder drag crisis and the stalled flow over a DU-96-W-351 airfoil using the DES methodology. When predicting the flow over airfoils and rotors, the laminar-turbulent transition process can be important for the aerodynamic performance. Today, the most widespread approach is to use fully turbulent computations, where the transitional process is ignored and the entire boundary layer on the wings or airfoils is handled by the turbulence model. The correlation-based transition model has lately shown promising results, and the present paper describes the application of the model to predict the drag and shedding frequency for flow around a cylinder from sub- to super-critical Reynolds numbers. Additionally, the model is applied to the flow around the DU-96 airfoil at high angles of attack. (au)

  11. Complex Coronary Hemodynamics - Simple Analog Modelling as an Educational Tool.

    Science.gov (United States)

    Parikh, Gaurav R; Peter, Elvis; Kakouros, Nikolaos

    2017-01-01

    Invasive coronary angiography remains the cornerstone for evaluation of coronary stenoses, despite a poor correlation between luminal loss assessment by coronary luminography and myocardial ischemia. This is especially true for coronary lesions deemed moderate by visual assessment. Coronary pressure-derived fractional flow reserve (FFR) has emerged as the gold standard for evaluating the hemodynamic significance of coronary artery stenosis; it is cost-effective and leads to improved patient outcomes. There are, however, several limitations to the use of FFR, including the evaluation of serial stenoses. In this article, we discuss the electronic-hydraulic analogy and the utility of simple electrical modelling to mimic the coronary circulation and coronary stenoses. We exemplify the effect of tandem coronary lesions on the FFR by modelling a patient with sequential disease segments and complex anatomy. We believe that such computational modelling can serve as a powerful educational tool to help clinicians better understand the complexity of coronary hemodynamics and improve patient care.
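The electronic-hydraulic analogy in the article can be sketched with Ohm's law: pressure plays the role of voltage and flow the role of current, so tandem stenoses act like resistors in series and their pressure drops add. The numbers below are illustrative, and a real FFR measurement involves hyperemic flow and nonlinear stenosis resistance:

```python
def ffr_series(p_aortic, flow, resistances):
    """Series-resistor sketch of tandem stenoses: distal pressure is
    the aortic pressure minus the sum of the 'IR' drops, and
    FFR = Pd / Pa. Units are arbitrary but consistent."""
    p_distal = p_aortic - flow * sum(resistances)
    return p_distal / p_aortic

# Two hypothetical tandem lesions; individually mild drops combine
# into a hemodynamically significant total
ffr = ffr_series(100.0, 4.0, [3.0, 2.5])
```

The additive-drop picture also hints at the clinical difficulty with serial stenoses: the pressure gradient measured across one lesion depends on the flow limitation imposed by the other.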

  12. Quenched Large Deviations for Simple Random Walks on Percolation Clusters Including Long-Range Correlations

    Science.gov (United States)

    Berger, Noam; Mukherjee, Chiranjib; Okamura, Kazuki

    2018-03-01

    We prove a quenched large deviation principle (LDP) for a simple random walk on a supercritical percolation cluster (SRWPC) on {Z^d} ({d ≥ 2}). The models under interest include classical Bernoulli bond and site percolation as well as models that exhibit long range correlations, like the random cluster model, the random interlacement and the vacant set of random interlacements (for {d ≥ 3}) and the level sets of the Gaussian free field ({d≥ 3}). Inspired by the methods developed by Kosygina et al. (Commun Pure Appl Math 59:1489-1521, 2006) for proving quenched LDP for elliptic diffusions with a random drift, and by Yilmaz (Commun Pure Appl Math 62(8):1033-1075, 2009) and Rosenbluth (Quenched large deviations for multidimensional random walks in a random environment: a variational formula. Ph.D. thesis, NYU, arXiv:0804.1444v1) for similar results regarding elliptic random walks in random environment, we take the point of view of the moving particle and prove a large deviation principle for the quenched distribution of the pair empirical measures of the environment Markov chain in the non-elliptic case of SRWPC. Via a contraction principle, this reduces easily to a quenched LDP for the distribution of the mean velocity of the random walk and both rate functions admit explicit variational formulas. The main difficulty in our set up lies in the inherent non-ellipticity as well as the lack of translation-invariance stemming from conditioning on the fact that the origin belongs to the infinite cluster. We develop a unifying approach for proving quenched large deviations for SRWPC based on exploiting coercivity properties of the relative entropies in the context of convex variational analysis, combined with input from ergodic theory and invoking geometric properties of the supercritical percolation cluster.

  13. Simple sorting algorithm test based on CUDA

    OpenAIRE

    Meng, Hongyu; Guo, Fangjin

    2015-01-01

    With the development of computing technology, CUDA has become a very important tool. In computer programming, sorting algorithms are widely used. There are many simple sorting algorithms such as enumeration sort, bubble sort and merge sort. In this paper, we test some simple sorting algorithms based on CUDA and draw some useful conclusions.
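    Two of the algorithms named in the abstract are easy to state in a few lines each. The sketch below is a plain-Python CPU reference (no CUDA); enumeration (rank) sort is included because its ranks are computed independently per element, which is the property that maps naturally onto one GPU thread per element:

```python
def bubble_sort(a):
    """Repeatedly swap adjacent out-of-order pairs; stop early once a
    full pass makes no swap."""
    a = list(a)
    for i in range(len(a) - 1):
        swapped = False
        for j in range(len(a) - 1 - i):
            if a[j] > a[j + 1]:
                a[j], a[j + 1] = a[j + 1], a[j]
                swapped = True
        if not swapped:
            break
    return a

def enumeration_sort(a):
    """Rank sort: each element's final index is the number of keys that
    must precede it (smaller, or equal but appearing earlier). Each rank
    is independent of the others, so this parallelizes trivially."""
    out = [None] * len(a)
    for i, x in enumerate(a):
        rank = sum(1 for j, y in enumerate(a) if y < x or (y == x and j < i))
        out[rank] = x
    return out
```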

  14. Is there any correlation between model-based perfusion parameters and model-free parameters of time-signal intensity curve on dynamic contrast enhanced MRI in breast cancer patients?

    Energy Technology Data Exchange (ETDEWEB)

    Yi, Boram; Kang, Doo Kyoung; Kim, Tae Hee [Ajou University School of Medicine, Department of Radiology, Suwon, Gyeonggi-do (Korea, Republic of); Yoon, Dukyong [Ajou University School of Medicine, Department of Biomedical Informatics, Suwon (Korea, Republic of); Jung, Yong Sik; Kim, Ku Sang [Ajou University School of Medicine, Department of Surgery, Suwon (Korea, Republic of); Yim, Hyunee [Ajou University School of Medicine, Department of Pathology, Suwon (Korea, Republic of)

    2014-05-15

    To determine whether there is any correlation between dynamic contrast-enhanced (DCE) model-based parameters and model-free parameters, and to evaluate correlations between perfusion parameters and histologic prognostic factors. Model-based parameters (Ktrans, Kep and Ve) of 102 invasive ductal carcinomas were obtained using DCE-MRI and post-processing software. Correlations between model-based and model-free parameters and between perfusion parameters and histologic prognostic factors were analysed. Mean Kep was significantly higher in cancers showing initial rapid enhancement (P = 0.002) and a delayed washout pattern (P = 0.001). Ve was significantly lower in cancers showing a delayed washout pattern (P = 0.015). Kep significantly correlated with time to peak enhancement (TTP) (ρ = -0.33, P < 0.001) and washout slope (ρ = 0.39, P = 0.002). Ve was significantly correlated with TTP (ρ = 0.33, P = 0.002). Mean Kep was higher in tumours with high nuclear grade (P = 0.017). Mean Ve was lower in tumours with high histologic grade (P = 0.005) and in tumours with negative oestrogen receptor status (P = 0.047). TTP was shorter in tumours with negative oestrogen receptor status (P = 0.037). We could acquire general information about tumour vascular physiology, interstitial space volume and pathologic prognostic factors by analysing the time-signal intensity curve, without the complicated acquisition process required for the model-based parameters. (orig.)

  15. Some practical considerations in finite element-based digital image correlation

    KAUST Repository

    Wang, Bo

    2015-04-20

    As an alternative to subset-based digital image correlation (DIC), the finite element-based (FE-based) DIC method has gained increasing attention in the experimental mechanics community. However, a literature survey reveals that some important issues have not been well addressed in the published literature. This work therefore points out a few important considerations in the practical implementation of the FE-based DIC method, along with simple but effective solutions that can tackle these issues. First, to better accommodate the intensity variations of the deformed images that occur in real experiments, a robust zero-mean normalized sum of squared difference criterion, instead of the commonly used sum of squared difference criterion, is introduced to quantify the similarity between reference and deformed elements in FE-based DIC. Second, to reduce the bias error induced by image noise and imperfect intensity interpolation, low-pass filtering of the speckle images with a 5×5-pixel Gaussian filter prior to correlation analysis is presented. Third, to ensure that the iterative calculation of FE-based DIC converges correctly and rapidly, an efficient subset-based DIC method, instead of simple integer-pixel displacement searching, is used to provide an accurate initial guess of deformation for each calculation point. Also, the effects of various convergence criteria on the efficiency and accuracy of FE-based DIC are carefully examined, and a proper convergence criterion is recommended. The efficacy of these solutions is verified by numerical and real experiments. The results reveal that the improved FE-based DIC offers evident advantages over the existing FE-based DIC method in terms of accuracy and efficiency. © 2015 Elsevier Ltd. All rights reserved.
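    The ZNSSD criterion recommended above has a compact closed form. A numpy sketch over raw subset intensities (two same-sized 2-D arrays, no shape-function warping) could be:

```python
import numpy as np

def znssd(f, g):
    """Zero-mean normalized sum of squared differences between a
    reference subset f and a deformed subset g. Invariant to affine
    intensity changes g -> a*g + b (a > 0); 0 means a perfect match."""
    fz = f - f.mean()
    gz = g - g.mean()
    fz = fz / np.sqrt((fz ** 2).sum())
    gz = gz / np.sqrt((gz ** 2).sum())
    return ((fz - gz) ** 2).sum()
```

The invariance to scale-and-offset intensity changes is exactly why ZNSSD is robust to the illumination variations the abstract mentions, where plain SSD is not.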

  16. Modified Regression Correlation Coefficient for Poisson Regression Model

    Science.gov (United States)

    Kaengthong, Nattacha; Domthong, Uthumporn

    2017-09-01

    This study considers indicators of the predictive power of the Generalized Linear Model (GLM), which are widely used but often subject to restrictions. We are interested in the regression correlation coefficient for a Poisson regression model. This is a measure of predictive power defined by the relationship between the dependent variable (Y) and the expected value of the dependent variable given the independent variables [E(Y|X)] for the Poisson regression model, where the dependent variable is Poisson distributed. The purpose of this research was to modify the regression correlation coefficient for the Poisson regression model. We also compare the proposed modified regression correlation coefficient with the traditional regression correlation coefficient in the case of two or more independent variables with multicollinearity among them. The results show that the proposed regression correlation coefficient outperforms the traditional one in terms of bias and root mean square error (RMSE).
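    The traditional quantity being modified here, the correlation between Y and the fitted E(Y|X), is straightforward to compute once a Poisson regression is fitted. A minimal numpy sketch (plain Newton-Raphson fit on synthetic data; this illustrates the baseline coefficient, not the authors' proposed modification):

```python
import numpy as np

def fit_poisson(X, y, iters=25):
    """Poisson regression with log link, fitted by Newton-Raphson (IRLS).
    X must already include an intercept column."""
    beta = np.zeros(X.shape[1])
    for _ in range(iters):
        mu = np.exp(X @ beta)              # E(Y|X) under current fit
        grad = X.T @ (y - mu)              # score
        hess = X.T @ (X * mu[:, None])     # Fisher information
        beta = beta + np.linalg.solve(hess, grad)
    return beta

def regression_correlation(X, y, beta):
    """Pearson correlation between Y and the fitted E(Y|X)."""
    mu = np.exp(X @ beta)
    return np.corrcoef(y, mu)[0, 1]

# synthetic Poisson data with one covariate
rng = np.random.default_rng(0)
n = 500
x = rng.normal(size=n)
X = np.column_stack([np.ones(n), x])
y = rng.poisson(np.exp(0.5 + 0.8 * x))
beta = fit_poisson(X, y)
r = regression_correlation(X, y, beta)
```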

  17. Simple models for the simulation of submarine melt for a Greenland glacial system model

    Science.gov (United States)

    Beckmann, Johanna; Perrette, Mahé; Ganopolski, Andrey

    2018-01-01

    Two hundred marine-terminating Greenland outlet glaciers deliver more than half of the annually accumulated ice into the ocean and have played an important role in the Greenland ice sheet mass loss observed since the mid-1990s. Submarine melt may play a crucial role in the mass balance and grounding-line position of these outlet glaciers. As the ocean warms, submarine melt is expected to increase, potentially driving outlet glacier retreat and contributing to sea level rise. Projections of the future contribution of outlet glaciers to sea level rise are hampered by the necessity of using models with extremely high resolution, of the order of a few hundred meters. This requirement holds not only when modeling outlet glaciers as stand-alone models but also when coupling them with high-resolution 3-D ocean models. In addition, fjord bathymetry data are mostly missing or inaccurate (errors of several hundreds of meters), which calls into question the benefit of using computationally expensive 3-D models for future predictions. Here we propose an alternative approach built on a computationally efficient simple model of submarine melt based on turbulent plume theory. We show that such a simple model is in reasonable agreement with several available modeling studies. We performed a suite of experiments to analyze the sensitivity of these simple models to model parameters and climate characteristics. We found that the computationally cheap plume model demonstrates qualitatively similar behavior to 3-D general circulation models. To match the results of the 3-D models quantitatively, a scaling factor of the order of 1 is needed for the plume models. We applied this approach to model submarine melt for six representative Greenland glaciers and found that the application of a line plume can produce submarine melt compatible with observational data. Our results show that the line plume model is more appropriate than the cone plume model for simulating
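    The kind of plume-theory scaling underlying such simple melt models can be illustrated in a few lines. Everything numeric here is an assumption for illustration only: the constant K and the exponents alpha and beta are placeholders, not the paper's calibrated values or scaling factor:

```python
def submarine_melt_line_plume(q, thermal_forcing, K=3e-4, alpha=1/3, beta=1.0):
    """Illustrative line-plume melt-rate scaling:
        melt ~ K * q**alpha * TF**beta
    with q the subglacial discharge per unit grounding-line width and TF
    the ocean thermal forcing (temperature above the local freezing
    point). K, alpha and beta are placeholder values chosen only to show
    the qualitative behaviour: melt grows sublinearly with discharge and
    roughly linearly with thermal forcing."""
    return K * q ** alpha * thermal_forcing ** beta
```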

  18. QSAR modelling using combined simple competitive learning networks and RBF neural networks.

    Science.gov (United States)

    Sheikhpour, R; Sarram, M A; Rezaeian, M; Sheikhpour, E

    2018-04-01

    The aim of this study was to propose a QSAR modelling approach based on the combination of simple competitive learning (SCL) networks with radial basis function (RBF) neural networks for predicting the biological activity of chemical compounds. The proposed QSAR method consisted of two phases. In the first phase, an SCL network was applied to determine the centres of an RBF neural network. In the second phase, the RBF neural network was used to predict the biological activity of various phenols and Rho kinase (ROCK) inhibitors. The predictive ability of the proposed QSAR models was evaluated and compared with other QSAR models using external validation. The results of this study showed that the proposed QSAR modelling approach leads to better performance than the other models in predicting the biological activity of chemical compounds. This indicated the efficiency of simple competitive learning networks in determining the centres of RBF neural networks.
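    The two-phase scheme can be sketched compactly: winner-take-all competitive learning to place the centres, then a least-squares output layer on Gaussian basis functions. The widths, learning rates and sizes below are illustrative choices, not the parameters used in the study:

```python
import numpy as np

def scl_centres(X, n_centres, epochs=20, lr=0.1, seed=0):
    """Phase 1, simple competitive learning: for each sample, move the
    winning (closest) centre a small step toward it (winner-take-all,
    no neighbourhood function)."""
    rng = np.random.default_rng(seed)
    C = X[rng.choice(len(X), n_centres, replace=False)].astype(float)
    for _ in range(epochs):
        for x in X[rng.permutation(len(X))]:
            w = np.argmin(np.sum((C - x) ** 2, axis=1))
            C[w] += lr * (x - C[w])
    return C

def rbf_design(X, C, sigma=1.0):
    """Gaussian RBF activations of all samples at all centres."""
    return np.exp(-np.sum((X[:, None, :] - C[None]) ** 2, axis=2)
                  / (2 * sigma ** 2))

def rbf_fit(X, y, C, sigma=1.0):
    """Phase 2: output weights by linear least squares."""
    w, *_ = np.linalg.lstsq(rbf_design(X, C, sigma), y, rcond=None)
    return w

def rbf_predict(X, C, w, sigma=1.0):
    return rbf_design(X, C, sigma) @ w

# toy regression target standing in for a QSAR activity
X = np.linspace(0.0, 2.0 * np.pi, 100)[:, None]
y = np.sin(X[:, 0])
C = scl_centres(X, 10)
w = rbf_fit(X, y, C)
pred = rbf_predict(X, C, w)
```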

  19. Correlation of recent fission product release data

    International Nuclear Information System (INIS)

    Kress, T.S.; Lorenz, R.A.; Nakamura, T.; Osborne, M.F.

    1989-01-01

    For the calculation of source terms associated with severe accidents, it is necessary to model the release of fission products from fuel as it heats and melts. Perhaps the most definitive model for fission product release is that of the FASTGRASS computer code developed at Argonne National Laboratory. There is persuasive evidence that these processes, as well as additional chemical and gas phase mass transport processes, are important in the release of fission products from fuel. Nevertheless, it has been found convenient to have simplified fission product release correlations that may not be as definitive as models like FASTGRASS but which attempt in some simple way to capture the essence of the mechanisms. One of the most widely used of these correlations is CORSOR-M, the present fission product/aerosol release model used in the NRC Source Term Code Package. CORSOR has been criticized as having too much uncertainty in the calculated releases and as not accurately reproducing some experimental data. It is currently believed that these discrepancies between CORSOR and the more recent data arose because the more recent data have better time resolution than the data base that went into the CORSOR correlation. This document discusses a simple correlational model for use in connection with NUREG risk uncertainty exercises. 8 refs., 4 figs., 1 tab
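    The CORSOR-M functional form is simple enough to sketch: the fractional release rate is an Arrhenius coefficient in fuel temperature. The sketch below uses that form only; the coefficients k0 and Q are placeholders for illustration, not an endorsed CORSOR-M fit for any particular fission product:

```python
import math

def corsor_m_release(T_kelvin, minutes, k0=2.0e5, Q=63.8):
    """CORSOR-M-style fractional release: the release-rate coefficient is
        k = k0 * exp(-Q / (R * T))   [per minute],
    so the fraction released after t minutes of isothermal heating is
    1 - exp(-k * t). k0 and Q (kcal/mol) are illustrative placeholders."""
    R = 1.987e-3  # gas constant, kcal/(mol*K)
    k = k0 * math.exp(-Q / (R * T_kelvin))
    return 1.0 - math.exp(-k * minutes)
```

The exponential temperature dependence is what makes release fractions so sensitive to fuel heat-up history, and hence to the time resolution of the underlying data.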

  20. A Simple Plasma Retinol Isotope Ratio Method for Estimating β-Carotene Relative Bioefficacy in Humans: Validation with the Use of Model-Based Compartmental Analysis.

    Science.gov (United States)

    Ford, Jennifer Lynn; Green, Joanne Balmer; Lietz, Georg; Oxley, Anthony; Green, Michael H

    2017-09-01

    Background: Provitamin A carotenoids are an important source of dietary vitamin A for many populations. Thus, accurate and simple methods for estimating carotenoid bioefficacy are needed to evaluate the vitamin A value of test solutions and plant sources. β-Carotene bioefficacy is often estimated from the ratio of the areas under plasma isotope response curves after subjects ingest labeled β-carotene and a labeled retinyl acetate reference dose [isotope reference method (IRM)], but to our knowledge, the method has not yet been evaluated for accuracy. Objectives: Our objectives were to develop and test a physiologically based compartmental model that includes both absorptive and postabsorptive β-carotene bioconversion and to use the model to evaluate the accuracy of the IRM and a simple plasma retinol isotope ratio [(RIR), labeled β-carotene-derived retinol/labeled reference-dose-derived retinol in one plasma sample] for estimating relative bioefficacy. Methods: We used model-based compartmental analysis (Simulation, Analysis and Modeling software) to develop and apply a model that provided known values for β-carotene bioefficacy. Theoretical data for 10 subjects were generated by the model and used to determine bioefficacy by RIR and IRM; predictions were compared with known values. We also applied RIR and IRM to previously published data. Results: Plasma RIR accurately predicted β-carotene relative bioefficacy at 14 d or later. IRM also accurately predicted bioefficacy by 14 d, except that, when there was substantial postabsorptive bioconversion, IRM underestimated bioefficacy. Based on our model, 1-d predictions of relative bioefficacy include absorptive plus a portion of early postabsorptive conversion. Conclusion: The plasma RIR is a simple tracer method that accurately predicts β-carotene relative bioefficacy based on analysis of one blood sample obtained at ≥14 d after co-ingestion of labeled β-carotene and retinyl acetate. The method also provides

  1. A simple stationary semi-analytical wake model

    DEFF Research Database (Denmark)

    Larsen, Gunner Chr.

    We present an idealized simple, but fast, semi-analytical algorithm for computation of stationary wind farm wind fields, with potential for use within a multi-fidelity strategy for wind farm topology optimization. Basically, the model considers wakes as linear perturbations on the ambient non-uniform mean wind field, although the modelling of the individual stationary wake flow fields includes non-linear terms. The simulation of the individual wake contributions is based on an analytical solution of the thin shear layer approximation of the NS equations. The wake flow fields are assumed... With each of these approaches, a parabolic system is described, which is initiated by first considering the most upwind located turbines and subsequently successively solved in the downstream direction. Algorithms for the resulting wind farm flow fields are proposed, and it is shown that in the limit...

  2. A simple operational gas release and swelling model. Pt. 1

    International Nuclear Information System (INIS)

    Wood, M.H.; Matthews, J.R.

    1980-01-01

    A new and simple model of fission gas release and swelling has been developed for oxide nuclear fuel under operational conditions. The model, which is to be incorporated into a fuel element behaviour code, is physically based and applicable to fuel at both thermal and fast reactor ratings. In this paper we present that part of the model describing the behaviour of intragranular gas: a future paper will detail the treatment of the grain boundary gas. The results of model calculations are compared with recent experimental observations of intragranular bubble concentrations and sizes, and gas release from fuel irradiated under isothermal conditions. Good agreement is found between experiment and theory. (orig.)

  3. A simple model of hysteresis behavior using spreadsheet analysis

    Science.gov (United States)

    Ehrmann, A.; Blachowicz, T.

    2015-01-01

    Hysteresis loops occur in many scientific and technical problems, especially as field dependent magnetization of ferromagnetic materials, but also as stress-strain-curves of materials measured by tensile tests including thermal effects, liquid-solid phase transitions, in cell biology or economics. While several mathematical models exist which aim to calculate hysteresis energies and other parameters, here we offer a simple model for a general hysteretic system, showing different hysteresis loops depending on the defined parameters. The calculation which is based on basic spreadsheet analysis plus an easy macro code can be used by students to understand how these systems work and how the parameters influence the reactions of the system on an external field. Importantly, in the step-by-step mode, each change of the system state, compared to the last step, becomes visible. The simple program can be developed further by several changes and additions, enabling the building of a tool which is capable of answering real physical questions in the broad field of magnetism as well as in other scientific areas, in which similar hysteresis loops occur.

  4. A simple model of hysteresis behavior using spreadsheet analysis

    International Nuclear Information System (INIS)

    Ehrmann, A; Blachowicz, T

    2015-01-01

    Hysteresis loops occur in many scientific and technical problems, especially as field dependent magnetization of ferromagnetic materials, but also as stress-strain-curves of materials measured by tensile tests including thermal effects, liquid-solid phase transitions, in cell biology or economics. While several mathematical models exist which aim to calculate hysteresis energies and other parameters, here we offer a simple model for a general hysteretic system, showing different hysteresis loops depending on the defined parameters. The calculation which is based on basic spreadsheet analysis plus an easy macro code can be used by students to understand how these systems work and how the parameters influence the reactions of the system on an external field. Importantly, in the step-by-step mode, each change of the system state, compared to the last step, becomes visible. The simple program can be developed further by several changes and additions, enabling the building of a tool which is capable of answering real physical questions in the broad field of magnetism as well as in other scientific areas, in which similar hysteresis loops occur
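    A hysteresis model of the kind described in the two records above fits in a few lines of code, one formula per sweep direction, evaluated step by step exactly as a spreadsheet would. The tanh branch shape and the values of the coercive field Hc, width w and saturation Ms below are illustrative choices, not the authors' spreadsheet formulas:

```python
import math

def hysteresis_branch(H, direction, Hc=1.0, w=0.5, Ms=1.0):
    """Toy hysteresis: magnetization follows a tanh curve shifted by the
    coercive field Hc, with the shift's sign set by the sweep direction
    (+1 for increasing field, -1 for decreasing)."""
    return Ms * math.tanh((H - direction * Hc) / w)

def sweep_loop(n=50, Hmax=3.0):
    """One full field cycle, computed point by point like spreadsheet
    rows: an ascending branch followed by a descending branch."""
    up = [hysteresis_branch(-Hmax + 2 * Hmax * i / (n - 1), +1)
          for i in range(n)]
    down = [hysteresis_branch(Hmax - 2 * Hmax * i / (n - 1), -1)
            for i in range(n)]
    return up, down
```

At H = 0 the two branches disagree (remanence), which is what opens the loop; varying Hc and w changes the loop's width and squareness, mimicking the parameter studies described in the abstract.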

  5. Fine reservoir structure modeling based upon 3D visualized stratigraphic correlation between horizontal wells: methodology and its application

    Science.gov (United States)

    Chenghua, Ou; Chaochun, Li; Siyuan, Huang; Sheng, James J.; Yuan, Xu

    2017-12-01

    As the platform-based horizontal well production mode has been widely applied in petroleum industry, building a reliable fine reservoir structure model by using horizontal well stratigraphic correlation has become very important. Horizontal wells usually extend between the upper and bottom boundaries of the target formation, with limited penetration points. Using these limited penetration points to conduct well deviation correction means the formation depth information obtained is not accurate, which makes it hard to build a fine structure model. In order to solve this problem, a method of fine reservoir structure modeling, based on 3D visualized stratigraphic correlation among horizontal wells, is proposed. This method can increase the accuracy when estimating the depth of the penetration points, and can also effectively predict the top and bottom interfaces in the horizontal penetrating section. Moreover, this method will greatly increase not only the number of points of depth data available, but also the accuracy of these data, which achieves the goal of building a reliable fine reservoir structure model by using the stratigraphic correlation among horizontal wells. Using this method, four 3D fine structure layer models have been successfully built of a specimen shale gas field with platform-based horizontal well production mode. The shale gas field is located to the east of Sichuan Basin, China; the successful application of the method has proven its feasibility and reliability.

  6. Implementation and testing of a simple data assimilation algorithm in the regional air pollution forecast model, DEOM

    Directory of Open Access Journals (Sweden)

    J. Frydendall

    2009-08-01

    A simple data assimilation algorithm based on statistical interpolation has been developed and coupled to a long-range chemistry transport model, the Danish Eulerian Operational Model (DEOM), applied for air pollution forecasting at the National Environmental Research Institute (NERI), Denmark. In this paper, the algorithm and the results from experiments designed to find the optimal setup of the algorithm are described. The algorithm has been developed and optimized via eight different experiments where the results from different model setups have been tested against measurements from the EMEP (European Monitoring and Evaluation Programme) network covering a half-year period, April–September 1999. The best performing setup of the data assimilation algorithm for surface ozone concentrations has been found, including the combination of determining the covariances using the Hollingsworth method, varying the correlation length according to the number of adjacent observation stations and applying the assimilation routine at three successive hours during the morning. Improvements in the correlation coefficient in the range of 0.1 to 0.21 between the results from the reference and the optimal configuration of the data assimilation algorithm were found. The data assimilation algorithm will in the future be used in the operational THOR integrated air pollution forecast system, which includes the DEOM.
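    The statistical-interpolation update at the heart of such an algorithm has a compact closed form. A minimal numpy sketch follows, on a toy 1-D grid with one observation and illustrative covariances (not the DEOM/Hollingsworth configuration):

```python
import numpy as np

def optimal_interpolation(xb, y, H, B, R):
    """One statistical-interpolation (optimal interpolation) update:
        xa = xb + K (y - H xb),   K = B H^T (H B H^T + R)^-1
    xb: background state, y: observations, H: observation operator,
    B: background error covariance, R: observation error covariance."""
    K = B @ H.T @ np.linalg.inv(H @ B @ H.T + R)
    return xb + K @ (y - H @ xb)

# toy setup: 3 grid points, one surface-ozone observation at the middle
# point, background errors correlated with an e-folding length of one
# grid spacing (all numbers illustrative)
xb = np.array([10.0, 10.0, 10.0])
y = np.array([14.0])
H = np.array([[0.0, 1.0, 0.0]])
d = np.abs(np.arange(3)[:, None] - np.arange(3)[None, :])
B = 4.0 * np.exp(-d)
R = np.array([[1.0]])
xa = optimal_interpolation(xb, y, H, B, R)
```

The correlation length in B controls how far the single observation's influence spreads to neighbouring grid points, which is exactly the quantity the experiments above vary with station density.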

  7. Climate stability and sensitivity in some simple conceptual models

    Energy Technology Data Exchange (ETDEWEB)

    Bates, J. Ray [University College Dublin, Meteorology and Climate Centre, School of Mathematical Sciences, Dublin (Ireland)

    2012-02-15

    A theoretical investigation of climate stability and sensitivity is carried out using three simple linearized models based on the top-of-the-atmosphere energy budget. The simplest is the zero-dimensional model (ZDM) commonly used as a conceptual basis for climate sensitivity and feedback studies. The others are two-zone models with tropics and extratropics of equal area; in the first of these (Model A), the dynamical heat transport (DHT) between the zones is implicit, in the second (Model B) it is explicitly parameterized. It is found that the stability and sensitivity properties of the ZDM and Model A are very similar, both depending only on the global-mean radiative response coefficient and the global-mean forcing. The corresponding properties of Model B are more complex, depending asymmetrically on the separate tropical and extratropical values of these quantities, as well as on the DHT coefficient. Adopting Model B as a benchmark, conditions are found under which the validity of the ZDM and Model A as climate sensitivity models holds. It is shown that parameter ranges of physical interest exist for which such validity may not hold. The 2 x CO{sub 2} sensitivities of the simple models are studied and compared. Possible implications of the results for sensitivities derived from GCMs and palaeoclimate data are suggested. Sensitivities for more general scenarios that include negative forcing in the tropics (due to aerosols, inadvertent or geoengineered) are also studied. Some unexpected outcomes are found in this case. These include the possibility of a negative global-mean temperature response to a positive global-mean forcing, and vice versa. (orig.)
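    A two-zone balance of the Model B type can be written down directly. The sketch below is a generic linearization with an explicit inter-zone dynamical heat transport coefficient D; the structure follows the description above, but the exact sign conventions and all parameter values are illustrative assumptions, not taken from the paper:

```python
import numpy as np

def two_zone_response(F1, F2, lam1, lam2, D):
    """Steady-state temperature responses of a two-zone energy-balance
    model with tropics (1) and extratropics (2) of equal area and an
    explicit dynamical heat transport D*(T1 - T2) between them:
        F1 = lam1*T1 + D*(T1 - T2)
        F2 = lam2*T2 - D*(T1 - T2)
    lam1, lam2 are the zonal radiative response coefficients."""
    A = np.array([[lam1 + D, -D],
                  [-D, lam2 + D]])
    return np.linalg.solve(A, np.array([F1, F2]))
```

With equal zonal response coefficients the global-mean response depends only on the global-mean forcing, as in the ZDM; asymmetric lam1, lam2 (e.g. negative tropical forcing from aerosols) is where the more complex Model B behaviour appears.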

  8. Correlation of Fukushima data with SSI models

    International Nuclear Information System (INIS)

    Miller, C.A.; Costantino, C.J.; Philippacopoulos, A.J.

    1985-01-01

    The seismic response of nuclear power plant structures is often calculated using lumped parameter methods. A finite element model of the structure is coupled to the soil with a spring-dashpot system used to represent the interaction process. The parameters of the interaction model are based on analytic solutions to simple problems which are idealizations of the actual problem. The objective of this work is to compare predicted response using the standard lumped parameter models with experimental data. These comparisons are shown to be good for fairly uniform soil systems. (orig.)

  9. Improved Analysis of Earth System Models and Observations using Simple Climate Models

    Science.gov (United States)

    Nadiga, B. T.; Urban, N. M.

    2016-12-01

    Earth system models (ESM) are the most comprehensive tools we have to study climate change and develop climate projections. However, the computational infrastructure required and the cost incurred in running such ESMs precludes direct use of such models in conjunction with a wide variety of tools that can further our understanding of climate. Here we are referring to tools that range from dynamical-systems methods that give insight into underlying flow structure and topology, through applied mathematical and statistical techniques central to quantifying stability, sensitivity, uncertainty and predictability, to machine learning tools that are now being rapidly developed or improved. Our approach to facilitate the use of such models is to analyze output of ESM experiments (cf. CMIP) using a range of simpler models that consider integral balances of important quantities such as mass and/or energy in a Bayesian framework. We highlight the use of this approach in the context of the uptake of heat by the world oceans in the ongoing global warming. Indeed, since in excess of 90% of the anomalous radiative forcing due to greenhouse gas emissions is sequestered in the world oceans, the nature of ocean heat uptake crucially determines the surface warming that is realized (cf. climate sensitivity). Nevertheless, ESMs themselves are never run long enough to directly assess climate sensitivity. So, we consider a range of models based on integral balances--balances that have to be realized in all first-principles based models of the climate system including the most detailed state-of-the-art climate simulations. The models range from simple models of energy balance to those that consider dynamically important ocean processes such as the conveyor-belt circulation (Meridional Overturning Circulation, MOC), North Atlantic Deep Water (NADW) formation, Antarctic Circumpolar Current (ACC) and eddy mixing. Results from Bayesian analysis of such models using

  10. A simple 2D biofilm model yields a variety of morphological features.

    Science.gov (United States)

    Hermanowicz, S W

    2001-01-01

    A two-dimensional biofilm model was developed based on the concept of cellular automata. Three simple, generic processes were included in the model: cell growth, internal and external mass transport and cell detachment (erosion). The model generated a diverse range of biofilm morphologies (from dense layers to open, mushroom-like forms) similar to those observed in real biofilm systems. Bulk nutrient concentration and external mass transfer resistance had a large influence on the biofilm structure.
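    A cellular-automaton biofilm of this flavour is easy to prototype. The sketch below keeps only two of the three processes (growth into empty neighbours and surface erosion; nutrient transport is omitted for brevity), and all probabilities are illustrative:

```python
import random

def biofilm_step(grid, p_grow=0.3, p_erode=0.1, rng=random):
    """One synchronous update of a toy 2D biofilm cellular automaton:
    occupied cells divide into a random empty von Neumann neighbour with
    probability p_grow; occupied cells exposed to an empty neighbour can
    be sheared off with probability p_erode. The bottom (substratum) row
    is never eroded."""
    H, W = len(grid), len(grid[0])
    new = [row[:] for row in grid]
    for i in range(H):
        for j in range(W):
            if grid[i][j]:
                empty = [(a, b)
                         for a, b in ((i-1, j), (i+1, j), (i, j-1), (i, j+1))
                         if 0 <= a < H and 0 <= b < W and not grid[a][b]]
                if empty and rng.random() < p_grow:
                    a, b = rng.choice(empty)
                    new[a][b] = 1
                if empty and i < H - 1 and rng.random() < p_erode:
                    new[i][j] = 0
    return new

# seed: a single cell layer on the substratum (bottom row), then iterate
grid = [[0] * 20 for _ in range(10)]
grid[-1] = [1] * 20
rng = random.Random(1)
for _ in range(30):
    grid = biofilm_step(grid, rng=rng)
biomass = sum(map(sum, grid))
```

Raising p_grow relative to p_erode produces dense compact layers, while strong erosion with limited growth leaves open, ragged fronts, qualitatively the morphology range the abstract describes.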

  11. A simple and efficient parallel FFT algorithm using the BSP model

    NARCIS (Netherlands)

    Bisseling, R.H.; Inda, M.A.

    2000-01-01

    In this paper we present a new parallel radix FFT algorithm based on the BSP model. Our parallel algorithm uses the group-cyclic distribution family, which makes it simple to understand and easy to implement. We show how to reduce the communication cost of the algorithm by a factor of three in the case
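    The numerical core being distributed in such an algorithm is the standard radix-2 recursion. A serial sketch (just the butterflies, with no BSP supersteps or group-cyclic data distribution) looks like this:

```python
import cmath

def fft(x):
    """Serial recursive radix-2 Cooley-Tukey FFT; len(x) must be a power
    of two. A BSP parallelization distributes exactly these butterfly
    stages over processors; this sketch shows only the recursion."""
    n = len(x)
    if n == 1:
        return list(x)
    even, odd = fft(x[0::2]), fft(x[1::2])
    out = [0j] * n
    for k in range(n // 2):
        t = cmath.exp(-2j * cmath.pi * k / n) * odd[k]  # twiddle factor
        out[k] = even[k] + t
        out[k + n // 2] = even[k] - t
    return out
```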

  12. Interpretation of photocurrent correlation measurements used for ultrafast photoconductive switch characterization

    DEFF Research Database (Denmark)

    Jacobsen, R. H.; Birkelund, Karen; Holst, T.

    1996-01-01

    Photocurrent correlation measurements used for the characterization of ultrafast photoconductive switches based on GaAs and silicon-on-sapphire are demonstrated. The correlation signal arises from the interplay of the photoexcited carriers, the dynamics of the bias field and a subsequent recharging of the switch. By using both photocurrent measurements and terahertz spectroscopy we verify the importance of space-charge effects on the carrier dynamics. Photocurrent nonlinearities and coherent effects are discussed as they appear in the correlation signals. An analysis based on a simple model allows...

  13. The attentional drift-diffusion model extends to simple purchasing decisions.

    Science.gov (United States)

    Krajbich, Ian; Lu, Dingchao; Camerer, Colin; Rangel, Antonio

    2012-01-01

    How do we make simple purchasing decisions (e.g., whether or not to buy a product at a given price)? Previous work has shown that the attentional drift-diffusion model (aDDM) can provide accurate quantitative descriptions of the psychometric data for binary and trinary value-based choices, and of how the choice process is guided by visual attention. Here we extend the aDDM to the case of purchasing decisions, and test it using an eye-tracking experiment. We find that the model also provides a reasonably accurate quantitative description of the relationship between choice, reaction time, and visual fixations using parameters that are very similar to those that best fit the previous data. The only critical difference is that the choice biases induced by the fixations are about half as big in purchasing decisions as in binary choices. This suggests that a similar computational process is used to make binary choices, trinary choices, and simple purchasing decisions.
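    The aDDM mechanism described above can be simulated directly: evidence drifts toward the currently fixated item's value, with the unfixated value discounted by a factor theta, until a decision bound is hit. The parameter values below (theta, drift scale, noise, fixation lengths) are illustrative, not the fitted ones from the eye-tracking data:

```python
import random

def addm_choice(v_left, v_right, theta=0.3, d=0.002, sigma=0.02,
                bound=1.0, rng=random):
    """Attentional drift-diffusion sketch: accumulate relative evidence E
    until it hits +bound (choose left) or -bound (choose right). While
    fixating an item, the other item's value is discounted by theta.
    Returns (choice, reaction time in steps)."""
    E = 0.0
    look_left = rng.random() < 0.5
    remaining = rng.randint(200, 400)   # fixation duration in steps
    t = 0
    while abs(E) < bound:
        drift = d * ((v_left - theta * v_right) if look_left
                     else (theta * v_left - v_right))
        E += drift + rng.gauss(0.0, sigma)
        t += 1
        remaining -= 1
        if remaining == 0:              # saccade to the other item
            look_left = not look_left
            remaining = rng.randint(200, 400)
    return ("left" if E > 0 else "right"), t
```

For a purchasing decision, "left" and "right" would stand for the product and the money (price) option; shrinking the fixation-induced bias corresponds to halving theta's effect, as the abstract reports.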

  14. A Simple Hybrid Model for Short-Term Load Forecasting

    Directory of Open Access Journals (Sweden)

    Suseelatha Annamareddi

    2013-01-01

    The paper proposes a simple hybrid model to forecast the electrical load data based on the wavelet transform technique and double exponential smoothing. The historical noisy load series data is decomposed into deterministic and fluctuation components using suitable wavelet coefficient thresholds and a wavelet reconstruction method. The variation characteristics of the resulting series are analyzed to arrive at reasonable thresholds that yield good denoising results. The constitutive series are then forecasted using appropriate exponential adaptive smoothing models. A case study performed on California energy market data demonstrates that the proposed method can offer high forecasting precision for very short-term forecasts, considering a time horizon of two weeks.
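    The second stage, double exponential smoothing of each constitutive series, has a standard (Holt) form. A minimal sketch that leaves out the wavelet decomposition stage, with illustrative smoothing constants:

```python
def holt_forecast(series, alpha=0.5, beta=0.3, horizon=1):
    """Double (Holt) exponential smoothing: maintain a level and a trend,
    then forecast h steps ahead as level + h*trend. series needs at
    least two points; alpha and beta are illustrative smoothing
    constants."""
    level = series[0]
    trend = series[1] - series[0]
    for x in series[1:]:
        prev_level = level
        level = alpha * x + (1 - alpha) * (level + trend)
        trend = beta * (level - prev_level) + (1 - beta) * trend
    return level + horizon * trend
```

On a perfectly linear series the level tracks the last value and the trend its slope, so the forecast extrapolates the line exactly; on noisy load data, alpha and beta trade responsiveness against smoothing.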

  15. Water nanoelectrolysis: A simple model

    Science.gov (United States)

    Olives, Juan; Hammadi, Zoubida; Morin, Roger; Lapena, Laurent

    2017-12-01

    A simple model of water nanoelectrolysis—defined as the nanolocalization at a single point of any electrolysis phenomenon—is presented. It is based on the electron tunneling assisted by the electric field through the thin film of water molecules (~0.3 nm thick) at the surface of a tip-shaped nanoelectrode (micrometric to nanometric curvature radius at the apex). By applying, e.g., an electric potential V1 during a finite time t1, and then the potential -V1 during the same time t1, we show that there are three distinct regions in the plane (t1, V1): one for the nanolocalization (at the apex of the nanoelectrode) of the electrolysis oxidation reaction, the second one for the nanolocalization of the reduction reaction, and the third one for the nanolocalization of the production of bubbles. These parameters t1 and V1 completely control the time at which the electrolysis reaction (of oxidation or reduction) begins, the duration of this reaction, the electrolysis current intensity (i.e., the tunneling current), the number of produced O2 or H2 molecules, and the radius of the nanolocalized bubbles. The model is in good agreement with our experiments.

  16. RELAP5/MOD2 models and correlations

    International Nuclear Information System (INIS)

    Dimenna, R.A.; Larson, J.R.; Johnson, R.W.; Larson, T.K.; Miller, C.S.; Streit, J.E.; Hanson, R.G.; Kiser, D.M.

    1988-08-01

    A review of the RELAP5/MOD2 computer code has been performed to assess the basis for the models and correlations comprising the code. The review has included verification of the original data base, including thermodynamic, thermal-hydraulic, and geothermal conditions; simplifying assumptions in implementation or application; and accuracy of implementation compared to documented descriptions of each of the models. An effort has been made to provide the reader with an understanding of what is in the code and why it is there and to provide enough information that an analyst can assess the impact of the correlation or model on the ability of the code to represent the physics of a reactor transient. Where assessment of the implemented versions of the models or correlations has been accomplished and published, the assessment results have been included

  17. Physics-based models for measurement correlations: application to an inverse Sturm–Liouville problem

    International Nuclear Information System (INIS)

    Bal, Guillaume; Ren Kui

    2009-01-01

    In many inverse problems, the measurement operator, which maps objects of interest to available measurements, is a smoothing (regularizing) operator. Its inverse is therefore unbounded and as a consequence, only the low-frequency component of the object of interest is accessible from inevitably noisy measurements. In many inverse problems however, the neglected high-frequency component may significantly affect the measured data. Using simple scaling arguments, we characterize the influence of the high-frequency component. We then consider situations where the correlation function of such an influence may be estimated by asymptotic expansions, for instance as a random corrector in homogenization theory. This allows us to consistently eliminate the high-frequency component and derive a closed form, more accurate, inverse problem for the low-frequency component of the object of interest. We present the asymptotic expression of the correlation matrix of the eigenvalues in a Sturm–Liouville problem with unknown potential. We propose an iterative algorithm for the reconstruction of the potential from knowledge of the eigenvalues and show that using the approximate correlation matrix significantly improves the reconstructions

  18. Simple unification

    International Nuclear Information System (INIS)

    Ponce, W.A.; Zepeda, A.

    1987-08-01

    We present the results obtained from our systematic search of a simple Lie group that unifies weak and electromagnetic interactions in a single truly unified theory. We work with fractionally charged quarks, and allow for particles and antiparticles to belong to the same irreducible representation. We found that models based on SU(6), SU(7), SU(8) and SU(10) are viable candidates for simple unification. (author). 23 refs

  19. On two-point boundary correlations in the six-vertex model with domain wall boundary conditions

    Science.gov (United States)

    Colomo, F.; Pronko, A. G.

    2005-05-01

    The six-vertex model with domain wall boundary conditions on an N × N square lattice is considered. The two-point correlation function describing the probability of having two vertices in a given state at opposite (top and bottom) boundaries of the lattice is calculated. It is shown that this two-point boundary correlator is expressible in a very simple way in terms of the one-point boundary correlators of the model on N × N and (N - 1) × (N - 1) lattices. In alternating sign matrix (ASM) language this result implies that the doubly refined x-enumerations of ASMs are just appropriate combinations of the singly refined ones.

  20. Chinese and world equity markets : A review of the volatilities and correlations in the first fifteen years

    NARCIS (Netherlands)

    Lin, Kuan Pin; Menkveld, Albert J.; Yang, Zhishu

    2009-01-01

    After more than 15 years of Chinese equity markets, we study how variance, covariance, and correlations have developed in these markets relative to world markets, based on the dynamic conditional correlation (DCC) model of Engle [Engle, R., 2002. A dynamic conditional correlation: A simple class of

  1. Pre-analysis techniques applied to area-based correlation aiming Digital Terrain Model generation

    Directory of Open Access Journals (Sweden)

    Maurício Galo

    2005-12-01

    Area-based matching is a useful procedure in some photogrammetric processes, and its results are of crucial importance in applications such as relative orientation, phototriangulation and Digital Terrain Model generation. The successful determination of correspondence depends on radiometric and geometric factors. Considering these aspects, the use of procedures that previously estimate the quality of the parameters to be computed is a relevant issue. This paper describes these procedures and shows that the quality prediction can be computed before performing matching by correlation, through the analysis of the reference window. This procedure can be incorporated in the correspondence process for Digital Terrain Model generation and phototriangulation. The proposed approach comprises the estimation of the variance matrix of the translations from the gray levels in the reference window and the reduction of the search space using the knowledge of the epipolar geometry. As a consequence, the correlation process becomes more reliable, avoiding the application of matching procedures in doubtful areas. Some experiments with simulated and real data are presented, evidencing the efficiency of the studied strategy.

  2. Occurrence and simulation of trihalomethanes in swimming pool water: A simple prediction method based on DOC and mass balance.

    Science.gov (United States)

    Peng, Di; Saravia, Florencia; Abbt-Braun, Gudrun; Horn, Harald

    2016-01-01

    Trihalomethanes (THM) are the most typical disinfection by-products (DBPs) found in public swimming pool water. DBPs are produced when organic and inorganic matter in water reacts with chemical disinfectants. The irregular contribution of substances from pool visitors and the long contact time with disinfectant make forecasting THM in pool water a challenge. In this work, the occurrence of THM in a public indoor swimming pool was investigated and correlated with the dissolved organic carbon (DOC). Daily sampling of pool water for 26 days showed a positive correlation between DOC and THM with a time delay of about two days, while THM and DOC did not directly correlate with the number of visitors. Based on these results and a mass balance of the pool water, a simple simulation model for estimating THM concentration in indoor swimming pool water was proposed. Formation of THM from DOC, volatilization into air and elimination by pool water treatment were included in the simulation. The THM formation ratio obtained from laboratory analysis of native pool water, together with information from a field study in an indoor swimming pool, reduced the uncertainty of the simulation. The simulation was validated by measurements in the swimming pool for 50 days. The simulated results were in good agreement with measured results. This work provides a useful and simple method for predicting THM concentration and its long-term accumulation trend in indoor swimming pool water. Copyright © 2015 Elsevier Ltd. All rights reserved.
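    The mass balance described in this abstract (formation from DOC, volatilization into air, removal by treatment) can be sketched as a one-compartment rate equation. All rate constants below are illustrative placeholders, not the calibrated values from the study.

```python
def simulate_thm(doc, k_form=0.05, k_vol=0.02, k_treat=0.03, c0=0.0, dt=1.0):
    """Euler-integrate dC/dt = k_form*DOC(t) - (k_vol + k_treat)*C.

    doc     : sequence of daily DOC concentrations (mg/L)
    k_form  : THM formation rate per unit DOC (1/day)  -- assumed value
    k_vol   : volatilization loss rate (1/day)         -- assumed value
    k_treat : treatment removal rate (1/day)           -- assumed value
    Returns the daily THM concentration series.
    """
    c, out = c0, []
    for d in doc:
        c += dt * (k_form * d - (k_vol + k_treat) * c)
        out.append(c)
    return out

# Constant DOC of 2 mg/L for 60 days: THM accumulates towards the
# steady state k_form*DOC/(k_vol + k_treat) = 2.0
thm = simulate_thm([2.0] * 60)
```

    With constant DOC the concentration rises monotonically towards its steady state, which mirrors the accumulation trend the paper aims to predict.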

  3. Simple physics-based models of compensatory plant water uptake: concepts and eco-hydrological consequences

    Directory of Open Access Journals (Sweden)

    N. J. Jarvis

    2011-11-01

    Many land surface schemes and simulation models of plant growth designed for practical use employ simple empirical sub-models of root water uptake that cannot adequately reflect the critical role that water uptake from sparsely rooted deep subsoil plays in meeting atmospheric transpiration demand in water-limited environments, especially in the presence of shallow groundwater. A failure to account for this so-called "compensatory" water uptake may have serious consequences for both local and global modeling of water and energy fluxes, carbon balances and climate. Some purely empirical compensatory root water uptake models have been proposed, but they are of limited use in global modeling exercises since their parameters cannot be related to measurable soil and vegetation properties. A parsimonious physics-based model of uptake compensation has been developed that requires no more parameters than empirical approaches. This model is described and some aspects of its behavior are illustrated with the help of example simulations. These analyses demonstrate that hydraulic lift can be considered as an extreme form of compensation and that the degree of compensation is principally a function of soil capillarity and the ratio of total effective root length to potential transpiration. Thus, uptake compensation increases as root to leaf area ratios increase, since potential transpiration depends on leaf area. Results of "scenario" simulations for two case studies, one at the local scale (riparian vegetation growing above shallow water tables in seasonally dry or arid climates) and one at a global scale (water balances across an aridity gradient in the continental USA), are presented to illustrate biases in model predictions that arise when water uptake compensation is neglected. In the first case, it is shown that only a compensated model can match the strong relationships between water table depth and leaf area and transpiration observed in riparian forest

  4. Simple model for the dynamics towards metastable states

    International Nuclear Information System (INIS)

    Meijer, P.H.E.; Keskin, M.; Bodegom, E.

    1986-01-01

    Circumstances under which a quenched system will freeze in a metastable state are studied in simple systems with long-range order. The model used is the time-dependent pair approximation, based on the most probable path (MPP) method. The time dependence of the solution is shown by means of flow diagrams. The fixed points and other features of the differential equations in time are independent of the choice of the rate constants. It is explained qualitatively how the system behaves under varying descending temperatures: the role of the initial conditions, the dependence on the quenching rate, and the response to precooling

  5. Reconstruction of implanted marker trajectories from cone-beam CT projection images using interdimensional correlation modeling

    International Nuclear Information System (INIS)

    Chung, Hyekyun; Poulsen, Per Rugaard; Keall, Paul J.; Cho, Seungryong; Cho, Byungchul

    2016-01-01

    Purpose: Cone-beam CT (CBCT) is a widely used imaging modality for image-guided radiotherapy. Most vendors provide CBCT systems that are mounted on a linac gantry. Thus, CBCT can be used to estimate the actual 3-dimensional (3D) position of moving respiratory targets in the thoracic/abdominal region using 2D projection images. The authors have developed a method for estimating the 3D trajectory of respiratory-induced target motion from CBCT projection images using interdimensional correlation modeling. Methods: Because the superior–inferior (SI) motion of a target can be easily analyzed on projection images of a gantry-mounted CBCT system, the authors investigated the interdimensional correlation of the SI motion with left–right and anterior–posterior (AP) movements while the gantry is rotating. A simple linear model and a state-augmented model were implemented and applied to the interdimensional correlation analysis, and their performance was compared. The parameters of the interdimensional correlation models were determined by least-square estimation of the 2D error between the actual and estimated projected target position. The method was validated using 160 3D tumor trajectories from 46 thoracic/abdominal cancer patients obtained during CyberKnife treatment. The authors’ simulations assumed two application scenarios: (1) retrospective estimation for the purpose of moving tumor setup used just after volumetric matching with CBCT; and (2) on-the-fly estimation for the purpose of real-time target position estimation during gating or tracking delivery, either for full-rotation volumetric-modulated arc therapy (VMAT) in 60 s or a stationary six-field intensity-modulated radiation therapy (IMRT) with a beam delivery time of 20 s. Results: For the retrospective CBCT simulations, the mean 3D root-mean-square error (RMSE) for all 4893 trajectory segments was 0.41 mm (simple linear model) and 0.35 mm (state-augmented model). In the on-the-fly simulations, prior

  6. Reconstruction of implanted marker trajectories from cone-beam CT projection images using interdimensional correlation modeling

    Energy Technology Data Exchange (ETDEWEB)

    Chung, Hyekyun [Department of Nuclear and Quantum Engineering, Korea Advanced Institute of Science and Technology, Daejeon 34141, South Korea and Department of Radiation Oncology, Asan Medical Center, University of Ulsan College of Medicine, Seoul 138-736 (Korea, Republic of); Poulsen, Per Rugaard [Department of Oncology, Aarhus University Hospital, Nørrebrogade 44, 8000 Aarhus C (Denmark); Keall, Paul J. [Radiation Physics Laboratory, Sydney Medical School, University of Sydney, NSW 2006 (Australia); Cho, Seungryong [Department of Nuclear and Quantum Engineering, Korea Advanced Institute of Science and Technology, Daejeon 34141 (Korea, Republic of); Cho, Byungchul, E-mail: cho.byungchul@gmail.com, E-mail: bcho@amc.seoul.kr [Department of Radiation Oncology, Asan Medical Center, University of Ulsan College of Medicine, Seoul 05505 (Korea, Republic of)

    2016-08-15

    Purpose: Cone-beam CT (CBCT) is a widely used imaging modality for image-guided radiotherapy. Most vendors provide CBCT systems that are mounted on a linac gantry. Thus, CBCT can be used to estimate the actual 3-dimensional (3D) position of moving respiratory targets in the thoracic/abdominal region using 2D projection images. The authors have developed a method for estimating the 3D trajectory of respiratory-induced target motion from CBCT projection images using interdimensional correlation modeling. Methods: Because the superior–inferior (SI) motion of a target can be easily analyzed on projection images of a gantry-mounted CBCT system, the authors investigated the interdimensional correlation of the SI motion with left–right and anterior–posterior (AP) movements while the gantry is rotating. A simple linear model and a state-augmented model were implemented and applied to the interdimensional correlation analysis, and their performance was compared. The parameters of the interdimensional correlation models were determined by least-square estimation of the 2D error between the actual and estimated projected target position. The method was validated using 160 3D tumor trajectories from 46 thoracic/abdominal cancer patients obtained during CyberKnife treatment. The authors’ simulations assumed two application scenarios: (1) retrospective estimation for the purpose of moving tumor setup used just after volumetric matching with CBCT; and (2) on-the-fly estimation for the purpose of real-time target position estimation during gating or tracking delivery, either for full-rotation volumetric-modulated arc therapy (VMAT) in 60 s or a stationary six-field intensity-modulated radiation therapy (IMRT) with a beam delivery time of 20 s. Results: For the retrospective CBCT simulations, the mean 3D root-mean-square error (RMSE) for all 4893 trajectory segments was 0.41 mm (simple linear model) and 0.35 mm (state-augmented model). In the on-the-fly simulations, prior

  7. Distributing Correlation Coefficients of Linear Structure-Activity/Property Models

    Directory of Open Access Journals (Sweden)

    Sorana D. BOLBOACA

    2011-12-01

    Quantitative structure-activity/property relationships are mathematical relationships linking chemical structure and activity/property in a quantitative manner. These in silico approaches are frequently used to reduce animal testing in risk assessment, as well as to increase time- and cost-effectiveness in the characterization and identification of active compounds. The aim of our study was to investigate the pattern of the distribution of correlation coefficients associated with simple linear relationships linking compound structure with activity. A set of the most common ordnance compounds found at naval facilities, comprising a limited data set with a range of toxicities on aquatic ecosystems and a set of seven properties, was studied. Statistically significant models were selected and investigated. The probability density function of the correlation coefficients was investigated using a series of possible continuous distribution laws. Almost 48% of the correlation coefficients proved to fit a Beta distribution, 40% a Generalized Pareto distribution, and 12% a Pert distribution.
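    The idea of fitting a Beta distribution to a population of correlation coefficients can be sketched with simulated data and a method-of-moments fit. The synthetic linear relationships, sample size, and noise level below are stand-ins for the paper's ordnance-compound data, and the paper's actual fitting procedure may differ.

```python
import numpy as np

rng = np.random.default_rng(42)

def pearson_r(x, y):
    """Sample Pearson correlation coefficient."""
    x = x - x.mean()
    y = y - y.mean()
    return float((x @ y) / np.sqrt((x @ x) * (y @ y)))

# Simulate |r| values from many small, noisy linear structure-activity sets
rs = []
for _ in range(500):
    x = rng.normal(size=15)                       # hypothetical descriptor
    y = 0.8 * x + rng.normal(scale=0.5, size=15)  # hypothetical activity
    rs.append(abs(pearson_r(x, y)))
rs = np.asarray(rs)

# Method-of-moments fit of Beta(alpha, beta) on [0, 1]
m, v = rs.mean(), rs.var()
common = m * (1.0 - m) / v - 1.0
alpha, beta_param = m * common, (1.0 - m) * common
```

    Since |r| is bounded on [0, 1], the Beta family is a natural first candidate, consistent with the paper's finding that nearly half the coefficients follow a Beta distribution.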

  8. Bose-Einstein correlation in the Lund model

    International Nuclear Information System (INIS)

    Anderson, B.

    1998-01-01

    I will present the Lund Model fragmentation in a somewhat different way than what is usually done. It is true that the formulas are derived from (semi-)classical probability arguments, but they can be motivated in a quantum mechanical setting and it is in particular possible to derive a transition matrix element. I will present two scenarios, one based upon Schwinger tunneling and one upon Wilson loop operators. The results will coincide and throw some light upon the sizes of the three main phenomenological parameters which occur in the Lund Model. After that I will show that in this way it is possible to obtain a model for the celebrated Bose-Einstein correlations between two bosons with small relative momenta. This model will exhibit non-trivial two- and three-particle BE correlations, influence the observed p⊥-spectrum and finally be different for charged and neutral pion correlations. (author)

  9. A simple rainfall-runoff model based on hydrological units applied to the Teba catchment (south-east Spain)

    Science.gov (United States)

    Donker, N. H. W.

    2001-01-01

    A hydrological model (YWB, yearly water balance) has been developed to model the daily rainfall-runoff relationship of the 202 km² Teba river catchment, located in semi-arid south-eastern Spain. The period of available data (1976-1993) includes some very rainy years with intensive storms (responsible for flooding parts of the town of Malaga) and also some very dry years. The YWB model is in essence a simple tank model in which the catchment is subdivided into a limited number of meaningful hydrological units. Instead of generating, per unit, surface runoff resulting from infiltration excess, runoff has been made the result of storage excess. Actual evapotranspiration is obtained by means of curves, included in the software, representing the ratio of actual to potential evapotranspiration as a function of soil moisture content for three soil texture classes. The total runoff generated is split between base flow and surface runoff according to a given baseflow index. The two components are routed separately and subsequently joined. A large number of sequential years can be processed, and the results of each year are summarized by a water balance table and a daily based rainfall-runoff time series. An attempt has been made to restrict the amount of input data to the minimum. Interactive manual calibration is advocated in order to allow better incorporation of field evidence and the experience of the model user. Field observations allowed for an approximate calibration at the hydrological unit level.
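    A minimal storage-excess tank of the kind described above can be sketched as a single daily step. The storage capacity, baseflow index and crude actual-evapotranspiration cap below are placeholder simplifications, not the YWB implementation (which uses texture-dependent evapotranspiration curves).

```python
def tank_step(storage, rain, pet, capacity=100.0, bfi=0.4):
    """One daily step of a storage-excess tank for a single hydrological unit.

    storage  : soil water store at start of day (mm)
    rain     : daily rainfall (mm)
    pet      : potential evapotranspiration (mm); actual ET is simply capped
               by available water here (an assumed simplification)
    capacity : storage capacity (mm)        -- assumed value
    bfi      : baseflow index for the split -- assumed value
    Returns (new_storage, baseflow, surface_runoff) in mm.
    """
    storage += rain
    aet = min(pet, storage)                  # crude actual-ET limit
    storage -= aet
    excess = max(0.0, storage - capacity)    # runoff generated by storage excess
    storage -= excess
    return storage, bfi * excess, (1.0 - bfi) * excess

# A wet day on a nearly full store produces both baseflow and surface runoff
state = tank_step(90.0, 30.0, 5.0)
```

    The key design point from the abstract is visible here: no runoff occurs until the store exceeds its capacity, so runoff is driven by storage excess rather than infiltration excess.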

  10. A simple rain attenuation model for earth-space radio links operating at 10-35 GHz

    Science.gov (United States)

    Stutzman, W. L.; Yon, K. M.

    1986-01-01

    The simple attenuation model has been improved from an earlier version and now includes the effect of wave polarization. The model is for the prediction of rain attenuation statistics on earth-space communication links operating in the 10-35 GHz band. Simple calculations produce attenuation values as a function of average rain rate. These, together with rain rate statistics (either measured or predicted), can be used to predict annual rain attenuation statistics. In this paper, model predictions are compared to measured data from a database of 62 experiments performed in the U.S., Europe, and Japan. Comparisons are also made to predictions from other models.
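    Models of this class typically combine a power-law specific attenuation with an effective path length that saturates in heavy rain. The sketch below illustrates that structure only; the coefficients a, b and the path-reduction constant l0 are placeholders, not the Stutzman-Yon fit.

```python
def rain_attenuation_db(rain_rate, path_km, a=0.02, b=1.2, l0=35.0):
    """Illustrative rain-attenuation estimate for an earth-space link.

    rain_rate : rain rate R (mm/h)
    path_km   : geometric slant path through rain (km)
    a, b      : power-law coefficients for gamma = a * R**b (dB/km) -- assumed
    l0        : path-reduction constant (km)                        -- assumed
    Returns total attenuation in dB.
    """
    gamma = a * rain_rate ** b               # specific attenuation (dB/km)
    l_eff = path_km / (1.0 + path_km / l0)   # effective path length (km)
    return gamma * l_eff
```

    Combined with measured or predicted rain-rate statistics, such a function maps each rain-rate percentile to an attenuation percentile, which is how annual attenuation statistics are built up.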

  11. Auto-correlation based intelligent technique for complex waveform presentation and measurement

    International Nuclear Information System (INIS)

    Rana, K P S; Singh, R; Sayann, K S

    2009-01-01

    Waveform acquisition and presentation form the heart of many measurement systems. In particular, data acquisition and presentation of repeating complex signals like sine sweeps and frequency-modulated signals introduce the challenge of waveform time-period estimation and live waveform presentation. This paper presents an intelligent technique, based on the normalized auto-correlation method, for waveform period estimation of both complex and simple waveforms. The proposed technique is demonstrated using LabVIEW-based intensive simulations on several simple and complex waveforms. Implementation of the technique is successfully demonstrated using LabVIEW-based virtual instrumentation. Sine sweep vibration waveforms are successfully presented and measured for vibrations generated by an electrodynamic shaker system. The proposed method is also suitable for digital storage oscilloscope (DSO) triggering, for complex signal acquisition and presentation. This intelligence can be embodied in the DSO, making it an intelligent measurement system catering to a wide variety of waveforms. The proposed technique, simulation results, robustness study and implementation results are presented in this paper.
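    The core of normalized-autocorrelation period estimation can be sketched in a few lines: normalize the autocorrelation so that lag zero equals one, skip past the first zero crossing, and take the lag of the dominant peak. This is a generic sketch of the method, not the paper's LabVIEW implementation.

```python
import numpy as np

def estimate_period(x, fs):
    """Estimate the period of a repeating waveform via normalized autocorrelation.

    x  : 1-D array of signal samples
    fs : sampling rate (Hz)
    Returns the lag (in seconds) of the first dominant autocorrelation peak.
    """
    x = np.asarray(x, dtype=float)
    x = x - x.mean()
    ac = np.correlate(x, x, mode="full")[x.size - 1:]  # non-negative lags
    ac = ac / ac[0]                                    # normalize: ac[0] == 1
    below = np.where(ac < 0)[0]                        # first zero crossing
    start = int(below[0]) if below.size else 1
    lag = start + int(np.argmax(ac[start:]))           # dominant peak beyond it
    return lag / fs

# A 50 Hz sine sampled at 1 kHz should yield a period near 0.02 s
fs = 1000.0
t = np.arange(0.0, 1.0, 1.0 / fs)
period = estimate_period(np.sin(2 * np.pi * 50.0 * t), fs)
```

    Skipping to the first zero crossing before searching for the peak is what prevents the trivial maximum at lag zero from being selected.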

  12. A simple spatiotemporal chaotic Lotka-Volterra model

    International Nuclear Information System (INIS)

    Sprott, J.C.; Wildenberg, J.C.; Azizi, Yousef

    2005-01-01

    A mathematically simple example of a high-dimensional (many-species) Lotka-Volterra model that exhibits spatiotemporal chaos in one spatial dimension is described. The model consists of a closed ring of identical agents, each competing for fixed finite resources with two of its four nearest neighbors. The model is prototypical of more complicated models in its quasiperiodic route to chaos (including attracting 3-tori), bifurcations, spontaneous symmetry breaking, and spatial pattern formation
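    The ring structure described above can be sketched as a discretized Lotka-Volterra competition system. The particular choice of the two neighbours (i-2 and i+1), the competition weight, and the time step below are illustrative assumptions, not the parameter values of the published model.

```python
import numpy as np

def lv_ring_step(x, dt=0.05, b=0.8):
    """One Euler step of a ring Lotka-Volterra competition model.

    Each of the identical agents competes with itself and with two of its
    four nearest neighbours on the closed ring (here i-2 and i+1; the choice
    and the weight b are assumed for illustration):
        dx_i/dt = x_i * (1 - x_i - b*(x_{i-2} + x_{i+1}))
    """
    coupling = x + b * (np.roll(x, 2) + np.roll(x, -1))
    return x + dt * x * (1.0 - coupling)

rng = np.random.default_rng(0)
x = rng.uniform(0.1, 0.9, size=100)   # 100 species on a closed ring
for _ in range(2000):
    x = lv_ring_step(x)
```

    Because every species competes with itself (the logistic self-term), populations remain positive and bounded, which is the setting in which the paper's spatiotemporal patterns arise.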

  13. Trans and cis influences and effects in cobalamins and in their simple models.

    Science.gov (United States)

    De March, Matteo; Demitri, Nicola; Geremia, Silvano; Hickey, Neal; Randaccio, Lucio

    2012-11-01

    The interligand interactions in coordination compounds have been principally interpreted in terms of cis and trans influences and effects, which can be defined as the ability of a ligand X to affect the bond of another ligand, cis or trans to X, to the metal. This review analyzes these effects/influences in cobalamins (XCbl) and their simple models, the cobaloximes LCo(chel)X. Important properties of these complexes, such as geometry, stability, and reactivity, can be rationalized in terms of steric and electronic factors of the ligands. Experimental evidence of normal and inverse trans influence is described in alkylcobaloximes for the first time. The study of simple B12 models has complemented that on the more complex cobalamins, with particular emphasis on the properties of the axial L-Co-X moiety. Some of the conclusions reached for the axial fragment of simple models have also been qualitatively detected in cobalamins and have furnished new insight into the as yet unestablished mechanism for the homolytic cleavage of the Co–C bond in the AdoCbl-based enzymes. Copyright © 2012 Elsevier Inc. All rights reserved.

  14. Simple Harmonics Motion experiment based on LabVIEW interface for Arduino

    Science.gov (United States)

    Tong-on, Anusorn; Saphet, Parinya; Thepnurat, Meechai

    2017-09-01

    In this work, we developed an affordable, modern physics lab apparatus. An ultrasonic sensor is used to measure the position of a mass attached to a spring as a function of time. The data acquisition system and control device were developed based on a LabVIEW interface for the Arduino UNO R3. The experiment was designed to explain wave propagation, which is modeled by simple harmonic motion. The simple harmonic system (mass and spring) was observed, and the motion was characterized by curve fitting to the wave equation in Mathematica. We found that the spring constants provided by Hooke's law and the wave-equation fit are 9.9402 and 9.1706 N/m, respectively.
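    The curve-fitting step can be sketched in Python instead of Mathematica: fit x(t) = A cos(ωt + φ) + c to the position data, then recover the spring constant from k = mω². The mass, spring constant and initial guesses below are assumed values for a synthetic demonstration, not the paper's measured data.

```python
import numpy as np
from scipy.optimize import curve_fit

def shm(t, amp, omega, phase, offset):
    """Position of a mass on a spring: x(t) = A*cos(omega*t + phi) + c."""
    return amp * np.cos(omega * t + phase) + offset

# Synthetic "ultrasonic sensor" trace: m = 0.1 kg, k = 9.5 N/m (assumed)
m_kg, k_true = 0.1, 9.5
t = np.linspace(0.0, 5.0, 500)
x = shm(t, 0.05, np.sqrt(k_true / m_kg), 0.3, 0.2)

# Fit the model and recover the spring constant from k = m * omega**2
popt, _ = curve_fit(shm, t, x, p0=[0.04, 9.5, 0.2, 0.15])
k_fit = m_kg * popt[1] ** 2
```

    In practice a reasonable initial frequency guess (e.g. from counting oscillations on the plot) is needed, since frequency fits over many cycles have local minima.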

  15. Correlation among electronegativity, cation polarizability, optical basicity and single bond strength of simple oxides

    Energy Technology Data Exchange (ETDEWEB)

    Dimitrov, Vesselin, E-mail: vesselin@uctm.edu [Department of Silicate Technology, University of Chemical Technology and Metallurgy, 8, Kl. Ohridski Blvd., Sofia 1756 (Bulgaria); Komatsu, Takayuki, E-mail: komatsu@mst.nagaokaut.ac.jp [Department of Materials Science and Technology, Nagaoka University of Technology, 1603-1 Kamitomioka-cho, Nagaoka 940-2188 (Japan)

    2012-12-15

    A suitable relationship between free-cation polarizability and electronegativity of elements in different valence states and with the most common coordination numbers has been sought on the basis of the similarity in physical nature of both quantities. In general, the cation polarizability increases with decreasing element electronegativity. A systematic periodic change in the polarizability against the electronegativity has been observed in the isoelectronic series. It has been found that generally the optical basicity increases and the single bond strength of simple oxides decreases with decreasing electronegativity. The observed trends have been discussed on the basis of the electron donation ability of the oxide ions and the type of chemical bonding in simple oxides. - Graphical abstract: This figure shows the single bond strength of simple oxides as a function of element electronegativity. A remarkable correlation exists between these independently obtained quantities. High values of electronegativity correspond to high values of single bond strength and vice versa. It is obvious that the observed trend is closely related to the type of chemical bonding in the corresponding oxide. Highlights: • A suitable relationship between free-cation polarizability and electronegativity of elements was sought. • The cation polarizability increases with decreasing element electronegativity. • The single bond strength of simple oxides decreases with decreasing electronegativity. • The observed trends were discussed on the basis of the type of chemical bonding in simple oxides.

  16. Simple intake and pharmacokinetic modeling to characterize exposure of Americans to perfluorooctanoic acid, PFOA.

    Science.gov (United States)

    Lorber, Matthew; Egeghy, Peter P

    2011-10-01

    Models for assessing intakes of perfluorooctanoic acid, PFOA, are described and applied. One model is based on exposure media concentrations and contact rates. This model is applied to general population exposures for adults and 2-year-old children. The other model is a simple one-compartment, first-order pharmacokinetic (PK) model. Parameters for this model include a rate of elimination of PFOA and a blood volume of distribution. The model was applied to data from the National Health and Nutrition Examination Survey, NHANES, to back-calculate intakes. The central tendency intake estimates for adults and children based on exposure media concentrations and contact rates were 70 and 26 ng/day, respectively. The central tendency adult intakes derived from NHANES data were 56 and 37 ng/day for males and females, respectively. Variability and uncertainty discussions regarding the intake modeling focus on the lack of data on direct exposure to PFOA used in consumer products, precursor compounds, and food. Discussions regarding PK modeling focus on the range of blood measurements in NHANES, the appropriateness of the simple PK model, and the uncertainties associated with model parameters. Using the PK model, the 10th and 95th percentile long-term average adult intakes of PFOA are 15 and 130 ng/day.
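    At steady state, a one-compartment, first-order PK model back-calculates intake from a serum level as intake = C × Vd × k_e, with k_e = ln(2)/t½. The volume of distribution, half-life and body weight below are illustrative literature-style values, not necessarily the parameters used by Lorber and Egeghy.

```python
import math

def pfoa_intake_ng_per_day(serum_ng_ml, vd_ml_per_kg=170.0,
                           half_life_days=3.5 * 365.0, body_kg=70.0):
    """Back-calculate steady-state PFOA intake from a serum concentration.

    serum_ng_ml    : measured serum PFOA (ng/mL), e.g. from NHANES
    vd_ml_per_kg   : blood volume of distribution (mL/kg) -- assumed value
    half_life_days : elimination half-life (days)         -- assumed value
    body_kg        : body weight (kg)                     -- assumed value
    Returns the implied constant intake in ng/day.
    """
    k_e = math.log(2.0) / half_life_days      # first-order elimination (1/day)
    return serum_ng_ml * vd_ml_per_kg * body_kg * k_e

# A serum level of 4 ng/mL implies an intake of a few tens of ng/day
intake = pfoa_intake_ng_per_day(4.0)
```

    Because the model is linear, the back-calculated intake scales directly with the serum level, so a distribution of NHANES serum values maps onto a distribution of intakes.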

  17. Towards a Simple Constitutive Model for Bread Dough

    Science.gov (United States)

    Tanner, Roger I.

    2008-07-01

    Wheat flour dough is an example of a soft solid material consisting of a gluten (rubbery) network with starch particles as a filler. The volume fraction of the starch filler is high, typically 60%. A computer-friendly constitutive model has been lacking for this type of material, and here we report on progress towards finding such a model. The model must describe the response to small strains, simple shearing starting from rest, simple elongation, biaxial straining, recoil and various other transient flows. A viscoelastic Lodge-type model involving a damage function, which depends on the strain from an initial reference state, fits the given data well, and it is also able to predict the thickness at exit from dough sheeting, which has been a long-standing unsolved puzzle. The model also shows an apparent rate-dependent yield stress, although no explicit yield stress is built into the model. This behaviour agrees with the early (1934) observations of Schofield and Scott Blair on dough recoil after unloading.

  18. Healthy habits: efficacy of simple advice on weight control based on a habit-formation model.

    Science.gov (United States)

    Lally, P; Chipperfield, A; Wardle, J

    2008-04-01

    Objective: To evaluate the efficacy of a simple weight loss intervention, based on principles of habit formation. Design: An exploratory trial in which overweight and obese adults were randomized either to a habit-based intervention condition (with two subgroups given weekly vs monthly weighing; n=33, n=36) or to a waiting-list control condition (n=35) over 8 weeks. Intervention participants were followed up for 8 months. Subjects: A total of 104 adults (35 men, 69 women) with an average BMI of 30.9 kg m⁻². Intervention: Participants were given a leaflet containing advice on habit formation and simple recommendations for eating and activity behaviours promoting negative energy balance, together with a self-monitoring checklist. Measurements: Weight change over 8 weeks in the intervention condition compared with the control condition, and weight loss maintenance over 32 weeks in the intervention condition. Results: At 8 weeks, people in the intervention condition had lost significantly more weight (mean=2.0 kg) than those in the control condition (0.4 kg), with no difference between weekly and monthly weighing subgroups. At 32 weeks, those who remained in the study had lost an average of 3.8 kg, with 54% losing 5% or more of their body weight. An intention-to-treat analysis (based on last-observation-carried-forward) reduced this to 2.6 kg, with 26% achieving a 5% weight loss. Conclusion: This easily disseminable, low-cost, simple intervention produced clinically significant weight loss. In limited resource settings it has potential as a tool for obesity management.

  19. A simple model for the magnetoelectric interaction in multiferroics

    International Nuclear Information System (INIS)

    Filho, Cesar J Calderon; Barberis, Gaston E

    2011-01-01

    The (anti)ferromagnetic and ferroelectric transitions in some multiferroic compounds seem to be strongly correlated. Even for systems that do not show spontaneous ferroelectricity, such as the LiMPO4 (M = Mn, Fe, Co, Ni) compounds, the coupling between magnetic and electric degrees of freedom is evident experimentally. Here, we present a simple numerical calculation to simulate this coupling that leads to the two transitions. We assume a magnetic sublattice consisting of classical magnetic moments coupled to a separate nonmagnetic sublattice consisting of classical electric dipoles. The coupling between them is realized through a phenomenological spin-lattice Hamiltonian, and the solution is obtained using the Monte Carlo technique. In the simplest version, the magnetic system is a 2D Ising (anti)ferromagnetic lattice, with nearest-neighbor interactions only, and the electric moments are permanent moments, coupled electrically. Within this approximation, the second-order magnetic transition induces ferroelectricity in the electric dipoles. We show that these calculations can be extended to other magnetic systems (the XY model and the 3D Heisenberg model) and to systems where the electric moments are created by strains, generated via spin-lattice coupling, so the model can be applied to realistic systems such as the olivines mentioned above.

  20. A simple model for skewed species-lifetime distributions

    KAUST Repository

    Murase, Yohsuke; Shimada, Takashi; Ito, Nobuyasu

    2010-01-01

    A simple model of a biological community assembly is studied. Communities are assembled by successive migrations and extinctions of species. In the model, species are interacting with each other. The intensity of the interaction between each pair

  1. Anharmonicities of coupled β and γ vibrations discussed in a simple model

    International Nuclear Information System (INIS)

    Piepenbring, R.; Silvestre-Brac, B.; Szymanski, Z.

    1984-01-01

    The multiphonon method based on β and γ phonons is tested in a simple model allowing an exact solution for a many body fermion system where pairing and quadrupole forces are acting. The properties exhibiting the anharmonicities of the lowest-lying vibrational states of positive parity are nicely reproduced by this method. (orig.)

  2. Anharmonicities of coupled β and γ vibrations discussed in a simple model

    International Nuclear Information System (INIS)

    Piepenbring, R.; Silvestre-Brac, B.; Szymanski, Z.

    1983-11-01

    The multiphonon method based on β and γ phonons is tested in a simple model allowing an exact solution for a many body fermion system where pairing and quadrupole forces are acting. The properties exhibiting the anharmonicities of the lowest-lying vibrational states of positive parity are nicely reproduced by this method

  3. International Space Station Model Correlation Analysis

    Science.gov (United States)

    Laible, Michael R.; Fitzpatrick, Kristin; Hodge, Jennifer; Grygier, Michael

    2018-01-01

    This paper summarizes the on-orbit structural dynamic data and the related modal analysis, model validation and correlation performed for the International Space Station (ISS) configuration ISS Stage ULF7, 2015 Dedicated Thruster Firing (DTF). The objective of this analysis is to validate and correlate the analytical models used to calculate the ISS internal dynamic loads and compare the 2015 DTF with previous tests. During the ISS configurations under consideration, on-orbit dynamic measurements were collected using the three main ISS instrumentation systems; Internal Wireless Instrumentation System (IWIS), External Wireless Instrumentation System (EWIS) and the Structural Dynamic Measurement System (SDMS). The measurements were recorded during several nominal on-orbit DTF tests on August 18, 2015. Experimental modal analyses were performed on the measured data to extract modal parameters including frequency, damping, and mode shape information. Correlation and comparisons between test and analytical frequencies and mode shapes were performed to assess the accuracy of the analytical models for the configurations under consideration. These mode shapes were also compared to earlier tests. Based on the frequency comparisons, the accuracy of the mathematical models is assessed and model refinement recommendations are given. In particular, results of the first fundamental mode will be discussed, nonlinear results will be shown, and accelerometer placement will be assessed.

  4. The analysis of nonstationary time series using regression, correlation and cointegration

    DEFF Research Database (Denmark)

    Johansen, Søren

    2012-01-01

There are simple well-known conditions for the validity of regression and correlation as statistical tools. We analyse by examples the effect of nonstationarity on inference using these methods and compare them to model based inference using the cointegrated vector autoregressive model. Finally we analyse some monthly data from the US on interest rates as an illustration of the methods.
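The nonstationarity pitfall the authors analyse can be reproduced in a few lines: two independent random walks routinely show large sample correlations, while independent stationary noise does not (a minimal simulation, not the paper's interest-rate data):

```python
import numpy as np

rng = np.random.default_rng(0)
T, reps = 500, 200

def abs_corr(x, y):
    return abs(float(np.corrcoef(x, y)[0, 1]))

# independent stationary noise: sample correlation stays near zero
corr_noise = [abs_corr(rng.standard_normal(T), rng.standard_normal(T))
              for _ in range(reps)]
# independent random walks (nonstationary): "spurious" correlation is routinely large
corr_walk = [abs_corr(np.cumsum(rng.standard_normal(T)),
                      np.cumsum(rng.standard_normal(T)))
             for _ in range(reps)]

print(round(float(np.mean(corr_noise)), 3), round(float(np.mean(corr_walk)), 3))
```

The usual t-statistics are invalid in the second case, which is the motivation for cointegration-based inference.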

  5. Correlation Structures of Correlated Binomial Models and Implied Default Distribution

    Science.gov (United States)

    Mori, Shintaro; Kitsukawa, Kenji; Hisakado, Masato

    2008-11-01

    We show how to analyze and interpret the correlation structures, the conditional expectation values and correlation coefficients of exchangeable Bernoulli random variables. We study implied default distributions for the iTraxx-CJ tranches and some popular probabilistic models, including the Gaussian copula model, Beta binomial distribution model and long-range Ising model. We interpret the differences in their profiles in terms of the correlation structures. The implied default distribution has singular correlation structures, reflecting the credit market implications. We point out two possible origins of the singular behavior.
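For the Beta binomial distribution model mentioned above, the default-count distribution and the implied pairwise default correlation of the exchangeable Bernoulli variables have closed forms. A minimal sketch with hypothetical parameters (not calibrated to iTraxx-CJ):

```python
import math

# Default count K ~ Binomial(n, p) with p ~ Beta(a, b): the Beta binomial model.
# Parameters below are hypothetical, chosen so the mean default prob is 10%.
def log_beta(x, y):
    return math.lgamma(x) + math.lgamma(y) - math.lgamma(x + y)

def beta_binomial_pmf(k, n, a, b):
    return math.comb(n, k) * math.exp(log_beta(k + a, n - k + b) - log_beta(a, b))

n, a, b = 50, 1.0, 9.0
pmf = [beta_binomial_pmf(k, n, a, b) for k in range(n + 1)]
mean_defaults = sum(k * p for k, p in enumerate(pmf))
# pairwise correlation of exchangeable Bernoullis under the Beta mixture:
# rho = Var(p) / (mu(1-mu)) = 1 / (a + b + 1)
rho = 1.0 / (a + b + 1.0)
print(round(sum(pmf), 6), round(mean_defaults, 3), round(rho, 4))
```

A flat correlation structure like this single rho is exactly what the implied default distributions in the paper deviate from.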

  6. Towards the Development of a Second-Order Approximation in Activity Coefficient Models Based on Group Contributions

    DEFF Research Database (Denmark)

    Abildskov, Jens; Constantinou, Leonidas; Gani, Rafiqul

    1996-01-01

A simple modification of group contribution based models for estimation of liquid phase activity coefficients is proposed. The main feature of this modification is that contributions estimated from the present first-order groups in many instances are found insufficient since the first-order groups … correlation/prediction capabilities, distinction between isomers and ability to overcome proximity effects…

  7. Simple model of the slingshot effect

    Directory of Open Access Journals (Sweden)

    Gaetano Fiore

    2016-07-01

We present a detailed quantitative description of the recently proposed “slingshot effect.” Namely, we determine a broad range of conditions under which the impact of a very short and intense laser pulse normally onto a low-density plasma (or matter locally completely ionized into a plasma by the pulse) causes the expulsion of a bunch of surface electrons in the direction opposite to the one of propagation of the pulse, and the detailed, ready-for-experiments features of the expelled electrons (energy spectrum, collimation, etc.). The effect is due to the combined actions of the ponderomotive force and the huge longitudinal field arising from charge separation. Our predictions are based on estimating 3D corrections to a simple, yet powerful plane 2-fluid magnetohydrodynamic (MHD) model where the equations to be solved are reduced to a system of Hamilton equations in one dimension (or a collection of such systems), which become autonomous after the pulse has overcome the electrons. Experimental tests seem to be at hand. If confirmed by the latter, the effect would provide a new extraction and acceleration mechanism for electrons, alternative to traditional radio-frequency-based or Laser-Wake-Field ones.

  8. A simple model for determining photoelectron-generated radiation scaling laws

    International Nuclear Information System (INIS)

    Dipp, T.M.

    1993-12-01

The generation of radiation via photoelectrons induced off of a conducting surface was explored using a simple model to determine fundamental scaling laws. The model is one-dimensional (small-spot) and uses monoenergetic, nonrelativistic photoelectrons emitted normal to the illuminated conducting surface. Simple steady-state radiation, frequency, and maximum orbital distance equations were derived using small-spot radiation equations, a sin²-type modulation function, and simple photoelectron dynamics. The result is a system of equations for various scaling laws, which, along with model and user constraints, are simultaneously solved using techniques similar to linear programming. Typical conductors illuminated by low-power sources producing photons with energies less than 5.0 eV are readily modeled by this small-spot, steady-state analysis, which shows they generally produce low-efficiency (η ≲ 10^-10.5) pure photoelectron-induced radiation. However, the small-spot theory predicts that the total conversion efficiency for incident photon power to photoelectron-induced radiated power can go higher than 10^-5.5 for typical real conductors if photons having energies of 15 eV and higher are used, and should go even higher still if the small-spot limit of this theory is exceeded as well. Overall, the simple theory equations, model constraint equations, and solution techniques presented provide a foundation for understanding, predicting, and optimizing the generated radiation, and the simple theory equations provide scaling laws to compare with computational and laboratory experimental data

  9. A Simple Geotracer Compositional Correlation Analysis Reveals Oil Charge and Migration Pathways

    Science.gov (United States)

    Yang, Yunlai; Arouri, Khaled

    2016-03-01

    A novel approach, based on geotracer compositional correlation analysis is reported, which reveals the oil charge sequence and migration pathways for five oil fields in Saudi Arabia. The geotracers utilised are carbazoles, a family of neutral pyrrolic nitrogen compounds known to occur naturally in crude oils. The approach is based on the concept that closely related fields, with respect to filling sequence, will show a higher carbazole compositional correlation, than those fields that are less related. That is, carbazole compositional correlation coefficients can quantify the charge and filling relationships among different fields. Consequently, oil migration pathways can be defined based on the established filling relationships. The compositional correlation coefficients of isomers of C1 and C2 carbazoles, and benzo[a]carbazole for all different combination pairs of the five fields were found to vary extremely widely (0.28 to 0.94). A wide range of compositional correlation coefficients allows adequate differentiation of separate filling relationships. Based on the established filling relationships, three distinct migration pathways were inferred, with each apparently being charged from a different part of a common source kitchen. The recognition of these charge and migration pathways will greatly aid the search for new accumulations.
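The core of the approach is a pairwise correlation coefficient computed between carbazole composition vectors of different fields. A minimal sketch with hypothetical abundance vectors (illustrative only, not the paper's data), where fields A and B play the role of closely related fields and C an unrelated one:

```python
import numpy as np

# Hypothetical carbazole-isomer abundance vectors for three fields.
# A and B are "closely related" (similar composition); C is not.
fields = {
    "A": [0.12, 0.08, 0.20, 0.15, 0.10, 0.35],
    "B": [0.13, 0.07, 0.19, 0.16, 0.10, 0.35],
    "C": [0.30, 0.25, 0.05, 0.10, 0.20, 0.10],
}

def corr(u, v):
    return float(np.corrcoef(u, v)[0, 1])

names = list(fields)
pairs = {(p, q): corr(fields[p], fields[q])
         for i, p in enumerate(names) for q in names[i + 1:]}
for (p, q), r in pairs.items():
    print(p, q, round(r, 2))
```

High coefficients flag pairs on the same filling pathway; low coefficients separate pathways, which is how the five-field correlation matrix is turned into migration routes.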

  10. A Simple Geotracer Compositional Correlation Analysis Reveals Oil Charge and Migration Pathways.

    Science.gov (United States)

    Yang, Yunlai; Arouri, Khaled

    2016-03-11

    A novel approach, based on geotracer compositional correlation analysis is reported, which reveals the oil charge sequence and migration pathways for five oil fields in Saudi Arabia. The geotracers utilised are carbazoles, a family of neutral pyrrolic nitrogen compounds known to occur naturally in crude oils. The approach is based on the concept that closely related fields, with respect to filling sequence, will show a higher carbazole compositional correlation, than those fields that are less related. That is, carbazole compositional correlation coefficients can quantify the charge and filling relationships among different fields. Consequently, oil migration pathways can be defined based on the established filling relationships. The compositional correlation coefficients of isomers of C1 and C2 carbazoles, and benzo[a]carbazole for all different combination pairs of the five fields were found to vary extremely widely (0.28 to 0.94). A wide range of compositional correlation coefficients allows adequate differentiation of separate filling relationships. Based on the established filling relationships, three distinct migration pathways were inferred, with each apparently being charged from a different part of a common source kitchen. The recognition of these charge and migration pathways will greatly aid the search for new accumulations.

  11. Correlation Structures of Correlated Binomial Models and Implied Default Distribution

    OpenAIRE

    S. Mori; K. Kitsukawa; M. Hisakado

    2006-01-01

    We show how to analyze and interpret the correlation structures, the conditional expectation values and correlation coefficients of exchangeable Bernoulli random variables. We study implied default distributions for the iTraxx-CJ tranches and some popular probabilistic models, including the Gaussian copula model, Beta binomial distribution model and long-range Ising model. We interpret the differences in their profiles in terms of the correlation structures. The implied default distribution h...

  12. Theoretical comparison of performance using transfer functions for reactivity meters based on inverse kinetic method and simple feedback method

    International Nuclear Information System (INIS)

    Shimazu, Yoichiro; Tashiro, Shoichi; Tojo, Masayuki

    2017-01-01

The performance of two digital reactivity meters, one based on the conventional inverse kinetic method and the other based on simple feedback theory, is compared analytically using their respective transfer functions. The latter meter was proposed by one of the authors. It has been shown that the performance of the two reactivity meters becomes almost identical when proper system parameters are selected for each reactivity meter. A new correlation between the system parameters of the two reactivity meters is found. With this correlation, filter designers can easily determine the system parameters for the respective reactivity meters to obtain identical performance. (author)

  13. Comparative Study for Evaluation of Mass Flow Rate for Simple Solar Still and Active with Heat Pump

    Directory of Open Access Journals (Sweden)

    Hidouri Khaoula

    2017-07-01

In isolated and arid areas, especially in the Maghreb regions, the abundant solar radiation throughout the year and the available brackish water resources are two favourable conditions for using solar desalination technology to produce fresh water. The present study is based on the use of three groups of correlations for evaluating mass transfer. Theoretical results are compared with those obtained experimentally for a Simple Solar Distiller (SSD) and a Simple Solar Distiller Hybrid with a Heat Pump (SSDHP). Experimental results and those calculated with the Lewis-number correlation show good agreement. Results obtained with the Dunkle, Kumar and Tiwari correlations do not agree satisfactorily with the experimental ones. Theoretical results, as well as a statistical analysis, are presented. The model with the heat pump, in the two configurations (111) and (001), gives more output than the model without the heat pump, configurations (000) and (110). The statistical analysis agrees: the error is smaller with the Lewis-number correlation than with the other correlations.
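Of the correlations compared above, Dunkle's is the most widely used for solar stills. A sketch of its convective and evaporative coefficients in the standard textbook form (SI units; the water and glass temperatures, and the saturation-pressure fit, are assumed illustrative values rather than the paper's measurements):

```python
import math

# Dunkle's correlation for a basin solar still (standard textbook form, SI units).
def sat_pressure(T_c):
    # common saturation vapour pressure fit, Pa, T in deg C
    return math.exp(25.317 - 5144.0 / (T_c + 273.0))

def dunkle_evaporation(Tw, Tg):
    Pw, Pg = sat_pressure(Tw), sat_pressure(Tg)
    h_c = 0.884 * ((Tw - Tg) + (Pw - Pg) * (Tw + 273.0)
                   / (268.9e3 - Pw)) ** (1.0 / 3.0)      # convective, W/m^2 K
    h_e = 16.273e-3 * h_c * (Pw - Pg) / (Tw - Tg)        # evaporative, W/m^2 K
    h_fg = 2.39e6                                        # latent heat near 60 C, J/kg
    m_dot = h_e * (Tw - Tg) / h_fg * 3600.0              # distillate yield, kg/m^2 h
    return h_c, h_e, m_dot

h_c, h_e, m_dot = dunkle_evaporation(Tw=60.0, Tg=40.0)   # assumed temperatures
print(round(h_c, 2), round(h_e, 1), round(m_dot, 2))
```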

  14. Bacterial genomes lacking long-range correlations may not be modeled by low-order Markov chains: the role of mixing statistics and frame shift of neighboring genes.

    Science.gov (United States)

    Cocho, Germinal; Miramontes, Pedro; Mansilla, Ricardo; Li, Wentian

    2014-12-01

We examine the relationship between exponential correlation functions and Markov models in a bacterial genome in detail. Despite the well known fact that Markov models generate sequences with correlation functions that decay exponentially, simply constructed Markov models based on nearest-neighbor dimer (first-order), trimer (second-order), up to hexamer (fifth-order), and treating the DNA sequence as being homogeneous all fail to predict the value of the exponential decay rate. Even reading-frame-specific Markov models (both first- and fifth-order) could not explain the fact that the exponential decay is very slow. Starting with the in-phase coding-DNA-sequence (CDS), we investigated correlation within a fixed-codon-position subsequence, and in artificially constructed sequences by packing CDSs with out-of-phase spacers, as well as altering the CDS length distribution by imposing an upper limit. From these targeted analyses, we conclude that the correlation in the bacterial genomic sequence is mainly due to a mixing of heterogeneous statistics at different codon positions, and that the decay of correlation is due to the possible out-of-phase between neighboring CDSs. There are also small contributions to the correlation from bases at the same codon position, as well as from non-coding sequences. These results show that the seemingly simple exponential correlation functions in the bacterial genome hide a complexity in correlation structure which is not suitable for modeling by a Markov chain over a homogeneous sequence. Other results include the use of the second largest eigenvalue (in absolute value) to represent the 16 correlation functions, and the prediction of a 10-11 base periodicity from the hexamer frequencies. Copyright © 2014 Elsevier Ltd. All rights reserved.
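The baseline fact the authors start from — a homogeneous first-order Markov chain has correlations decaying as λ^d, with λ the second eigenvalue of the transition matrix — can be checked directly (a generic binary chain, not the genomic data):

```python
import numpy as np

rng = np.random.default_rng(1)
p01, p10 = 0.2, 0.3        # transition probabilities 0->1 and 1->0 (generic values)
lam = 1.0 - p01 - p10      # second eigenvalue: correlations decay as lam**d
n = 200_000
x = np.empty(n, dtype=np.int8)
x[0] = 0
u = rng.random(n)
for i in range(1, n):
    p_one = p01 if x[i - 1] == 0 else 1.0 - p10
    x[i] = u[i] < p_one
xf = x.astype(float)

def autocorr(d):
    return float(np.corrcoef(xf[:-d], xf[d:])[0, 1])

print([round(autocorr(d), 3) for d in (1, 2, 4)],
      [round(lam ** d, 3) for d in (1, 2, 4)])
```

The decay rate is pinned to λ; the paper's point is that the genomic decay rate is far slower than any such low-order homogeneous chain predicts.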

  15. Improved CORF model of simple cell combined with non-classical receptive field and its application on edge detection

    Science.gov (United States)

    Sun, Xiao; Chai, Guobei; Liu, Wei; Bao, Wenzhuo; Zhao, Xiaoning; Ming, Delie

    2018-02-01

Simple cells in primary visual cortex are believed to extract local edge information from a visual scene. In this paper, inspired by the different receptive field properties and visual information flow paths of neurons, an improved Combination of Receptive Fields (CORF) model combined with non-classical receptive fields was proposed to simulate the responses of simple cells' receptive fields. Compared to the classical model, the proposed model is able to better imitate the simple cell's physiological structure by taking the facilitation and suppression of non-classical receptive fields into account. On this basis, an edge detection algorithm was proposed as an application of the improved CORF model. Experimental results validate the robustness of the proposed algorithm to noise and background interference.

  16. A Summary of Interfacial Heat Transfer Models and Correlations

    Energy Technology Data Exchange (ETDEWEB)

    Bae, Sung Won; Cho, Hyung Kyu; Lee, Young Jin; Kim, Hee Chul; Jung, Young Jong; Kim, K. D. [Korea Atomic Energy Research Institute, Daejeon (Korea, Republic of)

    2007-10-15

A long term project was launched in October 2006 to develop a plant safety analysis code. Five organizations are working together to build up the code. In this project, KAERI is in charge of building up the physical models and correlations for the transport phenomena. The mass, momentum and energy transfer terms are surveyed from the RELAP5/MOD3, RELAP5-3D, CATHARE, and TRAC-M codes, along with recent papers. Among these resources, most of the CATHARE models are based on their own experiment and test results; thus, the CATHARE models are used for comparison purposes only. In this paper, a summary of the models and correlations for interfacial heat transfer is presented. These surveyed models and correlations will be tested numerically and one correlation will be selected finally.

  17. Correlations of serum levels of leptin and other related factor (NPY, ADP) in female children with simple obesity

    International Nuclear Information System (INIS)

    Bai Hua; Wei Chunlei; Qian Mingzhu

    2008-01-01

Objective: To study the changes of serum levels of leptin, NPY and ADP in female children with simple obesity. Methods: Serum levels of leptin, NPY and ADP were measured with radioimmunoassay (RIA) in 32 female children with simple obesity and 35 controls. Results: The serum levels of leptin and NPY were significantly higher in the obese children than in controls (P<0.01), while the serum levels of ADP were significantly lower (P<0.01). Serum leptin levels were significantly positively correlated with NPY levels (r=0.6014, P<0.01) but negatively correlated with adiponectin (ADP) levels (r=-0.4786, P<0.01). Conclusion: Determination of serum leptin, NPY and ADP levels is helpful for judging the degree of obesity as well as for outcome prediction in female children. (authors)

  18. Competition and fragmentation: a simple model generating lognormal-like distributions

    International Nuclear Information System (INIS)

    Schwaemmle, V; Queiros, S M D; Brigatti, E; Tchumatchenko, T

    2009-01-01

    The current distribution of language size in terms of speaker population is generally described using a lognormal distribution. Analyzing the original real data we show how the double-Pareto lognormal distribution can give an alternative fit that indicates the existence of a power law tail. A simple Monte Carlo model is constructed based on the processes of competition and fragmentation. The results reproduce the power law tails of the real distribution well and give better results for a poorly connected topology of interactions.
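This is not the authors' competition–fragmentation model, but a minimal reminder of why lognormal-like size distributions arise at all: purely multiplicative growth makes log(size) a sum of i.i.d. terms, hence approximately normal by the central limit theorem (parameters below are illustrative):

```python
import math
import numpy as np

rng = np.random.default_rng(2)
# Repeated random proportional growth/shrinkage: after `steps` rounds,
# log-sizes are sums of i.i.d. Gaussians, so sizes are lognormal.
n_units, steps, sigma = 10_000, 200, 0.1
sizes = np.ones(n_units)
for _ in range(steps):
    sizes *= np.exp(rng.normal(0.0, sigma, n_units))
logs = np.log(sizes)
# expected: log-size mean ~ 0 (zero drift), std ~ sigma * sqrt(steps)
print(round(float(logs.mean()), 2), round(float(logs.std()), 2))
```

The paper's contribution is the deviation from this picture: competition and fragmentation add the power-law (double-Pareto) tails that a pure multiplicative process lacks.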

  19. Fluctuation correlation models for receptor immobilization

    Science.gov (United States)

    Fourcade, B.

    2017-12-01

Nanoscale dynamics with cycles of receptor diffusion and immobilization by cell-external or -internal factors is a key process in living-cell adhesion phenomena, at the origin of a plethora of signal transduction pathways. Motivated by modern correlation microscopy approaches, the receptor correlation functions in physical models based on diffusion-influenced reactions are studied. Using analytical and stochastic modeling, this paper focuses on the hybrid regime where diffusion and reaction are not truly separable. The receptor time-autocorrelation functions are shown to be indexed by different time scales and their asymptotic expansions are given. Stochastic simulations show that this analysis can be extended to situations with a small number of molecules. It is also demonstrated that this analysis applies when receptor immobilization is coupled to environmental noise.

  20. Simple Models Create Steep Learning Curves in Academic Design Studio

    DEFF Research Database (Denmark)

    Hansen, Peter Lundsgaard; Dam, Torben; Le Goffic, Virginie Corinne

    2014-01-01

… theory positions normally regarded as mutually incompatible. The method is the result of years of ‘trial and error’ design studio teaching at the University of Copenhagen, triangulated with academic design theory research. Research based design studio teaching poses a fundamental pedagogical challenge …, as it must combine skill-based design practice with academic-explicated theories and methods. The vehicle in the development of the simple model method is overcoming the challenge of ensuring that a group of students with various backgrounds and cultures can produce specific and consistent design proposals … helps the students work with and understand design as both a product and a process.

  1. Unifying Model-Based and Reactive Programming within a Model-Based Executive

    Science.gov (United States)

    Williams, Brian C.; Gupta, Vineet; Norvig, Peter (Technical Monitor)

    1999-01-01

Real-time, model-based, deduction has recently emerged as a vital component in AI's tool box for developing highly autonomous reactive systems. Yet one of the current hurdles towards developing model-based reactive systems is the number of methods simultaneously employed, and their corresponding melange of programming and modeling languages. This paper offers an important step towards unification. We introduce RMPL, a rich modeling language that combines probabilistic, constraint-based modeling with reactive programming constructs, while offering a simple semantics in terms of hidden state Markov processes. We introduce probabilistic, hierarchical constraint automata (PHCA), which allow Markov processes to be expressed in a compact representation that preserves the modularity of RMPL programs. Finally, a model-based executive, called Reactive Burton, is described that exploits this compact encoding to perform efficient simulation, belief state update and control sequence generation.

  2. What Can We Learn from a Simple Physics-Based Earthquake Simulator?

    Science.gov (United States)

    Artale Harris, Pietro; Marzocchi, Warner; Melini, Daniele

    2018-03-01

Physics-based earthquake simulators are becoming a popular tool to investigate the earthquake occurrence process. So far, the development of earthquake simulators is commonly led by the approach "the more physics, the better". However, this approach may hamper the comprehension of the outcomes of the simulator; in fact, within complex models, it may be difficult to understand which physical parameters are the most relevant to the features of the seismic catalog in which we are interested. For this reason, here, we take the opposite approach and analyze the behavior of a purposely simple earthquake simulator applied to a set of California faults. The idea is that a simple simulator may be more informative than a complex one for some specific scientific objectives, because it is more understandable. Our earthquake simulator has three main components: the first one is a realistic tectonic setting, i.e., a fault data set of California; the second is the application of quantitative laws for earthquake generation on each single fault, and the last is the fault interaction modeling through the Coulomb Failure Function. The analysis of this simple simulator shows that: (1) the short-term clustering can be reproduced by a set of faults with an almost periodic behavior, which interact according to a Coulomb failure function model; (2) a long-term behavior showing supercycles of the seismic activity exists only in a markedly deterministic framework, and quickly disappears introducing a small degree of stochasticity on the recurrence of earthquakes on a fault; (3) faults that are strongly coupled in terms of the Coulomb failure function model are synchronized in time only in a markedly deterministic framework, and, as before, such a synchronization disappears introducing a small degree of stochasticity on the recurrence of earthquakes on a fault. Overall, the results show that even in a simple and perfectly known earthquake occurrence world, introducing a small degree of …

  3. A simple data loss model for positron camera systems

    International Nuclear Information System (INIS)

    Eriksson, L.; Dahlbom, M.

    1994-01-01

A simple model to describe data losses in PET cameras is presented. The model is not intended to be used primarily for dead time corrections in existing scanners, although this is possible. Instead the model is intended to be used for data simulations in order to determine the figures of merit of future camera systems, based on state-of-the-art data handling solutions. The model assumes the data loss to be factorized into two components, one describing the detector or block-detector performance and the other the remaining data handling such as coincidence determination, data transfer and data storage. Two modern positron camera systems have been investigated in terms of this model. These are the Siemens-CTI ECAT EXACT and ECAT EXACT HR systems, which both have an axial field-of-view (FOV) of about 15 cm. They both have retractable septa, can acquire data from the whole volume within the FOV and can reconstruct volume image data. An example is given of how to use the model for live-time calculation in a futuristic large axial FOV cylindrical system
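A hedged sketch of the factorization idea (not the paper's exact equations): the measured rate is the true rate attenuated by a detector dead-time term and a data-handling dead-time term, here modeled with the standard paralyzable and non-paralyzable forms and illustrative dead times:

```python
import math

# Measured coincidence rate = true rate x detector loss factor x handling loss
# factor. tau values (seconds) are illustrative, not scanner specifications.
def measured_rate(n_true, tau_detector, tau_handling):
    r = n_true * math.exp(-n_true * tau_detector)   # paralyzable detector/block loss
    return r / (1.0 + r * tau_handling)             # non-paralyzable handling loss

for n_true in (1e4, 1e5, 1e6):
    m = measured_rate(n_true, 2e-6, 1e-6)
    print(f"true={n_true:.0e}/s  measured={m:.3e}/s  live fraction={m / n_true:.3f}")
```

Because the two factors multiply, each subsystem's figure of merit can be evaluated separately, which is what makes the model convenient for simulating future designs.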

  4. Simple Models for Airport Delays During Transition to a Trajectory-Based Air Traffic System

    Science.gov (United States)

    Brooker, Peter

    It is now widely recognised that a paradigm shift in air traffic control concepts is needed. This requires state-of-the-art innovative technologies, making much better use of the information in the air traffic management (ATM) system. These paradigm shifts go under the names of NextGen in the USA and SESAR in Europe, which inter alia will make dramatic changes to the nature of airport operations. A vital part of moving from an existing system to a new paradigm is the operational implications of the transition process. There would be business incentives for early aircraft fitment, it is generally safer to introduce new technologies gradually, and researchers are already proposing potential transition steps to the new system. Simple queuing theory models are used to establish rough quantitative estimates of the impact of the transition to a more efficient time-based navigational and ATM system. Such models are approximate, but they do offer insight into the broad implications of system change and its significant features. 4D-equipped aircraft in essence have a contract with the airport runway and, in return, they would get priority over any other aircraft waiting for use of the runway. The main operational feature examined here is the queuing delays affecting non-4D-equipped arrivals. These get a reasonable service if the proportion of 4D-equipped aircraft is low, but this can deteriorate markedly for high proportions, and be economically unviable. Preventative measures would be to limit the additional growth of 4D-equipped flights and/or to modify their contracts to provide sufficient space for the non-4D-equipped flights to operate without excessive delays. There is a potential for non-Poisson models, for which there is little in the literature, and for more complex models, e.g. grouping a succession of 4D-equipped aircraft as a batch.
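The priority-for-equipped-aircraft arrangement described above can be sketched with a standard nonpreemptive two-class priority M/M/1 queue (a hedged illustration with assumed rates, not the paper's exact model): class 1 is the 4D-equipped traffic with runway priority, class 2 the non-equipped traffic.

```python
# Nonpreemptive two-class priority M/M/1 (standard queueing formulas).
# lam = total arrival rate, f = fraction of 4D-equipped (priority) flights,
# mu = runway service rate; all values below are illustrative.
def priority_waits(lam, f, mu):
    lam1, lam2 = f * lam, (1.0 - f) * lam
    rho1, rho2 = lam1 / mu, lam2 / mu
    resid = (lam1 + lam2) / mu ** 2          # mean residual work, E[S^2] = 2/mu^2
    w1 = resid / (1.0 - rho1)
    w2 = resid / ((1.0 - rho1) * (1.0 - rho1 - rho2))
    return w1, w2                            # mean queueing waits per class

mu, lam = 1.0, 0.8                           # service rate and total demand (rho = 0.8)
for f in (0.2, 0.5, 0.9):
    w1, w2 = priority_waits(lam, f, mu)
    print(f"equipped fraction {f}: wait(4D)={w1:.2f}, wait(non-4D)={w2:.2f}")
```

The non-equipped wait grows sharply as the equipped fraction rises, matching the transition concern in the text; the class-weighted waits still satisfy the M/M/1 conservation law λ₁W₁ + λ₂W₂ = λρ/(μ−λ).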

  5. The Analysis of Nonstationary Time Series Using Regression, Correlation and Cointegration

    Directory of Open Access Journals (Sweden)

    Søren Johansen

    2012-06-01

There are simple well-known conditions for the validity of regression and correlation as statistical tools. We analyse by examples the effect of nonstationarity on inference using these methods and compare them to model based inference using the cointegrated vector autoregressive model. Finally we analyse some monthly data from the US on interest rates as an illustration of the methods.

  6. Optical-Correlator Neural Network Based On Neocognitron

    Science.gov (United States)

    Chao, Tien-Hsin; Stoner, William W.

    1994-01-01

    Multichannel optical correlator implements shift-invariant, high-discrimination pattern-recognizing neural network based on paradigm of neocognitron. Selected as basic building block of this neural network because invariance under shifts is inherent advantage of Fourier optics included in optical correlators in general. Neocognitron is conceptual electronic neural-network model for recognition of visual patterns. Multilayer processing achieved by iteratively feeding back output of feature correlator to input spatial light modulator and updating Fourier filters. Neural network trained by use of characteristic features extracted from target images. Multichannel implementation enables parallel processing of large number of selected features.

  7. A simple approach to ignoring irrelevant variables by population decoding based on multisensory neurons

    Science.gov (United States)

    Kim, HyungGoo R.; Pitkow, Xaq; Angelaki, Dora E.

    2016-01-01

    Sensory input reflects events that occur in the environment, but multiple events may be confounded in sensory signals. For example, under many natural viewing conditions, retinal image motion reflects some combination of self-motion and movement of objects in the world. To estimate one stimulus event and ignore others, the brain can perform marginalization operations, but the neural bases of these operations are poorly understood. Using computational modeling, we examine how multisensory signals may be processed to estimate the direction of self-motion (i.e., heading) and to marginalize out effects of object motion. Multisensory neurons represent heading based on both visual and vestibular inputs and come in two basic types: “congruent” and “opposite” cells. Congruent cells have matched heading tuning for visual and vestibular cues and have been linked to perceptual benefits of cue integration during heading discrimination. Opposite cells have mismatched visual and vestibular heading preferences and are ill-suited for cue integration. We show that decoding a mixed population of congruent and opposite cells substantially reduces errors in heading estimation caused by object motion. In addition, we present a general formulation of an optimal linear decoding scheme that approximates marginalization and can be implemented biologically by simple reinforcement learning mechanisms. We also show that neural response correlations induced by task-irrelevant variables may greatly exceed intrinsic noise correlations. Overall, our findings suggest a general computational strategy by which neurons with mismatched tuning for two different sensory cues may be decoded to perform marginalization operations that dissociate possible causes of sensory inputs. PMID:27334948

  8. A simple oblique dip model for geomagnetic micropulsations

    Directory of Open Access Journals (Sweden)

    J. A. Lawrie

    It is pointed out that simple models adopted so far have tended to neglect the obliquity of the magnetic field lines entering the Earth's surface. A simple alternative model is presented, in which the ambient field lines are straight, but enter wedge-shaped boundaries at half a right-angle. The model is illustrated by assuming an axially symmetric, compressional, impulse-type disturbance at the outer boundary, all other boundaries being assumed to be perfectly conducting. The numerical method used is checked from the instant the excitation ceases, by an analytical method. The first harmonic along field lines is found to be of noticeable size, but appears to be mainly due to coupling with the fundamental, and with the first harmonic across field lines.

    Key words. Magnetospheric physics (MHD waves and instabilities).

  9. Additive N-step Markov chains as prototype model of symbolic stochastic dynamical systems with long-range correlations

    International Nuclear Information System (INIS)

    Mayzelis, Z.A.; Apostolov, S.S.; Melnyk, S.S.; Usatenko, O.V.; Yampol'skii, V.A.

    2007-01-01

    A theory of symbolic dynamic systems with long-range correlations based on the consideration of the binary N-step Markov chains developed earlier in Phys Rev Lett 2003;90:110601 is generalized to the biased case (non-equal numbers of zeros and unities in the chain). In the model, the conditional probability that the ith symbol in the chain equals zero (or unity) is a linear function of the number of unities (zeros) among the preceding N symbols. The correlation and distribution functions as well as the variance of number of symbols in the words of arbitrary length L are obtained analytically and verified by numerical simulations. A self-similarity of the studied stochastic process is revealed and the similarity group transformation of the chain parameters is presented. The diffusion Fokker-Planck equation governing the distribution function of the L-words is explored. If the persistent correlations are not extremely strong, the distribution function is shown to be the Gaussian with the variance being nonlinearly dependent on L. An equation connecting the memory and correlation function of the additive Markov chain is presented. This equation allows reconstructing a memory function using a correlation function of the system. Effectiveness and robustness of the proposed method is demonstrated by simple model examples. Memory functions of concrete coarse-grained literary texts are found and their universal power-law behavior at long distances is revealed

  10. Additive N-step Markov chains as prototype model of symbolic stochastic dynamical systems with long-range correlations

    Energy Technology Data Exchange (ETDEWEB)

    Mayzelis, Z.A. [Department of Physics, Kharkov National University, 4 Svoboda Sq., Kharkov 61077 (Ukraine); Apostolov, S.S. [Department of Physics, Kharkov National University, 4 Svoboda Sq., Kharkov 61077 (Ukraine); Melnyk, S.S. [A. Ya. Usikov Institute for Radiophysics and Electronics, Ukrainian Academy of Science, 12 Proskura Street, 61085 Kharkov (Ukraine); Usatenko, O.V. [A. Ya. Usikov Institute for Radiophysics and Electronics, Ukrainian Academy of Science, 12 Proskura Street, 61085 Kharkov (Ukraine)]. E-mail: usatenko@ire.kharkov.ua; Yampol' skii, V.A. [A. Ya. Usikov Institute for Radiophysics and Electronics, Ukrainian Academy of Science, 12 Proskura Street, 61085 Kharkov (Ukraine)

    2007-10-15

    A theory of symbolic dynamic systems with long-range correlations based on the consideration of the binary N-step Markov chains developed earlier in Phys Rev Lett 2003;90:110601 is generalized to the biased case (non-equal numbers of zeros and unities in the chain). In the model, the conditional probability that the ith symbol in the chain equals zero (or unity) is a linear function of the number of unities (zeros) among the preceding N symbols. The correlation and distribution functions as well as the variance of number of symbols in the words of arbitrary length L are obtained analytically and verified by numerical simulations. A self-similarity of the studied stochastic process is revealed and the similarity group transformation of the chain parameters is presented. The diffusion Fokker-Planck equation governing the distribution function of the L-words is explored. If the persistent correlations are not extremely strong, the distribution function is shown to be the Gaussian with the variance being nonlinearly dependent on L. An equation connecting the memory and correlation function of the additive Markov chain is presented. This equation allows reconstructing a memory function using a correlation function of the system. Effectiveness and robustness of the proposed method is demonstrated by simple model examples. Memory functions of concrete coarse-grained literary texts are found and their universal power-law behavior at long distances is revealed.
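    The additive N-step Markov chain described in records 9 and 10 can be simulated directly. The sketch below assumes the simplest unbiased parameterisation consistent with the abstract, P(1 | k unities among the last N symbols) = 1/2 + mu·(2k/N − 1); the exact form and parameter values are illustrative, not taken from the paper.

```python
import random

def additive_markov_chain(length, N=10, mu=0.3, seed=1):
    """Binary additive N-step Markov chain (sketch).

    The probability that the next symbol is 1 is a linear function of the
    number of unities k among the preceding N symbols; mu in (0, 1/2)
    produces persistent long-range correlations.
    """
    rng = random.Random(seed)
    chain = [rng.randint(0, 1) for _ in range(N)]  # arbitrary initial word
    for _ in range(length - N):
        k = sum(chain[-N:])                        # unities in the last N symbols
        p1 = 0.5 + mu * (2.0 * k / N - 1.0)
        chain.append(1 if rng.random() < p1 else 0)
    return chain

def correlation(chain, r):
    """Two-point correlation C(r) = <a_i a_{i+r}> - <a>^2."""
    n = len(chain) - r
    mean = sum(chain) / len(chain)
    return sum(chain[i] * chain[i + r] for i in range(n)) / n - mean ** 2

chain = additive_markov_chain(50000, N=10, mu=0.3)
print(correlation(chain, 1), correlation(chain, 5))  # positive, slowly decaying
```

    Estimating C(r) on such a generated chain (and inverting it for the memory function) is exactly the kind of reconstruction procedure the abstract tests on literary texts.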

  11. A simple model for calculating air pollution within street canyons

    Science.gov (United States)

    Venegas, Laura E.; Mazzeo, Nicolás A.; Dezzutti, Mariana C.

    2014-04-01

    This paper introduces the Semi-Empirical Urban Street (SEUS) model. SEUS is a simple mathematical model based on the scaling of air pollution concentration inside street canyons, employing the emission rate, the width of the canyon, the dispersive velocity scale, and the background concentration. The dispersive velocity scale depends on turbulent motions related to wind and traffic. The parameterisations of these turbulent motions include two dimensionless empirical parameters. Functional forms of these parameters have been obtained from full-scale data measured in street canyons in four European cities. The sensitivity of the SEUS model is studied analytically. Results show that relative errors in the evaluation of the two dimensionless empirical parameters have less influence on model uncertainties than uncertainties in other input variables. The model estimates NO2 concentrations using a simple photochemistry scheme. SEUS is applied to estimate NOx and NO2 hourly concentrations in an irregular and busy street canyon in the city of Buenos Aires. The statistical evaluation of the results shows good agreement between estimated and observed hourly concentrations (e.g. fractional biases are -10.3% for NOx and +7.8% for NO2). The agreement between the estimated and observed values has also been analysed in terms of its dependence on wind speed and direction. The model performs better for wind speeds >2 m s-1 than for lower wind speeds, and for leeward situations than for others. No significant discrepancies have been found between the results of the proposed model and those of a widely used operational dispersion model (OSPM), both using the same input information.
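    The scaling relation described (concentration proportional to emission rate over canyon width times a dispersive velocity scale, plus background) can be sketched as follows. The quadrature combination of wind- and traffic-induced components and the coefficients a, b are illustrative assumptions, not the fitted SEUS parameters.

```python
import math

def street_canyon_concentration(Q, W, U, N_veh, C_b=0.0, a=0.1, b=0.05):
    """Sketch of the SEUS-type scaling: C = Q / (W * u_d) + C_b.

    Q      emission rate per unit street length
    W      canyon width [m]
    U      above-roof wind speed [m/s]
    N_veh  traffic intensity (proxy for traffic-produced turbulence)
    """
    u_wind = a * U                          # wind-induced velocity scale
    u_traffic = b * math.sqrt(N_veh)        # assumed traffic contribution
    u_d = math.hypot(u_wind, u_traffic)     # dispersive velocity scale
    return Q / (W * u_d) + C_b

# Concentration falls as the roof-level wind speed rises:
print(street_canyon_concentration(Q=0.02, W=20.0, U=1.0, N_veh=100))
print(street_canyon_concentration(Q=0.02, W=20.0, U=5.0, N_veh=100))
```

    The quadrature form captures the qualitative behaviour noted in the abstract: at low wind speeds the traffic term dominates u_d, which is where such models are hardest to get right.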

  12. The Structured Intuitive Model for Product Line Economics (SIMPLE)

    National Research Council Canada - National Science Library

    Clements, Paul C; McGregor, John D; Cohen, Sholom G

    2005-01-01

    .... This report presents the Structured Intuitive Model of Product Line Economics (SIMPLE), a general-purpose business model that supports the estimation of the costs and benefits in a product line development organization...

  13. Estimation of a simple agent-based model of financial markets: An application to Australian stock and foreign exchange data

    Science.gov (United States)

    Alfarano, Simone; Lux, Thomas; Wagner, Friedrich

    2006-10-01

    Following Alfarano et al. [Estimation of agent-based models: the case of an asymmetric herding model, Comput. Econ. 26 (2005) 19-49; Excess volatility and herding in an artificial financial market: analytical approach and estimation, in: W. Franz, H. Ramser, M. Stadler (Eds.), Funktionsfähigkeit und Stabilität von Finanzmärkten, Mohr Siebeck, Tübingen, 2005, pp. 241-254], we consider a simple agent-based model of a highly stylized financial market. The model takes Kirman's ant process [A. Kirman, Epidemics of opinion and speculative bubbles in financial markets, in: M.P. Taylor (Ed.), Money and Financial Markets, Blackwell, Cambridge, 1991, pp. 354-368; A. Kirman, Ants, rationality, and recruitment, Q. J. Econ. 108 (1993) 137-156] of mimetic contagion as its starting point, but allows for asymmetry in the attractiveness of both groups. Embedding the contagion process into a standard asset-pricing framework, and identifying the abstract groups of the herding model as chartists and fundamentalist traders, a market with periodic bubbles and bursts is obtained. Taking stock of the availability of a closed-form solution for the stationary distribution of returns for this model, we can estimate its parameters via maximum likelihood. Expanding our earlier work, this paper presents pertinent estimates for the Australian dollar/US dollar exchange rate and the Australian stock market index. As it turns out, our model indicates dominance of fundamentalist behavior in both the stock and foreign exchange market.
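    The Kirman ant process at the heart of this model is easy to simulate: each agent switches camp with an autonomous probability plus a herding term proportional to the size of the other group, and asymmetry enters through unequal autonomous rates. The parameter values below are illustrative, not the estimated ones.

```python
import random

def kirman_herding(n_agents=100, steps=20000, eps1=0.002, eps2=0.004,
                   delta=0.01, seed=7):
    """Sketch of Kirman's ant process with asymmetric spontaneous switching.

    Agents are 'chartists' or 'fundamentalists'. At each step a randomly
    chosen agent switches with probability (autonomous term) +
    (herding term proportional to the other group's size). Returns the
    time series of the chartist fraction.
    """
    rng = random.Random(seed)
    n_chartists = n_agents // 2
    path = []
    for _ in range(steps):
        if rng.random() < n_chartists / n_agents:
            # a chartist considers switching to the fundamentalist camp
            p = eps2 + delta * (n_agents - n_chartists) / (n_agents - 1)
            if rng.random() < p:
                n_chartists -= 1
        else:
            p = eps1 + delta * n_chartists / (n_agents - 1)
            if rng.random() < p:
                n_chartists += 1
        path.append(n_chartists / n_agents)
    return path

path = kirman_herding()
print(min(path), max(path))  # the fraction wanders between the two camps
```

    Feeding this fraction into an asset-pricing rule (fundamentalists anchoring price, chartists amplifying trends) reproduces the bubble-and-burst dynamics the abstract describes.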

  14. Analyzing C2 Structures and Self-Synchronization with Simple Computational Models

    Science.gov (United States)

    2011-06-01

    16th ICCRTS, "Collective C2 in Multinational Civil-Military Operations": Analyzing C2 Structures and Self-Synchronization with Simple Computational Models. The Kuramoto Model, though with some serious limitations, provides a representation of information flow and self-synchronization in
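    The Kuramoto model named in this record is a standard benchmark for self-synchronization. A minimal all-to-all sketch (network structure and parameter values are illustrative):

```python
import cmath
import math
import random

def kuramoto_order(n=50, coupling=4.0, dt=0.05, steps=2000, seed=3):
    """Minimal Kuramoto model: dtheta_i/dt = omega_i + K*r*sin(psi - theta_i),
    the mean-field form of (K/n) * sum_j sin(theta_j - theta_i).

    Returns the final order parameter r = |<exp(i*theta)>|, near 0 for
    incoherent phases and approaching 1 when the oscillators synchronize.
    """
    rng = random.Random(seed)
    theta = [rng.uniform(0.0, 2.0 * math.pi) for _ in range(n)]
    omega = [rng.gauss(0.0, 0.5) for _ in range(n)]  # natural frequencies
    for _ in range(steps):
        z = sum(cmath.exp(1j * t) for t in theta) / n  # r * exp(i*psi)
        r, psi = abs(z), cmath.phase(z)
        theta = [t + dt * (w + coupling * r * math.sin(psi - t))
                 for t, w in zip(theta, omega)]
    return abs(sum(cmath.exp(1j * t) for t in theta) / n)

print(kuramoto_order(coupling=0.0))   # stays incoherent
print(kuramoto_order(coupling=4.0))   # self-synchronizes
```

    In a C2 reading, the phases stand for the information states of nodes and the coupling for information flow; the order parameter then measures the degree of self-synchronization across the organization.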

  15. A heat transfer correlation based on a surface renewal model for molten core concrete interaction study

    International Nuclear Information System (INIS)

    Tourniaire, B. (E-mail: bruno.tourniaire@cea.fr)

    2006-01-01

    The prediction of heat transfer between the corium pool and the concrete basemat is of particular significance in the study of PWR severe accidents. Heat transfer directly governs the ablation velocity of the concrete during molten core concrete interaction (MCCI) and, consequently, the time at which the reactor cavity may fail. From a strictly hydrodynamic point of view, this issue amounts to heat transfer between a heated bubbling pool and a porous wall with gas injection. Several experimental studies have been performed with simulant materials, and many correlations have been proposed to address this issue. Comparisons of these correlations with the measurements, and their extrapolation to reactor materials, show strong discrepancies between the models, which probably means that some phenomena are not well taken into account. The main purpose of this paper is to present an alternative heat transfer model originally developed by Deckwer for chemical engineering applications (bubble columns). Part of this work is devoted to the presentation of this model, which is based on a surface renewal assumption. Comparisons of the results of this model with available experimental data in different systems are presented and discussed. These comparisons clearly show that the model can be used to treat the particular problem of MCCI. The analyses also lead to enriching the original model by taking into account the thermal resistance of the wall; a new formulation of Deckwer's correlation is finally proposed.
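    For orientation, Deckwer's surface-renewal result for bubble columns is usually quoted as St = 0.1·(Re·Fr·Pr²)^(−1/4), where Re·Fr = u³/(ν·g) so the length scale cancels. The sketch below uses that commonly quoted form; the constant and exponent should be checked against the original paper, and this is not the wall-resistance-enriched formulation the abstract proposes.

```python
def deckwer_htc(u_g, rho, cp, mu, k, g=9.81):
    """Surface-renewal heat transfer coefficient for a bubbling pool (sketch).

    u_g  superficial gas velocity [m/s]
    rho, cp, mu, k  density, heat capacity, viscosity, conductivity of the pool
    Returns h [W m-2 K-1] from St = 0.1 * (Re*Fr*Pr**2) ** -0.25.
    """
    nu = mu / rho                    # kinematic viscosity [m2/s]
    pr = mu * cp / k                 # Prandtl number
    st = 0.1 * (u_g ** 3 / (nu * g) * pr ** 2) ** -0.25
    return st * rho * cp * u_g       # h = St * rho * cp * u_g

# Water-like pool with 5 cm/s superficial gas velocity:
print(deckwer_htc(u_g=0.05, rho=1000.0, cp=4180.0, mu=1e-3, k=0.6))
```

    The weak −1/4 exponent is the signature of the surface-renewal picture: the wall contact time, set by turbulence driven by the rising gas, varies slowly with the gas velocity.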

  16. Energy economy in the actomyosin interaction: lessons from simple models.

    Science.gov (United States)

    Lehman, Steven L

    2010-01-01

    The energy economy of the actomyosin interaction in skeletal muscle is both scientifically fascinating and practically important. This chapter demonstrates how simple cross-bridge models have guided research regarding the energy economy of skeletal muscle. Parameter variation on a very simple two-state strain-dependent model shows that early events in the actomyosin interaction strongly influence energy efficiency, and late events determine maximum shortening velocity. Addition of a weakly-bound state preceding force production allows weak coupling of cross-bridge mechanics and ATP turnover, so that a simple three-state model can simulate the velocity-dependence of ATP turnover. Consideration of the limitations of this model leads to a review of recent evidence regarding the relationship between ligand binding states, conformational states, and macromolecular structures of myosin cross-bridges. Investigation of the fine structure of the actomyosin interaction during the working stroke continues to inform fundamental research regarding the energy economy of striated muscle.

  17. A simple non-linear model of immune response

    International Nuclear Information System (INIS)

    Gutnikov, Sergei; Melnikov, Yuri

    2003-01-01

    It is still unknown why the adaptive immune response in the natural immune system, based on clonal proliferation of lymphocytes, requires the interaction of at least two different cell types with the same antigen. We present a simple mathematical model illustrating that a system with separate cell types for antigen recognition and pathogen destruction provides more robust adaptive immunity than a system in which a single cell type is responsible for both recognition and destruction. The model is deliberately over-simplified, as we did not intend to describe the natural immune system. However, it provides a tool for testing the proposed approach through qualitative analysis of immune system dynamics, in order to construct more sophisticated models of the immune systems that exist in living nature. It also opens the possibility of exploring specific features of highly non-linear dynamics in nature-inspired computational paradigms such as artificial immune systems and immunocomputing. We expect this paper to be of interest not only to mathematicians but also to biologists; we have therefore made an effort to explain the mathematics in sufficient detail for readers without a professional mathematical background.

  18. Simple classical model for Fano statistics in radiation detectors

    Energy Technology Data Exchange (ETDEWEB)

    Jordan, David V. [Pacific Northwest National Laboratory, National Security Division - Radiological and Chemical Sciences Group PO Box 999, Richland, WA 99352 (United States)], E-mail: David.Jordan@pnl.gov; Renholds, Andrea S.; Jaffe, John E.; Anderson, Kevin K.; Rene Corrales, L.; Peurrung, Anthony J. [Pacific Northwest National Laboratory, National Security Division - Radiological and Chemical Sciences Group PO Box 999, Richland, WA 99352 (United States)

    2008-02-01

    A simple classical model that captures the essential statistics of energy partitioning processes involved in the creation of information carriers (ICs) in radiation detectors is presented. The model pictures IC formation from a fixed amount of deposited energy in terms of the statistically analogous process of successively sampling water from a large, finite-volume container ('bathtub') with a small dipping implement ('shot or whiskey glass'). The model exhibits sub-Poisson variance in the distribution of the number of ICs generated (the 'Fano effect'). Elementary statistical analysis of the model clarifies the role of energy conservation in producing the Fano effect and yields Fano's prescription for computing the relative variance of the IC number distribution in terms of the mean and variance of the underlying, single-IC energy distribution. The partitioning model is applied to the development of the impact ionization cascade in semiconductor radiation detectors. It is shown that, in tandem with simple assumptions regarding the distribution of energies required to create an (electron, hole) pair, the model yields an energy-independent Fano factor of 0.083, in accord with the lower end of the range of literature values reported for silicon and high-purity germanium. The utility of this simple picture as a diagnostic tool for guiding or constraining more detailed, 'microscopic' physical models of detector material response to ionizing radiation is discussed.
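    The 'bathtub and glass' picture lends itself to a short Monte Carlo check: partition a fixed deposited energy by repeatedly drawing single-carrier energy costs until the budget is exhausted, and measure the Fano factor F = var(n)/mean(n). The uniform single-IC energy distribution below is an illustrative choice, not the paper's semiconductor cascade model.

```python
import random

def fano_factor(e_total=3000.0, trials=4000, seed=5):
    """Monte Carlo version of the energy-partitioning ('bathtub') model.

    A fixed deposited energy e_total is split into information carriers
    (ICs) by sampling each IC's energy cost from a uniform distribution on
    [2, 4] until the energy runs out; energy conservation truncates the
    count, producing sub-Poisson statistics.
    """
    rng = random.Random(seed)
    counts = []
    for _ in range(trials):
        remaining, n = e_total, 0
        while True:
            cost = rng.uniform(2.0, 4.0)   # energy to create one carrier
            if cost > remaining:
                break
            remaining -= cost
            n += 1
        counts.append(n)
    mean = sum(counts) / trials
    var = sum((c - mean) ** 2 for c in counts) / trials
    return var / mean

print(fano_factor())   # well below the Poisson value of 1
```

    For a renewal process like this, F approaches var(e)/<e>² of the single-IC energy distribution (here about 0.33/9 ≈ 0.04), which is exactly Fano's prescription mentioned in the abstract.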

  19. A simple analytical model for dynamics of time-varying target leverage ratios

    Science.gov (United States)

    Lo, C. F.; Hui, C. H.

    2012-03-01

    In this paper we have formulated a simple theoretical model for the dynamics of the time-varying target leverage ratio of a firm under some assumptions based upon empirical observations. In our theoretical model the time evolution of the target leverage ratio of a firm can be derived self-consistently from a set of coupled Ito's stochastic differential equations governing the leverage ratios of an ensemble of firms by the nonlinear Fokker-Planck equation approach. The theoretically derived time paths of the target leverage ratio bear great resemblance to those used in the time-dependent stationary-leverage (TDSL) model [Hui et al., Int. Rev. Financ. Analy. 15, 220 (2006)]. Thus, our simple model is able to provide a theoretical foundation for the selected time paths of the target leverage ratio in the TDSL model. We also examine how the pace of the adjustment of a firm's target ratio, the volatility of the leverage ratio and the current leverage ratio affect the dynamics of the time-varying target leverage ratio. Hence, with the proposed dynamics of the time-dependent target leverage ratio, the TDSL model can be readily applied to generate the default probabilities of individual firms and to assess the default risk of the firms.
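    A leverage path relaxing toward a time-varying target can be sketched with Euler-Maruyama. Note that in the paper the target path emerges self-consistently from coupled SDEs via a nonlinear Fokker-Planck equation; the exogenous sinusoidal target below is an assumption standing in for it, so the sketch only illustrates how the adjustment pace (kappa) and volatility (sigma) shape the dynamics.

```python
import math
import random

def leverage_path(l0=0.2, kappa=0.5, sigma=0.05, dt=1.0 / 252,
                  years=10, seed=11):
    """Euler-Maruyama sketch of dL = kappa * (theta(t) - L) dt + sigma dW,
    a firm's leverage ratio L relaxing toward a time-varying target theta.
    """
    rng = random.Random(seed)

    def theta(t):
        # assumed exogenous target path (illustrative only)
        return 0.4 + 0.1 * math.sin(2.0 * math.pi * t / 5.0)

    steps = int(years / dt)
    l, t, path = l0, 0.0, []
    for _ in range(steps):
        l += kappa * (theta(t) - l) * dt + sigma * math.sqrt(dt) * rng.gauss(0, 1)
        t += dt
        path.append(l)
    return path

path = leverage_path()
print(path[-1])
```

    Larger kappa makes the path hug the target; larger sigma widens the band around it, which is the trade-off the abstract examines for default-risk assessment.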

  20. A simple and accurate rule-based modeling framework for simulation of autocrine/paracrine stimulation of glioblastoma cell motility and proliferation by L1CAM in 2-D culture.

    Science.gov (United States)

    Caccavale, Justin; Fiumara, David; Stapf, Michael; Sweitzer, Liedeke; Anderson, Hannah J; Gorky, Jonathan; Dhurjati, Prasad; Galileo, Deni S

    2017-12-11

    Glioblastoma multiforme (GBM) is a devastating brain cancer for which there is no known cure. Its malignancy is due to rapid cell division along with high motility and invasiveness of cells into the brain tissue. Simple 2-dimensional laboratory assays (e.g., a scratch assay) commonly are used to measure the effects of various experimental perturbations, such as treatment with chemical inhibitors. Several mathematical models have been developed to aid the understanding of the motile behavior and proliferation of GBM cells. However, many are mathematically complicated, look at multiple interdependent phenomena, and/or use modeling software not freely available to the research community. These attributes make the adoption of models and simulations of even simple 2-dimensional cell behavior an uncommon practice by cancer cell biologists. Herein, we developed an accurate, yet simple, rule-based modeling framework to describe the in vitro behavior of GBM cells that are stimulated by the L1CAM protein using freely available NetLogo software. In our model L1CAM is released by cells to act through two cell surface receptors and a point of signaling convergence to increase cell motility and proliferation. A simple graphical interface is provided so that changes can be made easily to several parameters controlling cell behavior, and behavior of the cells is viewed both pictorially and with dedicated graphs. We fully describe the hierarchical rule-based modeling framework, show simulation results under several settings, describe the accuracy compared to experimental data, and discuss the potential usefulness for predicting future experimental outcomes and for use as a teaching tool for cell biology students. It is concluded that this simple modeling framework and its simulations accurately reflect much of the GBM cell motility behavior observed experimentally in vitro in the laboratory. 
Our framework can be modified easily to suit the needs of investigators interested in other

  1. Design, implementation and verification of software code for radiation dose assessment based on simple generic environmental model

    International Nuclear Information System (INIS)

    I Putu Susila; Arif Yuniarto

    2017-01-01

    Radiation dose assessment to determine the potential radiological impacts of the various installations within a nuclear facility complex is necessary to ensure environmental and public safety. A simple generic model-based method for calculating radiation doses caused by the release of radioactive substances into the environment has been published by the International Atomic Energy Agency (IAEA) as Safety Report Series No. 19 (SRS-19). To assist the application of the assessment method, and as a basis for the development of more complex assessment methods, an open-source software code has been designed and implemented. The software comes with maps and is very easy to use, because assessment scenarios can be set up through diagrams. Software verification was performed by comparing its results to SRS-19 and to the calculation results of the CROM software. Doses estimated by SRS-19 are higher than those of the developed software; this is still acceptable, since dose estimation in SRS-19 is based on a conservative approach. Compared to the CROM software, identical results were obtained for three scenarios, and a non-significant difference of 2.25% in another scenario. These results indicate the correctness of our implementation and imply that the developed software is ready for use in real scenarios. In the future, various features and new models need to be added to improve the capability of the software. (author)

  2. Reveal quantum correlation in complementary bases

    OpenAIRE

    Shengjun Wu; Zhihao Ma; Zhihua Chen; Sixia Yu

    2014-01-01

    An essential feature of genuine quantum correlation is the simultaneous existence of correlation in complementary bases. We reveal this feature of quantum correlation by defining measures based on invariance under a basis change. For a bipartite quantum state, the classical correlation is the maximal correlation present in a certain optimum basis, while the quantum correlation is characterized as a series of residual correlations in the mutually unbiased bases. Compared with other approaches ...

  3. A simple polarized-based diffused reflectance colour imaging system

    African Journals Online (AJOL)

    A simple polarized-based diffuse reflectance imaging system has been developed. The system is designed for both in vivo and in vitro imaging of agricultural specimen in the visible region. The system uses a commercial web camera and a halogen lamp that makes it relatively simple and less expensive for diagnostic ...

  4. Zebrafish as a correlative and predictive model for assessing biomaterial nanotoxicity.

    Science.gov (United States)

    Fako, Valerie E; Furgeson, Darin Y

    2009-06-21

    The lack of correlative and predictive models to assess acute and chronic toxicities limits the rapid pre-clinical development of new therapeutics. This barrier is due in part to the exponential growth of nanotechnology and nanotherapeutics, coupled with the lack of rigorous and robust screening assays and putative standards. It is a fairly simple and cost-effective process to initially screen the toxicity of a nanomaterial by using in vitro cell cultures; unfortunately, it is nearly impossible to imitate a complementary in vivo system. Small mammalian models are the most common method used to assess possible toxicities and the biodistribution of nanomaterials in humans. Alternatively, Danio rerio, commonly known as zebrafish, is proving to be a quick, cheap, and facile model for conservatively assessing the toxicity of nanomaterials.

  5. Modelling conditional correlations of asset returns: A smooth transition approach

    DEFF Research Database (Denmark)

    Silvennoinen, Annastiina; Teräsvirta, Timo

    In this paper we propose a new multivariate GARCH model with a time-varying conditional correlation structure. The time-varying conditional correlations change smoothly between two extreme states of constant correlations according to a predetermined or exogenous transition variable. An LM-test is derived to test the constancy of correlations, and LM- and Wald tests to test the hypothesis of partially constant correlations. Analytical expressions for the test statistics and the required derivatives are provided to make computations feasible. An empirical example based on daily return series of five ...

  6. A Spalart-Allmaras local correlation-based transition model for Thermo-fluid dynamics

    Science.gov (United States)

    D'Alessandro, V.; Garbuglia, F.; Montelpare, S.; Zoppi, A.

    2017-11-01

    The study of innovative energy systems often involves complex fluid flow problems, and Computational Fluid Dynamics (CFD) is one of the main tools of analysis. It is important to emphasize that in several energy systems the flow field experiences laminar-to-turbulent transition. Direct Numerical Simulation (DNS) and Large Eddy Simulation (LES) are able to predict flow transition, but they are still inapplicable to the study of real problems due to their significant computational resource requirements. Standard Reynolds Averaged Navier-Stokes (RANS) approaches, on the other hand, are not always reliable, since they assume a fully turbulent regime. To overcome this drawback, some locally formulated transition RANS models have been developed in recent years. In this work, we present a local correlation-based transition approach that adds two equations controlling the laminar-to-turbulent transition process (the intermittency γ and the transition momentum-thickness Reynolds number Re_θt) to the well-known Spalart-Allmaras (SA) turbulence model. The new model was implemented within the OpenFOAM code. The energy equation is also implemented, in order to evaluate the model's performance in thermo-fluid dynamics applications. In all the considered cases a very good agreement between numerical and experimental data was observed.

  7. Ab initio optimization principle for the ground states of translationally invariant strongly correlated quantum lattice models.

    Science.gov (United States)

    Ran, Shi-Ju

    2016-05-01

    In this work, a simple and fundamental numerical scheme dubbed the ab initio optimization principle (AOP) is proposed for the ground states of translationally invariant strongly correlated quantum lattice models. The idea is to transform a nondeterministic-polynomial-hard ground-state simulation with infinite degrees of freedom into a single optimization problem of a local function with a finite number of physical and ancillary degrees of freedom. This work contributes mainly in the following aspects: (1) AOP provides a simple and efficient scheme to simulate the ground state by solving a local optimization problem. Its solution contains two kinds of boundary states, one of which plays the role of the entanglement bath that mimics the interactions between a supercell and the infinite environment, while the other gives the ground state in tensor network (TN) form. (2) In the TN setting, a novel decomposition named tensor ring decomposition (TRD) is proposed to implement AOP. Instead of following the contraction-truncation scheme used by many existing TN-based algorithms, TRD solves the contraction of a uniform TN in the opposite way, by encoding the contraction in a set of self-consistent equations that automatically reconstruct the whole TN, making the simulation simple and unified. (3) AOP inherits and develops the ideas of several well-established methods, including the density matrix renormalization group (DMRG), infinite time-evolving block decimation (iTEBD), network contractor dynamics, and density matrix embedding theory, providing a unified perspective previously missing in this field. (4) AOP and TRD also carry implications for existing TN-based algorithms: a modified iTEBD is suggested, and the two-dimensional (2D) AOP is argued to be an intrinsic 2D extension of DMRG based on infinite projected entangled pair states. This paper focuses on one-dimensional quantum models to present AOP. The benchmark is given on a transverse Ising

  8. Two simple models of classical heat pumps.

    Science.gov (United States)

    Marathe, Rahul; Jayannavar, A M; Dhar, Abhishek

    2007-03-01

    Motivated by recent studies of models of particle and heat quantum pumps, we study similar simple classical models and examine the possibility of heat pumping. Unlike many of the usual ratchet models of molecular engines, the models we study do not have particle transport. We consider a two-spin system and a coupled oscillator system which exchange heat with multiple heat reservoirs and which are acted upon by periodic forces. The simplicity of our models allows accurate numerical and exact solutions and unambiguous interpretation of results. We demonstrate that while both our models seem to be built on similar principles, one is able to function as a heat pump (or engine) while the other is not.

  9. Stylized facts in social networks: Community-based static modeling

    Science.gov (United States)

    Jo, Hang-Hyun; Murase, Yohsuke; Török, János; Kertész, János; Kaski, Kimmo

    2018-06-01

    Past analyses of social network datasets have enabled a number of empirical findings about human society, commonly referred to as stylized facts of social networks: broad distributions of network quantities, the existence of communities, assortative mixing, and intensity-topology correlations. Since our understanding of the structure of these complex social networks is far from complete, deeper insight into human society requires more comprehensive datasets and modeling of the stylized facts. Although existing dynamical and static models can generate some stylized facts, here we take an alternative approach by devising a community-based static model with heterogeneous community sizes, in which larger communities have smaller link density and weight. With these few assumptions we are able to generate realistic social networks that show most stylized facts for a wide range of parameters, as demonstrated numerically and analytically. Since our community-based static model is simple to implement and easily scalable, it can be used as a reference system, benchmark, or testbed for further applications.
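    The core construction (broadly distributed community sizes, with larger communities linked more sparsely) can be sketched in a few lines. The size distribution, the density function p(s) = min(1, c/s), and all constants are illustrative assumptions; overlapping memberships and link weights from the actual model are omitted.

```python
import random

def community_network(n_communities=200, s_max=50, alpha=1.0, c=4.0, seed=2):
    """Sketch of a community-based static network model.

    Community sizes s are drawn from a power-law-like distribution
    (P(s) ~ s**-2 via rejection sampling), and each pair inside a
    community is linked independently with probability
    p(s) = min(1, c / s**alpha), so larger communities are sparser.
    Returns (number_of_nodes, set_of_edges).
    """
    rng = random.Random(seed)
    sizes = []
    while len(sizes) < n_communities:
        s = rng.randint(2, s_max)
        if rng.random() < (2.0 / s) ** 2:      # crude power-law size sampling
            sizes.append(s)
    edges, node = set(), 0
    for s in sizes:
        members = range(node, node + s)
        p = min(1.0, c / s ** alpha)           # sparser when s is large
        for i in members:
            for j in members:
                if i < j and rng.random() < p:
                    edges.add((i, j))
        node += s
    return node, edges

n_nodes, edges = community_network()
print(n_nodes, len(edges), 2 * len(edges) / n_nodes)
```

    Even this stripped-down version reproduces two of the listed stylized facts (community structure and a broad degree distribution); the full model adds weights and membership overlap to obtain the rest.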

  10. Simple models for studying complex spatiotemporal patterns of animal behavior

    Science.gov (United States)

    Tyutyunov, Yuri V.; Titova, Lyudmila I.

    2017-06-01

    Minimal mathematical models able to explain complex patterns of animal behavior are essential parts of simulation systems describing large-scale spatiotemporal dynamics of trophic communities, particularly those with wide-ranging species such as occur in pelagic environments. We present results obtained with three different modelling approaches: (i) an individual-based model of animal spatial behavior; (ii) a continuous taxis-diffusion-reaction system of partial differential equations; (iii) a 'hybrid' approach combining the individual-based algorithm of organism movements with an explicit description of the decay and diffusion of the movement stimuli. Though the models are based on extremely simple rules, they all allow description of spatial movements of animals in a predator-prey system within a closed habitat, reproducing some typical patterns of the pursuit-evasion behavior observed in natural populations. In all three models, at each spatial position the animal movements are determined by local conditions only, so the pattern of collective behavior emerges through self-organization. The movement velocities of animals are proportional to the density gradients of specific cues emitted by individuals of the antagonistic species (pheromones, exometabolites, or mechanical waves of the medium, e.g. sound). These cues play the role of taxis stimuli: prey attract predators, while predators repel prey. Depending on the nature and properties of the movement stimulus, we propose using either a simplified individual-based model, a continuous taxis pursuit-evasion system, or a somewhat more detailed 'hybrid' approach that combines simulation of the individual movements with a continuous model describing the diffusion and decay of the stimuli explicitly. These can be used to improve movement models for many species, including large marine predators.

  11. A NEW SIMPLE DYNAMO MODEL FOR STELLAR ACTIVITY CYCLE

    Energy Technology Data Exchange (ETDEWEB)

    Yokoi, N.; Hamba, F. [Institute of Industrial Science, University of Tokyo, Tokyo 153-8505 (Japan); Schmitt, D. [Max-Planck Institut für Sonnensystemforschung, Göttingen D-37077 (Germany); Pipin, V., E-mail: nobyokoi@iis.u-tokyo.ac.jp [Institute of Solar–Terrestrial Physics, Russian Academy of Science, Irkutsk 664033 (Russian Federation)

    2016-06-20

A new simple dynamo model for the stellar activity cycle is proposed. By considering an inhomogeneous flow effect on turbulence, it is shown that turbulent cross helicity (velocity–magnetic-field correlation) enters the expression of the turbulent electromotive force as the coupling coefficient for the mean absolute vorticity. This makes the present model different from the current α–Ω-type models in two main ways. First, in addition to the usual helicity (α) and turbulent magnetic diffusivity (β) effects, we consider the cross-helicity effect as a key ingredient of the dynamo process. Second, the spatiotemporal evolution of cross helicity is solved simultaneously with the mean magnetic fields. The basic scenario is as follows. In the presence of turbulent cross helicity, the toroidal field is induced by the toroidal rotation. Then, as in usual models, the α effect generates the poloidal field from the toroidal one. This induced poloidal field produces a turbulent cross helicity whose sign is opposite to the original one (negative production). With this cross helicity of the reversed sign, a reversal in field configuration starts. Eigenvalue analyses of the simplest possible model give a butterfly diagram, which confirms the above scenario, the equatorward migration, and the phase relationship between the cross helicity and magnetic fields. These results suggest that the oscillation of the turbulent cross helicity is a key to the activity cycle. The reversal of the cross helicity is not the result of the magnetic-field reversal, but the cause of the latter. This new model is expected to open up possibilities for mean-field or turbulence-closure dynamo approaches.

  12. Overall feature of EAST operation space by using simple Core-SOL-Divertor model

    International Nuclear Information System (INIS)

    Hiwatari, R.; Hatayama, A.; Zhu, S.; Takizuka, T.; Tomita, Y.

    2005-01-01

    We have developed a simple Core-SOL-Divertor (C-S-D) model to investigate qualitatively the overall features of the operational space for the integrated core and edge plasma. To construct the simple C-S-D model, a simple core plasma model of ITER physics guidelines and a two-point SOL-divertor model are used. The simple C-S-D model is applied to the study of the EAST operational space with lower hybrid current drive experiments under various kinds of trade-off for the basic plasma parameters. Effective methods for extending the operation space are also presented. As shown by this study for the EAST operation space, it is evident that the C-S-D model is a useful tool to understand qualitatively the overall features of the plasma operation space. (author)

  13. Study of nuclear medium effects on the effective interaction based on the one-boson exchange model

    International Nuclear Information System (INIS)

    Nakayama, K.

    1985-02-01

    In this work, starting from a realistic nucleon-nucleon interaction based on the one-boson exchange model for the nuclear force, we attempted a microscopic derivation of the effective interaction which may be appropriate for nuclear structure as well as for nucleon-nucleus scattering problems. Short-range correlations and medium polarization as well as relativistic effects on both particle-hole and Δ-hole interactions have been investigated. For the nucleon-nucleon case short-range correlations are basically restricted to S-states and affect mainly the central components of the effective interaction. In contrast, the Δ-nucleon interaction is essentially unaffected by short-range correlations due to the Pauli principle restrictions and the momentum mismatch between the central components of the correlation operator and the tensor component of the bare transition potential. Based on these analyses it is shown that short-range correlation effects can be summarized in a very simple correlation operator. (orig./HSI)

  14. Simple deterministic model of the hydraulic buffer effect in septic tanks

    OpenAIRE

    Forquet, N.; Dufresne, M.

    2015-01-01

Septic tanks are widely used in on-site wastewater treatment systems. In addition to anaerobic pre-treatment, hydraulic buffering is one of the roles attributed to septic tanks. However, there is still no tool for assessing it, especially under dynamic conditions. For gravity-fed systems, such a tool could help both researchers and system designers. This technical note reports a simple mechanistic model based on the assumption of flow transition between the septic tank and the outflow pipe. The only parame...

  15. Limitations of correlation-based redatuming methods

    Science.gov (United States)

    Barrera P, D. F.; Schleicher, J.; van der Neut, J.

    2017-12-01

    Redatuming aims to correct seismic data for the consequences of an acquisition far from the target. That includes the effects of an irregular acquisition surface and of complex geological structures in the overburden such as strong lateral heterogeneities or layers with low or very high velocity. Interferometric techniques can be used to relocate sources to positions where only receivers are available and have been used to move acquisition geometries to the ocean bottom or transform data between surface-seismic and vertical seismic profiles. Even if no receivers are available at the new datum, the acquisition system can be relocated to any datum in the subsurface to which the propagation of waves can be modeled with sufficient accuracy. By correlating the modeled wavefield with seismic surface data, one can carry the seismic acquisition geometry from the surface closer to geologic horizons of interest. Specifically, we show the derivation and approximation of the one-sided seismic interferometry equation for surface-data redatuming, conveniently using Green’s theorem for the Helmholtz equation with density variation. Our numerical examples demonstrate that correlation-based single-boundary redatuming works perfectly in a homogeneous overburden. If the overburden is inhomogeneous, primary reflections from deeper interfaces are still repositioned with satisfactory accuracy. However, in this case artifacts are generated as a consequence of incorrectly redatumed overburden multiples. These artifacts get even worse if the complete wavefield is used instead of the direct wavefield. Therefore, we conclude that correlation-based interferometric redatuming of surface-seismic data should always be applied using direct waves only, which can be approximated with sufficient quality if a smooth velocity model for the overburden is available.
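The correlation principle behind interferometric redatuming can be illustrated with a toy 1D example (synthetic traces and hypothetical travel times, not the authors' data): two receivers record the same impulsive source with travel times t1 and t2, and cross-correlating the two traces yields a peak at lag t2 - t1, i.e. the travel time between the receiver positions, as if one receiver had acted as a source at the new datum.

```python
import numpy as np

n, t1, t2 = 400, 60, 110
wavelet = np.exp(-0.5 * (np.arange(-25, 26) / 5.0) ** 2)  # smooth source pulse

# each trace is the same wavelet arriving at a different travel time
trace1 = np.zeros(n); trace1[t1] = 1.0
trace2 = np.zeros(n); trace2[t2] = 1.0
trace1 = np.convolve(trace1, wavelet, mode="same")
trace2 = np.convolve(trace2, wavelet, mode="same")

# cross-correlation: the peak lag is the inter-receiver travel time
xcorr = np.correlate(trace2, trace1, mode="full")  # lags -(n-1)..(n-1)
lag = int(np.argmax(xcorr)) - (n - 1)
print(lag)  # 50 == t2 - t1
```

In the single-boundary redatuming the abstract describes, the modeled direct wavefield plays the role of `trace1`; correlating it with the recorded surface data repositions the source, and multiples in the correlated field are what generate the artifacts discussed above.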

  16. Simple model systems: a challenge for Alzheimer's disease

    Directory of Open Access Journals (Sweden)

    Di Carlo Marta

    2012-04-01

The success of biomedical research has led to improvement in human health and increased life expectancy. An unexpected consequence has been an increase in age-related diseases and, in particular, neurodegenerative diseases. These disorders are generally late-onset and exhibit complex pathologies including memory loss, cognitive defects, movement disorders and death. Here we describe how the use of simple animal models such as worms, fish, flies, ascidians and sea urchins has facilitated the understanding of several biochemical mechanisms underlying Alzheimer's disease (AD), one of the most widespread neurodegenerative pathologies. The discovery of specific genes and proteins associated with AD, and the development of new technologies for the production of transgenic animals, has helped researchers to overcome the lack of natural models. Moreover, simple model systems of AD have been utilized to obtain key information for evaluating potential therapeutic interventions and for testing the efficacy of putative neuroprotective compounds.

  17. HIV Treatment and Prevention: A Simple Model to Determine Optimal Investment.

    Science.gov (United States)

    Juusola, Jessie L; Brandeau, Margaret L

    2016-04-01

To create a simple model to help public health decision makers determine how best to invest limited resources in HIV treatment scale-up and prevention. A linear model was developed for determining the optimal mix of investment in HIV treatment and prevention, given a fixed budget. The model incorporates estimates of secondary health benefits accruing from HIV treatment and prevention and allows for diseconomies of scale in program costs and subadditive benefits from concurrent program implementation. Data sources were the published literature. The target population was individuals infected with HIV or at risk of acquiring it. Illustrative examples of interventions include preexposure prophylaxis (PrEP), community-based education (CBE), and antiretroviral therapy (ART) for men who have sex with men (MSM) in the US. Outcome measures were incremental cost, quality-adjusted life-years gained, and HIV infections averted. Base-case analysis indicated that it is optimal to invest in ART before PrEP and in CBE before scaling up ART. Diseconomies of scale reduced the optimal investment level. Subadditivity of benefits did not affect the optimal allocation at relatively low implementation levels. Sensitivity analysis indicated that investment in ART before PrEP was optimal in all scenarios tested; investment in ART before CBE became optimal when CBE reduced risky behavior by 4% or less. A limitation of the study is that dynamic effects are approximated with a static model. Our model provides a simple yet accurate means of determining optimal investment in HIV prevention and treatment. For MSM in the US, HIV control funds should be prioritized on inexpensive, effective programs like CBE, then on ART scale-up, with only minimal investment in PrEP. © The Author(s) 2015.
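With a linear objective and a single budget constraint (and ignoring the paper's diseconomies of scale and subadditivity), the optimum funds interventions in decreasing order of benefit per dollar, up to each program's maximum scale. A minimal sketch of that allocation idea follows; all cost-effectiveness numbers are hypothetical and are not taken from the paper.

```python
def allocate(budget, programs):
    """Greedy budget allocation for a linear model.

    programs: list of (name, benefit_per_dollar, max_cost) tuples.
    Returns a dict of spending per program."""
    plan = {}
    # fund programs in decreasing order of benefit per dollar
    for name, ratio, max_cost in sorted(programs, key=lambda p: -p[1]):
        spend = min(budget, max_cost)
        plan[name] = spend
        budget -= spend
    return plan

programs = [            # hypothetical cost-effectiveness figures
    ("CBE",  9e-5, 30e6),   # community-based education: cheap, effective
    ("ART",  5e-5, 120e6),  # antiretroviral therapy scale-up
    ("PrEP", 1e-5, 80e6),   # preexposure prophylaxis
]
print(allocate(100e6, programs))
# CBE funded fully, ART gets the remainder, PrEP gets nothing --
# mirroring the paper's qualitative ordering CBE > ART > PrEP.
```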

  18. A temperature dependent simple spice based modeling platform for power IGBT modules

    NARCIS (Netherlands)

    Sfakianakis, G.; Nawaz, M.; Chimento, F.

    2014-01-01

This paper deals with the development of a PSpice-based, temperature-dependent modeling platform for the evaluation of silicon-based IGBT power modules. The developed device modeling platform is intended to be used for the design and assessment of converter valves/cells for potential high-power applications.

  19. Development and Evaluation of a Simple, Multifactorial Model Based on Landing Performance to Indicate Injury Risk in Surfing Athletes.

    Science.gov (United States)

    Lundgren, Lina E; Tran, Tai T; Nimphius, Sophia; Raymond, Ellen; Secomb, Josh L; Farley, Oliver R L; Newton, Robert U; Steele, Julie R; Sheppard, Jeremy M

    2015-11-01

    To develop and evaluate a multifactorial model based on landing performance to estimate injury risk for surfing athletes. Five measures were collected from 78 competitive surfing athletes and used to create a model to serve as a screening tool for landing tasks and potential injury risk. In the second part of the study, the model was evaluated using junior surfing athletes (n = 32) with a longitudinal follow-up of their injuries over 26 wk. Two models were compared based on the collected data, and magnitude-based inferences were applied to determine the likelihood of differences between injured and noninjured groups. The study resulted in a model based on 5 measures--ankle-dorsiflexion range of motion, isometric midthigh-pull lower-body strength, time to stabilization during a drop-and-stick (DS) landing, relative peak force during a DS landing, and frontal-plane DS-landing video analysis--for male and female professional surfers and male and female junior surfers. Evaluation of the model showed that a scaled probability score was more likely to detect injuries in junior surfing athletes and reported a correlation of r = .66, P = .001, with a model of equal variable importance. The injured (n = 7) surfers had a lower probability score (0.18 ± 0.16) than the noninjured group (n = 25, 0.36 ± 0.15), with 98% likelihood, Cohen d = 1.04. The proposed model seems sensitive and easy to implement and interpret. Further research is recommended to show full validity for potential adaptations for other sports.

  20. A simple analytical infiltration model for short-duration rainfall

    Science.gov (United States)

    Wang, Kaiwen; Yang, Xiaohua; Liu, Xiaomang; Liu, Changming

    2017-12-01

Many infiltration models have been proposed to simulate the infiltration process. Different initial soil conditions and non-uniform initial water content can lead to infiltration simulation errors, especially for short-duration rainfall (SHR). Few infiltration models are specifically derived to eliminate the errors caused by complex initial soil conditions. We present a simple analytical infiltration model for SHR infiltration simulation, the Short-duration Infiltration Process (SHIP) model. The infiltration simulated by 5 models (SHIP (high), SHIP (middle), SHIP (low), Philip and Parlange) was compared based on numerical experiments and soil column experiments. In the numerical experiments, the SHIP (middle) and Parlange models had robust solutions for SHR infiltration simulation of 12 typical soils under different initial soil conditions: the absolute values of percent bias were less than 12% and the Nash–Sutcliffe efficiency values were greater than 0.83. Additionally, in the soil column experiments, the infiltration rate fluctuated within a range because of non-uniform initial water content. The SHIP (high) and SHIP (low) models can simulate an infiltration range, which successfully covered the fluctuation range of the observed infiltration rate. Given the robustness of its solutions and its coverage of the fluctuation range of the infiltration rate, the SHIP model can be integrated into hydrologic models to simulate the SHR infiltration process and benefit flood forecasting.
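For reference, the Philip model used above as a benchmark has a standard two-term form, with infiltration rate f(t) = S/(2*sqrt(t)) + A and cumulative infiltration I(t) = S*sqrt(t) + A*t. A small sketch (the sorptivity S and the parameter A below are hypothetical values, not those of the paper):

```python
import math

def philip_rate(t, S=0.8, A=0.05):
    """Philip two-term infiltration rate f(t) = S/(2*sqrt(t)) + A.

    S: sorptivity (e.g. cm/h^0.5), A: parameter close to the saturated
    hydraulic conductivity (e.g. cm/h); values here are hypothetical."""
    return S / (2.0 * math.sqrt(t)) + A

def philip_cumulative(t, S=0.8, A=0.05):
    """Cumulative infiltration I(t) = S*sqrt(t) + A*t."""
    return S * math.sqrt(t) + A * t

# the rate decays toward A as the soil wets up
print(round(philip_rate(0.25), 3), round(philip_rate(4.0), 3))  # 0.85 0.25
```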

  1. Operator content of the critical Potts model in d dimensions and logarithmic correlations

    International Nuclear Information System (INIS)

    Vasseur, Romain; Jacobsen, Jesper Lykke

    2014-01-01

Using the symmetric-group S_Q symmetry of the Q-state Potts model, we classify the (scalar) operator content of its underlying field theory in arbitrary dimension. In addition to the usual identity, energy and magnetization operators, we find fields that generalize the N-cluster operators well-known in two dimensions, together with their subleading counterparts. We give the explicit form of all these operators – up to non-universal constants – both on the lattice and in the continuum limit for the Landau theory. We compute exactly their two- and three-point correlation functions on an arbitrary graph in terms of simple probabilities, and give the general form of these correlation functions in the continuum limit at the critical point. Specializing to integer values of the parameter Q, we argue that the analytic continuation of the S_Q symmetry yields logarithmic correlations at the critical point in arbitrary dimension, thus implying a mixing of some scaling fields by the scale transformation generator. All these logarithmic correlation functions are given a clear geometrical meaning, which can be checked in numerical simulations. Several physical examples are discussed, including bond percolation, spanning trees and forests, resistor networks and the Ising model. We also briefly address the generalization of our approach to the O(n) model.

  2. Data-driven outbreak forecasting with a simple nonlinear growth model.

    Science.gov (United States)

    Lega, Joceline; Brown, Heidi E

    2016-12-01

    Recent events have thrown the spotlight on infectious disease outbreak response. We developed a data-driven method, EpiGro, which can be applied to cumulative case reports to estimate the order of magnitude of the duration, peak and ultimate size of an ongoing outbreak. It is based on a surprisingly simple mathematical property of many epidemiological data sets, does not require knowledge or estimation of disease transmission parameters, is robust to noise and to small data sets, and runs quickly due to its mathematical simplicity. Using data from historic and ongoing epidemics, we present the model. We also provide modeling considerations that justify this approach and discuss its limitations. In the absence of other information or in conjunction with other models, EpiGro may be useful to public health responders. Copyright © 2016 The Authors. Published by Elsevier B.V. All rights reserved.
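The "surprisingly simple mathematical property" alluded to above can be illustrated with synthetic data (this is only an illustration of the underlying idea, not EpiGro's actual implementation): for logistic-like outbreaks the incidence dC/dt is roughly a parabola in the cumulative count C, so fitting dC/dt = r*C*(1 - C/K) to the case curve yields an estimate of the final size K without any transmission parameters.

```python
import numpy as np

# synthetic logistic outbreak: true growth rate 0.3, true final size 10000
r_true, K_true, C0 = 0.3, 10000.0, 10.0
t = np.arange(0.0, 60.0, 1.0)
C = K_true / (1.0 + (K_true - C0) / C0 * np.exp(-r_true * t))  # cumulative cases

dC = np.diff(C)              # daily incidence
Cm = 0.5 * (C[:-1] + C[1:])  # midpoint cumulative counts
# least-squares fit of dC = a*C + b*C^2, so that r = a and K = -a/b
X = np.vstack([Cm, Cm**2]).T
a, b = np.linalg.lstsq(X, dC, rcond=None)[0]
K_est = -a / b
print(round(K_est))  # close to the true final size of 10000
```

The fit needs only the cumulative case report, which is why this family of methods is robust to missing epidemiological detail.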

  3. Use of a Simple GIS-Based Model in Mapping the Atmospheric Concentration of γ-HCH in Europe

    Directory of Open Access Journals (Sweden)

    Pilar Vizcaino

    2014-10-01

The state-of-the-art of atmospheric contaminant transport modeling provides accurate estimation of chemical concentrations. However, existing complex models, sophisticated in terms of process description and potentially highly accurate, may entail expensive setups and require very detailed input data. In contexts where detailed predictions are not needed (e.g., for regulatory risk assessment or life cycle impact assessment of chemicals), simple models allowing quick evaluation of contaminants may be preferable. The goal of this paper is to illustrate and critically discuss the use of a simple equation proposed by Pistocchi and Galmarini (2010), which can be implemented through basic GIS functions, to predict atmospheric concentrations of lindane (γ-HCH) in Europe from both local and remote sources. Concentrations were computed for 1995 and 2005 assuming different modes of use of lindane and consequently different spatial patterns of emissions. Results were compared with those from the well-established MSCE-POP model (2005) developed within EMEP (European Monitoring and Evaluation Programme), and with available monitoring data, showing acceptable correspondence in terms of the orders of magnitude and spatial distribution of concentrations, especially when the background effect of emissions from extracontinental sources, estimated using the same equation, is added to European emissions.

  4. Calculation of coulomb correlation potential in a turbulent non-ideal plasma with reduced degrees of freedom

    International Nuclear Information System (INIS)

    Dwivedi, C.B.; Bhattacharjee, M.

    1998-01-01

A simple but physically reasonable model has been developed to find the correlation potential in a turbulent non-ideal plasma. It is assumed that the turbulent plasma state comprises weakly interacting pseudo-particles, i.e., nonlinear coherent structures such as solitons, with random distribution in space and time. The calculation is based on the lowest-order binary interaction model of the nonlinear normal modes (pseudo-particles) of the weakly correlated plasma. Its implication for the phase transition of the correlated Coulomb gas is discussed. (author)

  5. A two-scale model for correlation between B cell VDJ usage in zebrafish

    Science.gov (United States)

    Pan, Keyao; Deem, Michael

    2011-03-01

    The zebrafish (Danio rerio) is one of the model animals for study of immunology. The dynamics of the adaptive immune system in zebrafish is similar to that in higher animals. In this work, we built a two-scale model to simulate the dynamics of B cells in primary and secondary immune reactions in zebrafish and to explain the reported correlation between VDJ usage of B cell repertoires in distinct zebrafish. The first scale of the model consists of a generalized NK model to simulate the B cell maturation process in the 10-day primary immune response. The second scale uses a delay ordinary differential equation system to model the immune responses in the 6-month lifespan of zebrafish. The generalized NK model shows that mature B cells specific to one antigen mostly possess a single VDJ recombination. The probability that mature B cells in two zebrafish have the same VDJ recombination increases with the B cell population size or the B cell selection intensity and decreases with the B cell hypermutation rate. The ODE model shows a distribution of correlation in the VDJ usage of the B cell repertoires in two six-month-old zebrafish that is highly similar to that from experiment. This work presents a simple theory to explain the experimentally observed correlation in VDJ usage of distinct zebrafish B cell repertoires after an immune response.

  6. A simple analytical model of single-event upsets in bulk CMOS

    Energy Technology Data Exchange (ETDEWEB)

    Sogoyan, Armen V.; Chumakov, Alexander I.; Smolin, Anatoly A., E-mail: aasmol@spels.ru; Ulanova, Anastasia V.; Boruzdina, Anna B.

    2017-06-01

    During the last decade, multiple new methods of single event upset (SEU) rate prediction for aerospace systems have been proposed. Despite different models and approaches being employed in these methods, they all share relatively high usage complexity and require information about a device that is not always available to an end user. This work presents an alternative approach to estimating SEU cross-section as a function of linear energy transfer (LET) that can be further developed into a method of SEU rate prediction. The goal is to propose a simple, yet physics-based, approach with just two parameters that can be used even in situations when only a process node of the device is known. The developed approach is based on geometrical interpretation of SEU cross-section and an analytical solution to the diffusion problem obtained for a simplified IC topology model. A good fit of the model to the experimental data encompassing 7 generations of SRAMs is demonstrated.

  7. A simple analytical model of single-event upsets in bulk CMOS

    International Nuclear Information System (INIS)

    Sogoyan, Armen V.; Chumakov, Alexander I.; Smolin, Anatoly A.; Ulanova, Anastasia V.; Boruzdina, Anna B.

    2017-01-01

    During the last decade, multiple new methods of single event upset (SEU) rate prediction for aerospace systems have been proposed. Despite different models and approaches being employed in these methods, they all share relatively high usage complexity and require information about a device that is not always available to an end user. This work presents an alternative approach to estimating SEU cross-section as a function of linear energy transfer (LET) that can be further developed into a method of SEU rate prediction. The goal is to propose a simple, yet physics-based, approach with just two parameters that can be used even in situations when only a process node of the device is known. The developed approach is based on geometrical interpretation of SEU cross-section and an analytical solution to the diffusion problem obtained for a simplified IC topology model. A good fit of the model to the experimental data encompassing 7 generations of SRAMs is demonstrated.

  8. Simple standard problem for the Preisach moving model

    International Nuclear Information System (INIS)

    Morentin, F.J.; Alejos, O.; Francisco, C. de; Munoz, J.M.; Hernandez-Gomez, P.; Torres, C.

    2004-01-01

The present work proposes a simple magnetic system as a candidate for a standard problem for Preisach-based models. The system consists of a regular square array of magnetic particles fully oriented along the direction of application of an external magnetic field. The behavior of such a system was numerically simulated for different values of the interaction between particles and of the standard deviation of the critical fields of the particles. The characteristic parameters of the Preisach moving model, i.e., the mean value and the standard deviation of the interaction field, were worked out during the simulations. For this system, results reveal that the mean interaction field depends linearly on the system magnetization, as the Preisach moving model predicts. Nevertheless, the standard deviation cannot be considered independent of the magnetization. In fact, the standard deviation shows a maximum at demagnetization and two minima at magnetization saturation. Furthermore, not all demagnetization states are equivalent: the plot of standard deviation vs. magnetization is a multi-valued curve when the system undergoes an AC demagnetization procedure, with the standard deviation increasing as the system goes from coercivity to the AC-demagnetized state.
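The moving-model ingredient above — an interaction field whose mean is proportional to the magnetization — can be sketched with a small ensemble of square-loop hysterons (all parameters below are hypothetical, and this is not the authors' simulation): each particle switches in an effective field h_eff = h + alpha*M.

```python
import numpy as np

rng = np.random.default_rng(1)
N, alpha = 1000, 0.1
h_c = np.abs(rng.normal(1.0, 0.2, N))  # distribution of critical fields
s = -np.ones(N)                        # all particles start magnetized down

def apply_field(h, s, M):
    """One field step of the moving Preisach model.

    The mean interaction field alpha*M is evaluated with the previous
    magnetization for simplicity (no self-consistency iteration)."""
    h_eff = h + alpha * M
    s = s.copy()
    s[h_eff >= h_c] = 1.0    # switch up
    s[h_eff <= -h_c] = -1.0  # switch down; otherwise the state is kept
    return s, s.mean()

M = -1.0
for h in np.linspace(-3, 3, 121):  # ascending branch of the hysteresis loop
    s, M = apply_field(h, s, M)
print(M)  # 1.0: saturation once h exceeds every critical field
```

Sweeping the field back down retraces a different branch, which is the hysteresis the standard problem is meant to probe.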

  9. A simple mechanical model for the isotropic harmonic oscillator

    International Nuclear Information System (INIS)

    Nita, Gelu M

    2010-01-01

    A constrained elastic pendulum is proposed as a simple mechanical model for the isotropic harmonic oscillator. The conceptual and mathematical simplicity of this model recommends it as an effective pedagogical tool in teaching basic physics concepts at advanced high school and introductory undergraduate course levels.
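The system the pendulum realizes is easy to verify numerically: both coordinates obey x'' = -omega^2 x, so a generic initial condition traces a closed ellipse and the energy is conserved. A short velocity-Verlet sketch (parameters hypothetical):

```python
omega, dt, steps = 2.0, 1e-3, 10000
x, y, vx, vy = 1.0, 0.0, 0.0, 1.5  # generic elliptical initial condition

def energy(x, y, vx, vy):
    # kinetic plus isotropic harmonic potential energy (unit mass)
    return 0.5 * (vx**2 + vy**2) + 0.5 * omega**2 * (x**2 + y**2)

E0 = energy(x, y, vx, vy)
for _ in range(steps):
    # velocity Verlet for the acceleration a(x) = -omega^2 * x
    ax, ay = -omega**2 * x, -omega**2 * y
    x += vx * dt + 0.5 * ax * dt**2
    y += vy * dt + 0.5 * ay * dt**2
    ax2, ay2 = -omega**2 * x, -omega**2 * y
    vx += 0.5 * (ax + ax2) * dt
    vy += 0.5 * (ay + ay2) * dt

print(abs(energy(x, y, vx, vy) - E0) < 1e-4 * E0)  # True: energy conserved
```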

  10. Design and Use of the Simple Event Model (SEM)

    NARCIS (Netherlands)

    van Hage, W.R.; Malaisé, V.; Segers, R.H.; Hollink, L.

    2011-01-01

    Events have become central elements in the representation of data from domains such as history, cultural heritage, multimedia and geography. The Simple Event Model (SEM) is created to model events in these various domains, without making assumptions about the domain-specific vocabularies used. SEM

  11. Analysis of pre-service physics teacher skills designing simple physics experiments based technology

    Science.gov (United States)

    Susilawati; Huda, C.; Kurniawan, W.; Masturi; Khoiri, N.

    2018-03-01

    Pre-service physics teacher skill in designing simple experiment set is very important in adding understanding of student concept and practicing scientific skill in laboratory. This study describes the skills of physics students in designing simple experiments based technologicall. The experimental design stages include simple tool design and sensor modification. The research method used is descriptive method with the number of research samples 25 students and 5 variations of simple physics experimental design. Based on the results of interviews and observations obtained the results of pre-service physics teacher skill analysis in designing simple experimental physics charged technology is good. Based on observation result, pre-service physics teacher skill in designing simple experiment is good while modification and sensor application are still not good. This suggests that pre-service physics teacher still need a lot of practice and do experiments in designing physics experiments using sensor modifications. Based on the interview result, it is found that students have high enough motivation to perform laboratory activities actively and students have high curiosity to be skilled at making simple practicum tool for physics experiment.

  12. A SIMPLE EXPERIMENTAL MODEL OF HEAT SHOCK RESPONSE IN RATS

    Directory of Open Access Journals (Sweden)

    Tufi Neder Meyer

    1998-10-01

    Full Text Available Objective: To obtain a simple model for the elicitation of the heat shock response in rats. Design: Laboratory study. Setting: University research laboratories. Sample: Seventy-nine adult male albino rats (weight range 200 g to 570 g. Procedures: Exposure to heat stress by heating animals in a warm bath for 5 min after their rectal temperatures reached 107.60 F (420 C. Liver and lung samples were collected for heat-shock protein 70 (HSP70 detection (Western analysis. Results: Western analysis was positive for HSP70 in the liver and in the lungs of heated animals. There was a temporal correlation between heating and HSP70 detection: it was strongest 1 day after heating and reduced afterwards. No heated animals died. Conclusion: These data show that heating rats in a warm (45o C bath, according to parameters set in this model, elicits efficiently the heat shock response.OBJETIVO: Obter um modelo simples para tentar esclarecer a resposta ao choque térmico em ratos. LOCAL: Laboratório de pesquisa da Universidade. MÉTODO: Amostra: 79 ratos albinos, adultos, entre 200g a 570g. Procedimentos: Exposição ao calor, em banho quente, por 5 minutos, após a temperatura retal chegar a 42 graus centigrados. Biópsias de fígado e pulmão foram obtidas para detectar a proteina 70 (HSP 70, pelo "Western blot". RESULTADOS: As análises foram positivas nos animais aquecidos, com uma correlação entre aquecimento e constatação da HSP 70. Foi mais elevada no primeiro dia e não houve óbitos nos animais aquecidos. CONCLUSÃO: Os ratos aquecidos a 45 graus centígrados respondem eficientemente ao choque térmico.

  13. Flexible Bayesian Dynamic Modeling of Covariance and Correlation Matrices

    KAUST Repository

    Lan, Shiwei

    2017-11-08

Modeling covariance (and correlation) matrices is a challenging problem due to the large dimensionality and the positive-definiteness constraint. In this paper, we propose a novel Bayesian framework based on decomposing the covariance matrix into variance and correlation matrices. The highlight is that the correlations are represented as products of vectors on unit spheres. We propose a variety of distributions on spheres (e.g. the squared-Dirichlet distribution) to induce flexible prior distributions for covariance matrices that go beyond the commonly used inverse-Wishart prior. To handle the intractability of the resulting posterior, we introduce the adaptive Δ-Spherical Hamiltonian Monte Carlo. We also extend our structured framework to dynamic cases and introduce unit-vector Gaussian process priors for modeling the evolution of correlation among multiple time series. Using an example of the Normal-Inverse-Wishart problem, a simulated periodic process, and an analysis of local field potential data (collected from the hippocampus of rats performing a complex sequence memory task), we demonstrate the validity and effectiveness of our proposed framework for (dynamic) modeling of covariance and correlation matrices.
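The decomposition at the heart of this framework can be sketched directly (illustrative only, not the authors' sampler): a covariance matrix factors as Sigma = D R D, with D a diagonal matrix of standard deviations and R a correlation matrix, and a valid R can be built from unit vectors since R = U^T U with unit-norm columns is positive semi-definite with ones on the diagonal.

```python
import numpy as np

rng = np.random.default_rng(42)
d = 4
U = rng.normal(size=(d, d))
U /= np.linalg.norm(U, axis=0)       # put each column on the unit sphere
R = U.T @ U                          # a valid correlation matrix
sd = np.array([0.5, 1.0, 2.0, 1.5])  # hypothetical standard deviations
Sigma = np.diag(sd) @ R @ np.diag(sd)  # covariance = D * R * D

print(np.allclose(np.diag(R), 1.0))             # True: unit diagonal
print(np.all(np.linalg.eigvalsh(R) >= -1e-12))  # True: positive semi-definite
print(np.allclose(np.sqrt(np.diag(Sigma)), sd)) # True: sd recovered
```

Parameterizing the columns of U (rather than R itself) is what sidesteps the positive-definiteness constraint during sampling.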

  14. Multi-Criteria Decision Making For Determining A Simple Model of Supplier Selection

    Science.gov (United States)

    Harwati

    2017-06-01

Supplier selection is a decision involving many criteria. Supplier selection models usually involve more than five main criteria and more than ten sub-criteria; in fact, many models include more than 20 criteria. Involving too many criteria sometimes makes a model difficult to apply in practice. This research focuses on designing a supplier selection model that is easy and simple to apply in a company. The Analytic Hierarchy Process (AHP) is used to weight the criteria. The analysis shows that four criteria suffice for a simple, easily applied supplier selection model: price (weight 0.4), shipment (weight 0.3), quality (weight 0.2) and service (weight 0.1). A real-case simulation shows that the simple model yields the same decision as a more complex model.
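Once the AHP weights are fixed, applying the model reduces to a weighted sum per supplier. A minimal sketch using the four weights reported above (the supplier scores are hypothetical, on a 0-10 scale where higher is better):

```python
# AHP-derived criterion weights from the abstract
WEIGHTS = {"price": 0.4, "shipment": 0.3, "quality": 0.2, "service": 0.1}

def rank_suppliers(suppliers):
    """Return (name, scores) pairs sorted by weighted score, best first."""
    def score(s):
        return sum(WEIGHTS[c] * s[c] for c in WEIGHTS)
    return sorted(suppliers, key=lambda pair: -score(pair[1]))

suppliers = [  # hypothetical criterion scores per supplier
    ("A", {"price": 7, "shipment": 9, "quality": 6, "service": 8}),
    ("B", {"price": 9, "shipment": 5, "quality": 8, "service": 6}),
    ("C", {"price": 6, "shipment": 7, "quality": 9, "service": 9}),
]
for name, s in rank_suppliers(suppliers):
    print(name)  # prints A, then B, then C (weighted scores 7.5, 7.3, 7.2)
```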

  15. A simple model for super critical fluid extraction of bio oils from biomass

    International Nuclear Information System (INIS)

    Patel, Rajesh N.; Bandyopadhyay, Santanu; Ganesh, Anuradda

    2011-01-01

A simple mathematical model to characterize the supercritical extraction process is proposed in this paper. The model is primarily based on two mass transfer mechanisms, solubility and diffusion, and assumes two distinct modes of extraction: an initial constant-rate extraction controlled by solubility, followed by a falling-rate extraction controlled by diffusivity. Effects of extraction parameters such as pressure and temperature on the extraction of oil have also been studied. The proposed model, when compared with existing models, shows better agreement with the experimental results. It has been applied to both a material with high initial oil content (cashew nut shells) and one with low initial oil content (black pepper).

  16. Simple anthropometric measures correlate with metabolic risk indicators as strongly as magnetic resonance imaging-measured adipose tissue depots in both HIV-infected and control subjects.

    Science.gov (United States)

    Scherzer, Rebecca; Shen, Wei; Bacchetti, Peter; Kotler, Donald; Lewis, Cora E; Shlipak, Michael G; Heymsfield, Steven B; Grunfeld, Carl

    2008-06-01

    Studies in persons without HIV infection have compared percentage body fat (%BF) and waist circumference as markers of risk for the complications of excess adiposity, but only limited study has been conducted in HIV-infected subjects. We compared anthropometric and magnetic resonance imaging (MRI)-based adiposity measures as correlates of metabolic complications of adiposity in HIV-infected and control subjects. The study was a cross-sectional analysis of 666 HIV-positive and 242 control subjects in the Fat Redistribution and Metabolic Change in HIV Infection (FRAM) study assessing body mass index (BMI), waist (WC) and hip (HC) circumferences, waist-to-hip ratio (WHR), %BF, and MRI-measured regional adipose tissue. Study outcomes were 3 metabolic risk variables [homeostatic model assessment (HOMA), triglycerides, and HDL cholesterol]. Analyses were stratified by sex and HIV status and adjusted for demographic, lifestyle, and HIV-related factors. In HIV-infected and control subjects, univariate associations with HOMA, triglycerides, and HDL were strongest for WC, MRI-measured visceral adipose tissue, and WHR; in all cases, differences in correlation between the strongest measures for each outcome were small. For HOMA and HDL, WC appeared to be the best anthropometric correlate of metabolic complications, whereas, for triglycerides, the best was WHR. Relations of simple anthropometric measures with HOMA, triglycerides, and HDL cholesterol are approximately as strong as those of MRI-measured whole-body adipose tissue depots in both HIV-infected and control subjects.

  17. Importance analysis for models with correlated variables and its sparse grid solution

    International Nuclear Information System (INIS)

    Li, Luyi; Lu, Zhenzhou

    2013-01-01

    For structural models involving correlated input variables, a novel interpretation for variance-based importance measures is proposed based on the contribution of the correlated input variables to the variance of the model output. After this novel interpretation is compared with the existing ones, two solutions for the variance-based importance measures of correlated input variables are built on sparse grid numerical integration (SGI): the double-loop nested sparse grid integration (DSGI) method and the single-loop sparse grid integration (SSGI) method. The DSGI method solves the importance measure by procedurally decreasing the dimensionality of the input variables, while the SSGI method performs importance analysis by extending the dimensionality of the inputs. Both make full use of the advantages of SGI and are well tailored to different situations. By analyzing the results of several numerical and engineering examples, it is found that the proposed interpretation of the importance measures of correlated input variables is reasonable, and that the proposed methods for solving importance measures are efficient and accurate. -- Highlights: •The contribution of correlated variables to the variance of the output is analyzed. •A novel interpretation for variance-based indices of correlated variables is proposed. •Two solutions for variance-based importance measures of correlated variables are built

  18. A Conceptually Simple Modeling Approach for Jason-1 Sea State Bias Correction Based on 3 Parameters Exclusively Derived from Altimetric Information

    Directory of Open Access Journals (Sweden)

    Nelson Pires

    2016-07-01

    A conceptually simple formulation is proposed for a new empirical sea state bias (SSB) model using information retrieved entirely from altimetric data. Nonparametric regression techniques are used, based on penalized smoothing splines adjusted to each predictor and then combined by a Generalized Additive Model. In addition to the significant wave height (SWH) and wind speed (U10), a mediator parameter, the mean wave period derived from radar altimetry, has proven to improve the model performance in explaining some of the SSB variability, especially in swell ocean regions with medium-to-high SWH and low U10. A collinear analysis of scaled sea level anomaly (SLA) variance differences shows conformity between the proposed model and the established SSB models. The new formulation aims to be a fast, reliable and flexible SSB model, in line with the well-settled SSB corrections, depending exclusively on altimetric information. The suggested method is computationally efficient and capable of generating a stable model with a small training dataset, a useful feature for forthcoming missions.

  19. Trends in hydrodesulfurization catalysis based on realistic surface models

    DEFF Research Database (Denmark)

    Moses, P.G.; Grabow, L.C.; Fernandez Sanchez, Eva

    2014-01-01

    elementary reactions in HDS of thiophene. These linear correlations are used to develop a simple kinetic model, which qualitatively describes experimental trends in activity. The kinetic model identifies the HS-binding energy as a descriptor of HDS activity. This insight contributes to understanding...... the effect of promotion and structure-activity relationships.

  20. A simple branching model that reproduces language family and language population distributions

    Science.gov (United States)

    Schwämmle, Veit; de Oliveira, Paulo Murilo Castro

    2009-07-01

    Human history leaves fingerprints in human languages. Little is known about language evolution, and its study is of great importance. Here we construct a simple stochastic model and compare its results to statistical data of real languages. The model is based on the recent finding that language changes occur independently of the population size. We find agreement with the data by additionally assuming that languages may be distinguished by having at least one among a finite, small number of different features. This finite set is also used to define the distance between two languages, similarly to the linguistics tradition since Swadesh.

  1. Mathematical modeling of high-rate Anammox UASB reactor based on granular packing patterns

    International Nuclear Information System (INIS)

    Tang, Chong-Jian; He, Rui; Zheng, Ping; Chai, Li-Yuan; Min, Xiao-Bo

    2013-01-01

    Highlights: ► A novel model was constructed to estimate volumetric nitrogen conversion rates. ► The packing patterns of the granules in the Anammox reactor are investigated. ► The simple cubic packing pattern was simulated in the high-rate Anammox UASB reactor. ► Operational strategies concerning sludge concentration were proposed by the modeling. -- Abstract: A novel mathematical model was developed to estimate the volumetric nitrogen conversion rates of a high-rate Anammox UASB reactor based on the packing patterns of granular sludge. A series of relationships among granular packing density, sludge concentration, hydraulic retention time and volumetric conversion rate were constructed to correlate Anammox reactor performance with granular packing patterns. It was suggested that the Anammox granules packed in the equivalent simple cubic pattern in the high-rate UASB reactor, with a packing density of 50–55%, which not only accommodated a high concentration of sludge inside the reactor, but also provided large pore volume, thus prolonging the actual substrate conversion time. Results also indicated that it was necessary to improve Anammox reactor performance by enhancing substrate loading when the sludge concentration was higher than 37.8 gVSS/L. The established model was carefully calibrated and verified, and it simulated well the performance of the granule-based high-rate Anammox UASB reactor
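
    The quoted 50–55% packing density is consistent with the geometry of simple cubic sphere packing, whose packing fraction is π/6 ≈ 52.4%; a one-line check:

```python
import math

# One sphere of radius r per cubic cell of side 2r (simple cubic packing).
def simple_cubic_packing_fraction():
    r = 1.0
    sphere = (4.0 / 3.0) * math.pi * r ** 3
    cell = (2.0 * r) ** 3
    return sphere / cell  # = pi / 6, independent of r

print(round(100 * simple_cubic_packing_fraction(), 1))  # → 52.4 (percent)
```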

  2. Mathematical modeling of high-rate Anammox UASB reactor based on granular packing patterns

    Energy Technology Data Exchange (ETDEWEB)

    Tang, Chong-Jian, E-mail: chjtangzju@yahoo.com.cn [Department of Environmental Engineering, School of Metallurgical Science and Engineering, Central South University, Changsha 410083 (China); National Engineering Research Center for Control and Treatment of Heavy Metal Pollution, Changsha 410083 (China); He, Rui; Zheng, Ping [Department of Environmental Engineering, Zhejiang University, Zijingang Campus, Hangzhou 310058 (China); Chai, Li-Yuan; Min, Xiao-Bo [Department of Environmental Engineering, School of Metallurgical Science and Engineering, Central South University, Changsha 410083 (China); National Engineering Research Center for Control and Treatment of Heavy Metal Pollution, Changsha 410083 (China)

    2013-04-15

    Highlights: ► A novel model was constructed to estimate volumetric nitrogen conversion rates. ► The packing patterns of the granules in the Anammox reactor are investigated. ► The simple cubic packing pattern was simulated in the high-rate Anammox UASB reactor. ► Operational strategies concerning sludge concentration were proposed by the modeling. -- Abstract: A novel mathematical model was developed to estimate the volumetric nitrogen conversion rates of a high-rate Anammox UASB reactor based on the packing patterns of granular sludge. A series of relationships among granular packing density, sludge concentration, hydraulic retention time and volumetric conversion rate were constructed to correlate Anammox reactor performance with granular packing patterns. It was suggested that the Anammox granules packed in the equivalent simple cubic pattern in the high-rate UASB reactor, with a packing density of 50–55%, which not only accommodated a high concentration of sludge inside the reactor, but also provided large pore volume, thus prolonging the actual substrate conversion time. Results also indicated that it was necessary to improve Anammox reactor performance by enhancing substrate loading when the sludge concentration was higher than 37.8 gVSS/L. The established model was carefully calibrated and verified, and it simulated well the performance of the granule-based high-rate Anammox UASB reactor.

  3. A Simple Physics-Based Model Predicts Oil Production from Thousands of Horizontal Wells in Shales

    KAUST Repository

    Patzek, Tadeusz

    2017-10-18

    Over the last six years, crude oil production from shales and the ultra-deep Gulf of Mexico in the United States has accounted for most of the net increase of global oil production. Therefore, it is important to have a good predictive model of oil production and ultimate recovery in shale wells. Here we introduce a simple model of producing oil and solution gas from horizontal hydrofractured wells. This model is consistent with the basic physics and geometry of the extraction process. We then apply our model to thousands of wells in the Eagle Ford shale. Given well geometry, we obtain a one-dimensional nonlinear pressure diffusion equation that governs the flow of mostly oil and solution gas. In principle, solutions of this equation depend on many parameters, but in practice and within a given oil shale, all but three can be fixed at typical values, leading to a nonlinear diffusion problem we linearize and solve exactly with a scaling

  4. Correlations and Non-Linear Probability Models

    DEFF Research Database (Denmark)

    Breen, Richard; Holm, Anders; Karlson, Kristian Bernt

    2014-01-01

    Although the parameters of logit and probit and other non-linear probability models are often explained and interpreted in relation to the regression coefficients of an underlying linear latent variable model, we argue that they may also be usefully interpreted in terms of the correlations between the dependent variable of the latent variable model and its predictor variables. We show how this correlation can be derived from the parameters of non-linear probability models, develop tests for the statistical significance of the derived correlation, and illustrate its usefulness in two applications. Under certain circumstances, which we explain, the derived correlation provides a way of overcoming the problems inherent in cross-sample comparisons of the parameters of non-linear probability models.
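
    For the simplest case, a probit with a single predictor, the correlation between x and the latent outcome follows directly from the latent-variable setup. This numeric sketch illustrates that idea only; it is not the paper's general multi-predictor derivation or its significance tests:

```python
import math

def latent_correlation(beta, sd_x):
    """Correlation between predictor x and latent y* = beta*x + e, e ~ N(0, 1)
    (probit normalization): rho = beta*sd(x) / sqrt(beta^2*var(x) + 1)."""
    return beta * sd_x / math.sqrt(beta ** 2 * sd_x ** 2 + 1.0)

# A probit slope of 0.5 on a predictor with sd 2 implies a latent correlation:
print(round(latent_correlation(0.5, 2.0), 3))  # → 0.707
```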

  5. An intercomparison of mesoscale models at simple sites for wind energy applications

    DEFF Research Database (Denmark)

    Olsen, Bjarke Tobias; Hahmann, Andrea N.; Sempreviva, Anna Maria

    2017-01-01

    An intercomparison of the output from 25 NWP models is presented for three sites in northern Europe characterized by simple terrain. The models are evaluated using a number of statistical properties relevant to wind energy and verified with observations. On average the models have small wind speed biases offshore and aloft ( ... %) and larger biases closer to the surface over land (> 7 %). A similar pattern is detected for the inter-model spread. Strongly stable and strongly unstable atmospheric stability conditions are associated with larger wind speed errors. Strong indications are found that using a grid spacing larger than 3 km decreases the accuracy of the models, but we found no evidence that using a grid spacing smaller than 3 km is necessary for these simple sites. Applying the models to a simple offshore wind farm highlights the importance of capturing the correct distributions of wind speed and direction.

  6. Simple model of string with colour degrees of freedom

    Science.gov (United States)

    Hadasz, Leszek

    1994-03-01

    We consider a simple model of string with colour charges on its ends. The model is constructed by rewriting the action describing classical spinless as well as spinning particles with colour charge in terms of fields living on the “string worldsheet” bounded by trajectories of the particles.

  7. Simple mathematical models for housing allocation to a homeless ...

    African Journals Online (AJOL)

    We present simple mathematical models for modelling a homeless population and housing allocation. We look at a situation whereby the local authority makes temporary accommodation available for some of the homeless for a while and we examine how this affects the number of families homeless at any given time.

  8. A simple model for simultaneous methanogenic-denitrification systems

    DEFF Research Database (Denmark)

    Garibay-Orijel, C.; Ahring, Birgitte Kiær; Rinderknecht-Seijas, N.

    2006-01-01

    We describe a useful and simple model for studies of simultaneous methanogenic-denitrification (M-D) systems. One equation predicts an inverse relationship between the percentage of electron donor channeled into dissimilatory denitrification and the loading ratio X given by grams degradable COD per...

  9. Simple model for low-frequency guitar function

    DEFF Research Database (Denmark)

    Christensen, Ove; Vistisen, Bo B.

    1980-01-01

    The frequency response of sound pressure and top plate mobility is studied around the first two resonances of the guitar. These resonances are shown to result from a coupling between the fundamental top plate mode and the Helmholtz resonance of the cavity. A simple model is proposed for low-frequency guitar function. The model predicts the frequency response of sound pressure and top plate mobility, which are in close quantitative agreement with the experimental responses. The absolute sound pressure level and mobility level are predicted to within a few decibels, and the equivalent piston area......
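
    The two-resonance picture can be illustrated with a generic two-coupled-oscillator calculation: coupling pushes one eigenfrequency below the lower uncoupled resonance and the other above the upper one. The parameter values below are invented for illustration, not taken from the paper:

```python
import math

def coupled_resonances(f_plate, f_cavity, coupling):
    """Eigenfrequencies (Hz) of two linearly coupled oscillators; `coupling`
    is the off-diagonal term in Hz^2. A generic stand-in for the coupled
    top-plate / Helmholtz-cavity system."""
    a, b = f_plate ** 2, f_cavity ** 2
    s = math.sqrt((a - b) ** 2 + 4 * coupling ** 2)
    return math.sqrt((a + b - s) / 2), math.sqrt((a + b + s) / 2)

lo, hi = coupled_resonances(200.0, 120.0, coupling=8000.0)
print(round(lo), round(hi))  # the pair straddles the uncoupled 120 / 200 Hz values
```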

  10. Data-based Non-Markovian Model Inference

    Science.gov (United States)

    Ghil, Michael

    2015-04-01

    This talk concentrates on obtaining stable and efficient data-based models for simulation and prediction in the geosciences and life sciences. The proposed model derivation relies on using a multivariate time series of partial observations from a large-dimensional system, and the resulting low-order models are compared with the optimal closures predicted by the non-Markovian Mori-Zwanzig formalism of statistical physics. Multilayer stochastic models (MSMs) are introduced as both a very broad generalization and a time-continuous limit of existing multilevel, regression-based approaches to data-based closure, in particular of empirical model reduction (EMR). We show that the multilayer structure of MSMs can provide a natural Markov approximation to the generalized Langevin equation (GLE) of the Mori-Zwanzig formalism. A simple correlation-based stopping criterion for an EMR-MSM model is derived to assess how well it approximates the GLE solution. Sufficient conditions are given for the nonlinear cross-interactions between the constitutive layers of a given MSM to guarantee the existence of a global random attractor. This existence ensures that no blow-up can occur for a very broad class of MSM applications. The EMR-MSM methodology is first applied to a conceptual, nonlinear, stochastic climate model of coupled slow and fast variables, in which only slow variables are observed. The resulting reduced model with energy-conserving nonlinearities captures the main statistical features of the slow variables, even when there is no formal scale separation and the fast variables are quite energetic. Second, an MSM is shown to successfully reproduce the statistics of a partially observed, generalized Lotka-Volterra model of population dynamics in its chaotic regime. The positivity constraint on the solutions' components replaces here the quadratic-energy-preserving constraint of fluid-flow problems and it successfully prevents blow-up. This work is based on a close

  11. Simple, fast and accurate two-diode model for photovoltaic modules

    Energy Technology Data Exchange (ETDEWEB)

    Ishaque, Kashif; Salam, Zainal; Taheri, Hamed [Faculty of Electrical Engineering, Universiti Teknologi Malaysia, UTM 81310, Skudai, Johor Bahru (Malaysia)

    2011-02-15

    This paper proposes an improved modeling approach for the two-diode model of a photovoltaic (PV) module. The main contribution of this work is the simplification of the current equation, in which only four parameters are required, compared to six or more in the previously developed two-diode models. Furthermore, the values of the series and parallel resistances are computed using a simple and fast iterative method. To validate the accuracy of the proposed model, six PV modules of different types (multi-crystalline, mono-crystalline and thin-film) from various manufacturers are tested. The performance of the model is evaluated against the popular single-diode models. It is found that the proposed model is superior when subjected to irradiance and temperature variations. In particular the model matches very accurately at all important points of the I-V curves, i.e. the peak power, short-circuit current and open-circuit voltage. The modeling method is useful for PV power converter designers and circuit simulator developers who require a simple, fast yet accurate model for the PV module. (author)
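
    The two-diode current equation itself is implicit in the current and must be solved numerically. A sketch for a single cell, solved by bisection; all parameter values are illustrative, not from the paper, and the paper's four-parameter estimation procedure is not reproduced here:

```python
import math

def two_diode_current(V, Ipv=8.21, I01=4.2e-10, I02=4.2e-10,
                      a1=1.0, a2=2.0, Rs=0.005, Rp=15.0, Vt=0.0259):
    """Terminal current of a single cell from the two-diode equation,
    solved by bisection (the equation is implicit in I):
      I = Ipv - I01*(exp(Vd/(a1*Vt)) - 1) - I02*(exp(Vd/(a2*Vt)) - 1) - Vd/Rp,
    with Vd = V + I*Rs.  All parameter values are illustrative."""
    def residual(I):
        Vd = V + I * Rs
        return (Ipv
                - I01 * (math.exp(Vd / (a1 * Vt)) - 1.0)
                - I02 * (math.exp(Vd / (a2 * Vt)) - 1.0)
                - Vd / Rp
                - I)
    lo, hi = -1.0, Ipv + 1.0     # residual is strictly decreasing in I
    for _ in range(80):
        mid = 0.5 * (lo + hi)
        if residual(mid) > 0.0:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

print(round(two_diode_current(0.0), 2))  # → 8.21 (A, near short circuit)
```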

  12. Analysis of changes in tornadogenesis conditions over Northern Eurasia based on a simple index of atmospheric convective instability

    Science.gov (United States)

    Chernokulsky, A. V.; Kurgansky, M. V.; Mokhov, I. I.

    2017-12-01

    A simple index of convective instability (3D-index) is used for the analysis of weather and climate processes that favor the occurrence of severe convective events, including tornadoes. The index is based on information on surface air temperature and humidity. The prognostic ability of the index to reproduce severe convective events (thunderstorms, showers, tornadoes) is analyzed. It is shown that most tornadoes in Northern Eurasia are characterized by high values of the 3D-index; furthermore, the 3D-index is significantly correlated with the convective available potential energy. Reanalysis data (for recent decades) and global climate model simulations (for the 21st century) show an increase in the frequency of occurrence of meteorological conditions favorable for tornado formation in the regions of Northern Eurasia. The most significant increase is found on the Black Sea coast and in the south of the Far East.

  13. A simple model to estimate the optimal doping of p - Type oxide superconductors

    Directory of Open Access Journals (Sweden)

    Adir Moysés Luiz

    2008-12-01

    Oxygen doping of superconductors is discussed. Doping high-Tc superconductors with oxygen seems to be more efficient than other doping procedures. Using the assumption of double valence fluctuations, we present a simple model to estimate the optimal doping of p-type oxide superconductors. The experimental values of oxygen content for optimal doping of the most important p-type oxide superconductors can be accounted for adequately using this simple model. We expect that our simple model will encourage further experimental and theoretical research in superconducting materials.

  14. Individual-based modeling of fish: Linking to physical models and water quality.

    Energy Technology Data Exchange (ETDEWEB)

    Rose, K.A.

    1997-08-01

    The individual-based modeling approach for simulating fish population and community dynamics is gaining popularity. Individual-based modeling has been used in many other fields, such as forest succession and astronomy. The popularity of the individual-based approach is partly a result of the lack of success of the more aggregate modeling approaches traditionally used for simulating fish population and community dynamics. Also, recent recognition that it is often the atypical individual that survives has fostered interest in the individual-based approach. Two general types of individual-based models are distribution and configuration. Distribution models follow the probability distributions of individual characteristics, such as length and age. Configuration models explicitly simulate each individual, the sum over individuals being the population. DeAngelis et al (1992) showed that, when distribution and configuration models were formulated from the same common pool of information, both approaches generated similar predictions. The distribution approach was more compact and general, while the configuration approach was more flexible. Simple biological changes, such as making growth rate dependent on previous days' growth rates, were easy to implement in the configuration version but prevented simple analytical solution of the distribution version.

  15. Diffraction enhanced imaging: a simple model

    International Nuclear Information System (INIS)

    Zhu Peiping; Yuan Qingxi; Huang Wanxia; Wang Junyue; Shu Hang; Chen Bo; Liu Yijin; Li Enrong; Wu Ziyu

    2006-01-01

    Based on pinhole imaging and conventional x-ray projection imaging, a more general DEI (diffraction enhanced imaging) equation is derived using simple concepts in this paper. The new DEI equation not only explains everything that the DEI equation proposed by Chapman does, but also accounts for phenomena that cannot be explained with the old DEI equation, such as the noise background caused by small-angle scattering diffracted by the analyser

  16. Diffraction enhanced imaging: a simple model

    Energy Technology Data Exchange (ETDEWEB)

    Zhu Peiping; Yuan Qingxi; Huang Wanxia; Wang Junyue; Shu Hang; Chen Bo; Liu Yijin; Li Enrong; Wu Ziyu [Beijing Synchrotron Radiation Facility, Institute of High Energy Physics, Chinese Academy of Sciences, Beijing 100049 (China)

    2006-10-07

    Based on pinhole imaging and conventional x-ray projection imaging, a more general DEI (diffraction enhanced imaging) equation is derived using simple concepts in this paper. The new DEI equation not only explains everything that the DEI equation proposed by Chapman does, but also accounts for phenomena that cannot be explained with the old DEI equation, such as the noise background caused by small-angle scattering diffracted by the analyser.

  17. Simple Urban Simulation Atop Complicated Models: Multi-Scale Equation-Free Computing of Sprawl Using Geographic Automata

    Directory of Open Access Journals (Sweden)

    Yu Zou

    2013-07-01

    Reconciling competing desires to build urban models that can be simple and complicated is something of a grand challenge for urban simulation. It also prompts difficulties in many urban policy situations, such as urban sprawl, where simple, actionable ideas may need to be considered in the context of the messily complex and complicated urban processes and phenomena that work within cities. In this paper, we present a novel architecture for achieving both simple and complicated realizations of urban sprawl in simulation. Fine-scale simulations of sprawl geography are run using geographic automata to represent the geographical drivers of sprawl in intricate detail and over fine resolutions of space and time. We use Equation-Free computing to deploy population as a coarse observable of sprawl, which can be leveraged to run automata-based models as short-burst experiments within a meta-simulation framework.

  18. Simple, fast, and low-cost camera-based water content measurement with colorimetric fluorescent indicator

    Science.gov (United States)

    Song, Seok-Jeong; Kim, Tae-Il; Kim, Youngmi; Nam, Hyoungsik

    2018-05-01

    Recently, a simple, sensitive, and low-cost fluorescent indicator has been proposed to determine water contents in organic solvents, drugs, and foodstuffs. The change of water content leads to a change of the indicator's fluorescence color under ultraviolet (UV) light. Whereas the water content values could previously be estimated only from the spectrum obtained by a bulky and expensive spectrometer, this paper demonstrates a simple and low-cost camera-based water content measurement scheme with the same fluorescent water indicator. Water content is calculated over the range of 0-30% by quadratic polynomial regression models with color information extracted from the captured images of samples. In particular, several color spaces such as RGB, xyY, L∗a∗b∗, u′v′, HSV, and YCbCr have been investigated to establish the optimal color information features over both linear and nonlinear RGB data given by a camera before and after gamma correction. In the end, a 2nd-order polynomial regression model along with HSV in a linear domain achieves the minimum mean square error of 1.06% for a 3-fold cross-validation method. Additionally, the resultant water content estimation model is implemented and evaluated in an off-the-shelf Android-based smartphone.
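
    The regression step can be sketched as a 2nd-order polynomial fit of water content against a single color feature. The calibration numbers below (a hue-like feature vs. known water content) are invented for illustration, not the paper's data:

```python
import numpy as np

# Hypothetical calibration set: a hue feature (HSV space) extracted from each
# sample image vs. its known water content (%). All numbers are invented.
hue = np.array([0.05, 0.10, 0.18, 0.27, 0.35, 0.46])
water = np.array([0.0, 5.0, 10.0, 15.0, 20.0, 30.0])

coeffs = np.polyfit(hue, water, deg=2)   # 2nd-order polynomial regression
predict = np.poly1d(coeffs)

print(round(float(predict(0.22)), 1))    # estimated water content, new sample
```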

  19. Information Theory for Correlation Analysis and Estimation of Uncertainty Reduction in Maps and Models

    Directory of Open Access Journals (Sweden)

    J. Florian Wellmann

    2013-04-01

    The quantification and analysis of uncertainties is important in all cases where maps and models of uncertain properties are the basis for further decisions. Once these uncertainties are identified, the logical next step is to determine how they can be reduced. Information theory provides a framework for the analysis of spatial uncertainties when different subregions are considered as random variables. In the work presented here, joint entropy, conditional entropy, and mutual information are applied for a detailed analysis of spatial uncertainty correlations. The aim is to determine (i) which areas in a spatial analysis share information, and (ii) where, and by how much, additional information would reduce uncertainties. As an illustration, a typical geological example is evaluated: the case of a subsurface layer with uncertain depth, shape and thickness. Mutual information and multivariate conditional entropies are determined based on multiple simulated model realisations. Even for this simple case, the measures not only provide a clear picture of uncertainties and their correlations but also give detailed insights into the potential reduction of uncertainties at each position, given additional information at a different location. The methods are directly applicable to other types of spatial uncertainty evaluations, especially where multiple realisations of a model simulation are analysed. In summary, the application of information-theoretic measures opens up the path to a better understanding of spatial uncertainties, and their relationship to information and prior knowledge, for cases where uncertain property distributions are spatially analysed and visualised in maps and models.
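
    The three measures involved (joint entropy, conditional entropy, mutual information) can be computed in a few lines for a pair of discrete "subregions"; the joint distribution below is a made-up two-state example, not from the paper:

```python
import math

def entropy(dist):
    """Shannon entropy in bits of a discrete probability distribution."""
    return -sum(p * math.log2(p) for p in dist if p > 0)

# Made-up joint distribution of two binary "subregions" A and B.
joint = [[0.4, 0.1],
         [0.1, 0.4]]

pA = [sum(row) for row in joint]                    # marginal of A
pB = [sum(col) for col in zip(*joint)]              # marginal of B
H_A, H_B = entropy(pA), entropy(pB)
H_AB = entropy([p for row in joint for p in row])   # joint entropy H(A, B)
mutual_info = H_A + H_B - H_AB                      # I(A; B): shared information
cond_H = H_AB - H_A                                 # H(B | A): residual uncertainty

print(round(mutual_info, 3), round(cond_H, 3))  # → 0.278 0.722
```

    Here knowing subregion A removes about 0.28 bits of the 1 bit of uncertainty in B, which is exactly the "shared information" the abstract refers to.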

  20. Urban Link Travel Time Prediction Based on a Gradient Boosting Method Considering Spatiotemporal Correlations

    Directory of Open Access Journals (Sweden)

    Faming Zhang

    2016-11-01

    The prediction of travel times is challenging because of the sparseness of real-time traffic data and the intrinsic uncertainty of travel on congested urban road networks. We propose a new gradient-boosted regression tree method to accurately predict travel times. This model accounts for spatiotemporal correlations extracted from historical and real-time traffic data for adjacent and target links. The method delivers high prediction accuracy by combining many simple regression trees, each of which performs poorly on its own; it corrects the errors found in existing models, improving prediction accuracy. Our spatiotemporal gradient-boosted regression tree model was verified in experiments. The training data were obtained from big data reflecting historic traffic conditions collected by probe vehicles in Wuhan from January to May 2014. Real-time data were extracted from 11 weeks of GPS records collected in Wuhan from 5 May 2014 to 20 July 2014. Based on these data, we predicted link travel time for the period from 21 July 2014 to 25 July 2014. Experiments showed that our proposed spatiotemporal gradient-boosted regression tree model obtained better results than gradient boosting, random forest, or autoregressive integrated moving average approaches. Furthermore, these results indicate the advantages of our model for urban link travel time prediction.
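
    The core mechanism, summing many weak trees each fitted to the current residuals, can be sketched in pure numpy with depth-one stumps. The data are synthetic (a rush-hour peak in travel time); the paper's spatiotemporal features and probe-vehicle data are not reproduced:

```python
import numpy as np

def fit_stump(x, residual):
    """Best single-split (depth-one) regression tree on a 1-D feature."""
    best = None
    for t in np.unique(x):
        mask = x <= t
        left, right = residual[mask], residual[~mask]
        if left.size == 0 or right.size == 0:
            continue
        pred = np.where(mask, left.mean(), right.mean())
        err = float(((residual - pred) ** 2).sum())
        if best is None or err < best[0]:
            best = (err, t, left.mean(), right.mean())
    _, t, lval, rval = best
    return lambda z: np.where(z <= t, lval, rval)

def gradient_boost(x, y, n_rounds=100, lr=0.3):
    """Fit a shrunken sum of stumps to successive residuals (squared loss)."""
    base = y.mean()
    pred = np.full_like(y, base)
    stumps = []
    for _ in range(n_rounds):
        stump = fit_stump(x, y - pred)   # fit the current residuals
        pred = pred + lr * stump(x)      # take a shrunken step
        stumps.append(stump)
    return lambda z: base + lr * sum(s(z) for s in stumps)

# Synthetic data: link travel time (min) vs. time of day, with a rush-hour peak.
rng = np.random.default_rng(1)
hours = rng.uniform(0, 24, 200)
times = 10 + 8 * np.exp(-((hours - 8.5) ** 2) / 2) + rng.normal(0, 0.5, 200)

model = gradient_boost(hours, times)
rmse = float(np.sqrt(((times - model(hours)) ** 2).mean()))
print(round(rmse, 2))  # training RMSE, well below the data's raw spread
```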

  1. Some simple applications of probability models to birth intervals

    International Nuclear Information System (INIS)

    Shrestha, G.

    1987-07-01

    An attempt has been made in this paper to apply some simple probability models to birth intervals under the assumption of constant fecundability and varying fecundability among women. The parameters of the probability models are estimated by using the method of moments and the method of maximum likelihood. (author). 9 refs, 2 tabs

  2. A Novel Multilayer Correlation Maximization Model for Improving CCA-Based Frequency Recognition in SSVEP Brain-Computer Interface.

    Science.gov (United States)

    Jiao, Yong; Zhang, Yu; Wang, Yu; Wang, Bei; Jin, Jing; Wang, Xingyu

    2018-05-01

    Multiset canonical correlation analysis (MsetCCA) has been successfully applied to optimize the reference signals by extracting common features from multiple sets of electroencephalogram (EEG) data for steady-state visual evoked potential (SSVEP) recognition in brain-computer interface applications. To avoid extracting possible noise components as common features, this study proposes a sophisticated extension of MsetCCA, called the multilayer correlation maximization (MCM) model, for further improving SSVEP recognition accuracy. MCM combines advantages of both CCA and MsetCCA by carrying out three layers of correlation maximization processes. The first layer extracts the stimulus frequency-related information using CCA between EEG samples and sine-cosine reference signals. The second layer learns reference signals by extracting the common features with MsetCCA. The third layer re-optimizes the reference signal set using CCA with sine-cosine reference signals again. An experimental study is implemented to validate the effectiveness of the proposed MCM model in comparison with the standard CCA and MsetCCA algorithms. The superior performance of MCM demonstrates its promising potential for the development of an improved SSVEP-based brain-computer interface.
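
    The first-layer step, scoring EEG against sine-cosine references with CCA, can be sketched with an SVD-based canonical correlation on a synthetic two-channel signal. The MsetCCA layer and the third-layer refinement are not reproduced here:

```python
import numpy as np

def max_canonical_correlation(X, Y, tol=1e-10):
    """Largest canonical correlation between the column spaces of X and Y:
    orthonormalize each block via SVD, then take the top singular value of
    the cross-product (a standard CCA formulation)."""
    X = X - X.mean(axis=0)
    Y = Y - Y.mean(axis=0)
    def orthonormal_basis(M):
        U, s, _ = np.linalg.svd(M, full_matrices=False)
        return U[:, s > tol]
    Qx, Qy = orthonormal_basis(X), orthonormal_basis(Y)
    return float(np.linalg.svd(Qx.T @ Qy, compute_uv=False)[0])

# Simulated 2-channel recording with a 10 Hz SSVEP component plus noise.
fs, T = 250, 2.0
t = np.arange(0, T, 1 / fs)
rng = np.random.default_rng(2)
eeg = np.column_stack([
    np.sin(2 * np.pi * 10 * t) + 0.5 * rng.normal(size=t.size),
    np.cos(2 * np.pi * 10 * t) + 0.5 * rng.normal(size=t.size),
])

# Score the recording against sine-cosine references at candidate frequencies.
scores = {f: max_canonical_correlation(
              eeg, np.column_stack([np.sin(2 * np.pi * f * t),
                                    np.cos(2 * np.pi * f * t)]))
          for f in (8.0, 10.0, 12.0)}
print(max(scores, key=scores.get))  # → 10.0 (recognized stimulus frequency)
```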

  3. Underground economy modelling: simple models with complicated dynamics

    OpenAIRE

    Albu, Lucian-Liviu

    2003-01-01

    The paper aims to model the underground economy using two different models: one based on the labor supply method and a generalized model for the allocation of time. The model based on the labor supply method is conceived as a simulating one in order to determine some reasonable thresholds of the underground sector extension based only on the available macroeconomic statistical data. The generalized model for the allocation of time is a model based on direct approach which estimates the underg...

  4. Flexible Bayesian Dynamic Modeling of Covariance and Correlation Matrices

    KAUST Repository

    Lan, Shiwei; Holbrook, Andrew; Fortin, Norbert J.; Ombao, Hernando; Shahbaba, Babak

    2017-01-01

    Modeling covariance (and correlation) matrices is a challenging problem due to the large dimensionality and positive-definiteness constraint. In this paper, we propose a novel Bayesian framework based on decomposing the covariance matrix

  5. Probabilistic Model-based Background Subtraction

    DEFF Research Database (Denmark)

    Krüger, Volker; Anderson, Jakob; Prehn, Thomas

    2005-01-01

    is the correlation between pixels. In this paper we introduce a model-based background subtraction approach which facilitates prior knowledge of pixel correlations for clearer and better results. Model knowledge is being learned from good training video data, the data is stored for fast access in a hierarchical...

  6. SIMPL: A Simplified Model-Based Program for the Analysis and Visualization of Groundwater Rebound in Abandoned Mines to Prevent Contamination of Water and Soils by Acid Mine Drainage

    Directory of Open Access Journals (Sweden)

    Sung-Min Kim

    2018-05-01

Cessation of dewatering following underground mine closure typically results in groundwater rebound, because mine voids and surrounding strata undergo flooding up to the levels of the decant points, such as shafts and drifts. SIMPL (Simplified groundwater program In Mine workings using the Pipe equation and Lumped parameter model), a simplified lumped-parameter-model-based program for predicting groundwater levels in abandoned mines, is presented herein. The program comprises a simulation engine module, a 3D visualization module, and a graphical user interface, which aids data processing, analysis, and visualization of results. The 3D viewer facilitates effective visualization of the predicted groundwater level rebound phenomenon together with a topographic map, mine drifts, goafs, and geological properties from borehole data. SIMPL is applied to data from the Dongwon coal mine and Dalsung copper mine in Korea, with strong similarities in simulated and observed results. By considering mine workings and interpond connections, SIMPL can thus be used to effectively analyze and visualize groundwater rebound. In addition, the predictions by SIMPL can be utilized to prevent the surrounding environment (water and soil) from being polluted by acid mine drainage.

  7. Simple model for decay of laser generated shock waves

    International Nuclear Information System (INIS)

    Trainor, R.J.

    1980-01-01

    A simple model is derived to calculate the hydrodynamic decay of laser-generated shock waves. Comparison with detailed hydrocode simulations shows good agreement between calculated time evolution of shock pressure, position, and instantaneous pressure profile. Reliability of the model decreases in regions of the target where superthermal-electron preheat effects become comparable to shock effects

  8. Simple Crosscutting Concerns Are Not So Simple: Analysing Variability in Large-Scale Idioms-Based Implementations

    NARCIS (Netherlands)

    Bruntink, M.; Van Deursen, A.; d’Hondt, M.; Tourwé, T.

    2007-01-01

    This paper describes a method for studying idioms-based implementations of crosscutting concerns, and our experiences with it in the context of a real-world, large-scale embedded software system. In particular, we analyse a seemingly simple concern, tracing, and show that it exhibits significant

  9. A simple dynamic subgrid-scale model for LES of particle-laden turbulence

    Science.gov (United States)

    Park, George Ilhwan; Bassenne, Maxime; Urzay, Javier; Moin, Parviz

    2017-04-01

In this study, a dynamic model for large-eddy simulations is proposed in order to describe the motion of small inertial particles in turbulent flows. The model is simple, involves no significant computational overhead, contains no adjustable parameters, and is flexible enough to be deployed in any type of flow solver and grid, including unstructured setups. The approach is based on the use of elliptic differential filters to model the subgrid-scale velocity. The only model parameter, which is related to the nominal filter width, is determined dynamically by imposing consistency constraints on the estimated subgrid energetics. The performance of the model is tested in large-eddy simulations of homogeneous-isotropic turbulence laden with particles, where improved agreement with direct numerical simulation results is observed in the dispersed-phase statistics, including particle acceleration, local carrier-phase velocity, and preferential-concentration metrics.

  10. Simple models of district heating systems for load and demand side management and operational optimisation; Simple modeller for fjernvarmesystemer med henblik pae belastningsudjaevning og driftsoptimering

    Energy Technology Data Exchange (ETDEWEB)

    Boehm, B. [Technical Univ. of Denmark, Dept. of Mechanical Engineering, Kgs. Lyngby (Denmark); Larsen, H.V. [Risoe National Lab., System Analysis Dept., Roskilde (DK)

    2004-12-01

The purpose of this research project has been to further develop and test simple (aggregated) models of district heating (DH) systems for simulation and operational optimization, and to investigate the influence of Load Management and Demand Side Management (DSM) on the total operational costs. The work is based on physical-mathematical modelling and simulation of DH systems, and is a continuation of previous EFP-96 work. In the present EFP-2001 project the goals have been to improve the Danish method of aggregation by addressing the problem of aggregation of pressure losses, and to test the methods on a much larger data set than in the EFP-96 project. In order to verify the models it is crucial to have good data at one's disposal. Full information on the heat loads and temperatures is needed, not only at the DH plant but also at every consumer (building), and therefore only a few DH systems in Denmark can supply such data. (BA)

  11. Redundant correlation effect on personalized recommendation

    Science.gov (United States)

    Qiu, Tian; Han, Teng-Yue; Zhong, Li-Xin; Zhang, Zi-Ke; Chen, Guang

    2014-02-01

    The high-order redundant correlation effect is investigated for a hybrid algorithm of heat conduction and mass diffusion (HHM), through both heat conduction biased (HCB) and mass diffusion biased (MDB) correlation redundancy elimination processes. The HCB and MDB algorithms do not introduce any additional tunable parameters, but keep the simple character of the original HHM. Based on two empirical datasets, the Netflix and MovieLens, the HCB and MDB are found to show better recommendation accuracy for both the overall objects and the cold objects than the HHM algorithm. Our work suggests that properly eliminating the high-order redundant correlations can provide a simple and effective approach to accurate recommendation.

  12. Variance-based sensitivity indices for stochastic models with correlated inputs

    Energy Technology Data Exchange (ETDEWEB)

    Kala, Zdeněk [Brno University of Technology, Faculty of Civil Engineering, Department of Structural Mechanics Veveří St. 95, ZIP 602 00, Brno (Czech Republic)

    2015-03-10

    The goal of this article is the formulation of the principles of one of the possible strategies in implementing correlation between input random variables so as to be usable for algorithm development and the evaluation of Sobol’s sensitivity analysis. With regard to the types of stochastic computational models, which are commonly found in structural mechanics, an algorithm was designed for effective use in conjunction with Monte Carlo methods. Sensitivity indices are evaluated for all possible permutations of the decorrelation procedures for input parameters. The evaluation of Sobol’s sensitivity coefficients is illustrated on an example in which a computational model was used for the analysis of the resistance of a steel bar in tension with statistically dependent input geometric characteristics.
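For the uncorrelated baseline, first-order Sobol' indices are routinely estimated with Monte Carlo pick-and-freeze sampling. A minimal sketch of that baseline (Saltelli/Jansen-style estimator; the toy model and U(0,1) inputs are illustrative, and the paper's decorrelation step for statistically dependent inputs is not shown):

```python
import numpy as np

def sobol_first_order(f, d, n=100_000, rng=None):
    """Monte Carlo estimate of first-order Sobol' indices for a model f
    with d independent U(0,1) inputs, using matrices A, B and the
    column-swapped matrices AB_i (pick-and-freeze scheme)."""
    rng = rng or np.random.default_rng(0)
    A = rng.random((n, d))
    B = rng.random((n, d))
    fA, fB = f(A), f(B)
    var = np.var(np.concatenate([fA, fB]))
    S = np.empty(d)
    for i in range(d):
        ABi = A.copy()
        ABi[:, i] = B[:, i]          # freeze all inputs except the i-th
        S[i] = np.mean(fB * (f(ABi) - fA)) / var
    return S

# Toy linear response; the exact indices are 0.8 and 0.2
f = lambda X: 2.0 * X[:, 0] + X[:, 1]
print(sobol_first_order(f, 2))  # ≈ [0.8, 0.2]
```

With correlated inputs, as in the article, the sampling scheme must additionally permute the decorrelation ordering of the input parameters, which this sketch omits.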

  13. Variance-based sensitivity indices for stochastic models with correlated inputs

    International Nuclear Information System (INIS)

    Kala, Zdeněk

    2015-01-01

    The goal of this article is the formulation of the principles of one of the possible strategies in implementing correlation between input random variables so as to be usable for algorithm development and the evaluation of Sobol’s sensitivity analysis. With regard to the types of stochastic computational models, which are commonly found in structural mechanics, an algorithm was designed for effective use in conjunction with Monte Carlo methods. Sensitivity indices are evaluated for all possible permutations of the decorrelation procedures for input parameters. The evaluation of Sobol’s sensitivity coefficients is illustrated on an example in which a computational model was used for the analysis of the resistance of a steel bar in tension with statistically dependent input geometric characteristics

  14. Generalized correlation of latent heats of vaporization of coal liquid model compounds between their freezing points and critical points

    Energy Technology Data Exchange (ETDEWEB)

    Sivaraman, A.; Kobuyashi, R.; Mayee, J.W.

    1984-02-01

Based on Pitzer's three-parameter corresponding-states principle, the authors have developed a correlation of the latent heat of vaporization of aromatic coal liquid model compounds for a temperature range from the freezing point to the critical point. An expansion of the form L = L_0 + ω·L_1 is used for the dimensionless latent heat of vaporization. This model utilizes a nonanalytic functional form based on results derived from renormalization-group theory of fluids in the vicinity of the critical point. A simple expression for the latent heat of vaporization, L = D_1·ε^0.3333 + D_2·ε^0.8333 + D_4·ε^1.2083 + E_1·ε + E_2·ε^2 + E_3·ε^3, is cast in a corresponding-states-principle correlation for coal liquid compounds. Benzene, the basic constituent of the functional groups of the multi-ring coal liquid compounds, is used as the reference compound in the present correlation. This model works very well at both low and high reduced temperatures approaching the critical point (0.02 < ε = (T_c − T)/T_c < 0.69). About 16 compounds, including single-, two-, and three-ring compounds, have been tested, and the percent root-mean-square deviations between reported and estimated latent heats of vaporization are 0.42 to 5.27%. Tables of the coefficients of L_0 and L_1 are presented. The contributing terms of the latent heat of vaporization function are also presented in a table for small increments of ε.
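The expansion is straightforward to evaluate once the per-compound coefficients are known. A minimal sketch (the coefficient values and critical temperature below are illustrative placeholders, not the paper's fitted values):

```python
import numpy as np

def latent_heat(T, Tc, D, E):
    """Dimensionless latent heat of vaporization of the form
    L = D1*eps**0.3333 + D2*eps**0.8333 + D4*eps**1.2083
        + E1*eps + E2*eps**2 + E3*eps**3,  with eps = (Tc - T)/Tc.
    D = (D1, D2, D4) and E = (E1, E2, E3) are fitted per compound;
    the numbers used below are placeholders, not the paper's values."""
    eps = (Tc - T) / Tc
    D1, D2, D4 = D
    E1, E2, E3 = E
    return (D1 * eps**0.3333 + D2 * eps**0.8333 + D4 * eps**1.2083
            + E1 * eps + E2 * eps**2 + E3 * eps**3)

Tc = 562.0  # K, roughly benzene's critical temperature
T = np.linspace(300.0, 560.0, 5)
L = latent_heat(T, Tc, D=(6.0, -3.0, 1.5), E=(2.0, -1.0, 0.5))
print(L)  # decreases toward Tc, vanishing at the critical point
```

The ε^0.3333 leading term is what builds in the nonanalytic critical behavior: L goes to zero at T_c with infinite slope, as renormalization-group theory requires.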

  15. A simple flow-concentration modelling method for integrating water ...

    African Journals Online (AJOL)

    A simple flow-concentration modelling method for integrating water quality and ... flow requirements are assessed for maintenance low flow, drought low flow ... the instream concentrations of chemical constituents that will arise from different ...

  16. Rule-based decision making model

    International Nuclear Information System (INIS)

    Sirola, Miki

    1998-01-01

A rule-based decision making model is designed in the G2 environment. A theoretical and methodological frame for the model is composed and motivated. The rule-based decision making model is based on object-oriented modelling, knowledge engineering and decision theory. The idea of a safety objective tree is utilized. Advanced rule-based methodologies are applied. A general decision making model, the 'decision element', is constructed. The strategy planning of the decision element is based on, e.g., value theory and utility theory. A hypothetical process model is built to give input data for the decision element. The basic principle of the object model in decision making is division into tasks. Probability models are used in characterizing component availabilities. Bayes' theorem is used to recalculate the probability figures when new information is obtained. The model includes simple learning features to save the solution path. A decision analytic interpretation is given to the decision making process. (author)
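The Bayes step mentioned above is a one-line posterior update. A minimal sketch of updating a component availability figure when a new, imperfect test result arrives (the prior and likelihood numbers are illustrative, not from the report):

```python
def bayes_update(prior_up, p_pass_given_up, p_pass_given_down, passed):
    """P(component up | test outcome) via Bayes' theorem, for a test
    that passes with different probabilities depending on the true state."""
    if passed:
        like_up, like_down = p_pass_given_up, p_pass_given_down
    else:
        like_up, like_down = 1 - p_pass_given_up, 1 - p_pass_given_down
    num = like_up * prior_up
    return num / (num + like_down * (1 - prior_up))

p = 0.90                                          # prior availability
p = bayes_update(p, 0.95, 0.20, passed=False)     # a failed test lowers it
print(p)  # 0.36
```

In a rule-based model, each such update would be triggered by a rule firing on new plant information, with the posterior fed back as the next prior.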

  17. New perspectives in face correlation: discrimination enhancement in face recognition based on iterative algorithm

    Science.gov (United States)

    Wang, Q.; Alfalou, A.; Brosseau, C.

    2016-04-01

Here, we report a brief review of recent developments in correlation algorithms. Several implementation schemes and specific applications proposed in recent years are also given to illustrate powerful applications of these methods. Following a discussion and comparison of the implementation of these schemes, we believe that all-numerical implementation is the most practical choice for application of the correlation method, because the advantages of optical processing cannot compensate for the technical and/or financial cost of an optical implementation platform. We also present a simple iterative algorithm to optimize the training images of composite correlation filters. By making use of three or four iterations, the peak-to-correlation energy (PCE) value of the correlation plane can be significantly enhanced. A simulation test using the Pointing Head Pose Image Database (PHPID) illustrates the effectiveness of this statement. Our method can be applied in many composite filters based on linear composition of training images as an optimization means.

  18. A new Expert Finding model based on Term Correlation Matrix

    Directory of Open Access Journals (Sweden)

    Ehsan Pornour

    2015-09-01

Due to the enormous volume of unstructured information available on the Web and inside organizations, finding an answer to a knowledge need in a short time is difficult. For this reason, beside search engines, which do not consider users' individual characteristics, recommender systems were created, which use a user's previous activities and other individual characteristics to help users find needed knowledge. Recommender system usage is increasing every day. Expert finder systems go further: by introducing expert people instead of recommending information, they let users put their questions to experts directly. Contact with experts transfers not only information but also experience and insight, and thus knowledge. In this paper we used university professors' academic resumes as expert profiles and then proposed a new expert finding model that recommends experts in response to a user's query. We used a term correlation matrix, the vector space model, and the PageRank algorithm, and propose a new hybrid model which outperforms conventional methods. This model can be used on the Internet and in organizations and universities where a resume dataset for experts is available.
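The vector-space layer of such a hybrid is easy to sketch: rank expert profiles against a query by TF-IDF cosine similarity. A minimal illustration (toy resumes; the term-correlation-matrix and PageRank layers of the proposed model are omitted here):

```python
import numpy as np

def tfidf(docs, vocab):
    """Term-frequency * smoothed inverse-document-frequency matrix."""
    tf = np.array([[d.split().count(t) for t in vocab] for d in docs], float)
    df = (tf > 0).sum(axis=0)
    idf = np.log((1 + len(docs)) / (1 + df)) + 1.0
    return tf * idf

resumes = ["machine learning neural networks",
           "databases query optimization",
           "machine learning databases"]
query = "machine learning"
vocab = sorted({w for d in resumes + [query] for w in d.split()})

M = tfidf(resumes, vocab)                 # one row per expert profile
q = tfidf([query], vocab)[0]              # query vector in the same space
scores = (M @ q) / (np.linalg.norm(M, axis=1) * np.linalg.norm(q) + 1e-12)
ranking = np.argsort(-scores)
print(ranking)  # experts ordered by cosine similarity to the query
```

A full expert finder would then re-weight these cosine scores with term correlations (to catch related vocabulary) and an authority score such as PageRank.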

  19. Read-only high accuracy volume holographic optical correlator

    Science.gov (United States)

    Zhao, Tian; Li, Jingming; Cao, Liangcai; He, Qingsheng; Jin, Guofan

    2011-10-01

A read-only volume holographic correlator (VHC) is proposed. After the recording of all of the correlation database pages by angular multiplexing, a stand-alone read-only high-accuracy VHC is separated from the VHC recording facilities, which include the high-power laser and the angular multiplexing system. The stand-alone VHC has its own low-power readout laser and a very compact and simple structure. Since two lasers are employed, for recording and readout respectively, the optical alignment of the laser illumination on the SLM is very sensitive. The two-dimensional angular tolerance is analyzed based on the theoretical model of the volume holographic correlator. The experimental demonstration of the proposed read-only VHC is introduced and discussed.

  20. Evaluation of a simple, point-scale hydrologic model in simulating soil moisture using the Delaware environmental observing system

    Science.gov (United States)

    Legates, David R.; Junghenn, Katherine T.

    2018-04-01

Many local weather station networks that measure a number of meteorological variables (i.e., mesonetworks) have recently been established, with soil moisture occasionally being part of the suite of measured variables. These mesonetworks provide data from which detailed estimates of various hydrological parameters, such as precipitation and reference evapotranspiration, can be made which, when coupled with simple surface characteristics available from soil surveys, can be used to obtain estimates of soil moisture. The question is: can meteorological data be used with a simple hydrologic model to estimate daily soil moisture accurately at a mesonetwork site? Using a state-of-the-art mesonetwork across the US State of Delaware that also includes soil moisture measurements, the efficacy of a simple, modified Thornthwaite/Mather-based daily water balance model built on these mesonetwork observations to estimate site-specific soil moisture is determined. Results suggest that the model works reasonably well for most well-drained sites and provides good qualitative estimates of measured soil moisture, often near the accuracy of the soil moisture instrumentation. The model has particular trouble in that it cannot properly simulate the slow drainage that occurs in poorly drained soils after heavy rains; interception loss, resulting from grass not being short-cropped as expected, also adversely affects the simulation. However, the model could be tuned to accommodate some non-standard siting characteristics.
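A Thornthwaite/Mather-style daily water balance reduces to a single soil-moisture bucket driven by precipitation and reference ET. A minimal sketch in that spirit (the capacity, initial store, and forcing values are illustrative; the evaluated model treats drainage and interception in more detail):

```python
import math

def water_balance(precip, pet, capacity=100.0, sm0=50.0):
    """Daily soil moisture (mm) from daily precipitation and reference
    evapotranspiration (mm). Wet days recharge the store and spill any
    excess; dry days draw it down along the Thornthwaite/Mather
    exponential soil-moisture retention curve."""
    sm, out = sm0, []
    for p, e in zip(precip, pet):
        if p >= e:
            sm = min(capacity, sm + (p - e))
        else:
            sm = sm * math.exp(-(e - p) / capacity)
        out.append(sm)
    return out

sm = water_balance(precip=[0, 20, 0, 0, 5], pet=[4, 3, 5, 5, 4])
print(sm)  # rises after the 20 mm rain day, then decays
```

Driving such a bucket with mesonetwork precipitation and reference-ET estimates is essentially the experiment described above; the comparison is then against the station's in-situ soil moisture probes.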

  1. Unidimensional factor models imply weaker partial correlations than zero-order correlations.

    Science.gov (United States)

    van Bork, Riet; Grasman, Raoul P P P; Waldorp, Lourens J

    2018-06-01

    In this paper we present a new implication of the unidimensional factor model. We prove that the partial correlation between two observed variables that load on one factor given any subset of other observed variables that load on this factor lies between zero and the zero-order correlation between these two observed variables. We implement this result in an empirical bootstrap test that rejects the unidimensional factor model when partial correlations are identified that are either stronger than the zero-order correlation or have a different sign than the zero-order correlation. We demonstrate the use of the test in an empirical data example with data consisting of fourteen items that measure extraversion.
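The implication is easy to check numerically: simulate a one-factor model and compare a partial correlation with the corresponding zero-order correlation. A minimal sketch (three indicators, illustrative loadings, partialling out the third indicator):

```python
import numpy as np

rng = np.random.default_rng(1)
n = 200_000
eta = rng.normal(size=n)                    # common factor
lam = np.array([0.8, 0.7, 0.6])             # loadings (illustrative)
# Unit-variance indicators: X_j = lam_j * eta + sqrt(1 - lam_j^2) * noise
X = np.outer(eta, lam) + rng.normal(size=(n, 3)) * np.sqrt(1 - lam**2)

R = np.corrcoef(X, rowvar=False)
r12, r13, r23 = R[0, 1], R[0, 2], R[1, 2]
# Partial correlation of X1 and X2 given X3
partial = (r12 - r13 * r23) / np.sqrt((1 - r13**2) * (1 - r23**2))
print(r12, partial)   # 0 < partial < r12, as the theorem states
```

The paper's bootstrap test turns this inequality around: observing a partial correlation stronger than, or of opposite sign to, the zero-order correlation is evidence against unidimensionality.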

  2. Modelling Bose-Einstein correlations at LEP-2

    International Nuclear Information System (INIS)

    Loennblad, L.

    1998-01-01

    Some pros and cons of different strategies for modelling Bose-Einstein correlations in event generators for fully hadronic WW events at LEP-2 are discussed. A few new algorithms based on shifting final-state momenta of identical bosons in WW events generated by PYTHIA are also presented and the resulting predictions for the effects on the W mass measurement are discussed. (author)

  3. Simple inflationary quintessential model. II. Power law potentials

    Science.gov (United States)

    de Haro, Jaume; Amorós, Jaume; Pan, Supriya

    2016-09-01

The present work is a sequel to our previous work [Phys. Rev. D 93, 084018 (2016)], which depicted a simple version of an inflationary quintessential model whose inflationary stage was described by a Higgs-type potential and whose quintessential phase arose from an exponential potential. Additionally, the model predicted a nonsingular universe in the past which was geodesically past incomplete. Further, it was also found that the model agrees with the Planck 2013 data when running is allowed. But this model provides a theoretical value of the running which is far smaller than the central value of the best fit in the ns, r, αs≡d ns/d ln k parameter space, where ns, r, αs respectively denote the spectral index, tensor-to-scalar ratio and the running of the spectral index associated with any inflationary model; consequently, to analyze the viability of the model one has to focus on the two-dimensional marginalized confidence level in the allowed domain of the plane (ns, r) without taking the running into account. Unfortunately, such analysis shows that this model does not pass this test. However, in this sequel we propose a family of models governed by a single parameter α ∈ [0, 1] which yields another "inflationary quintessential model" in which the inflation and quintessence regimes are respectively described by a power law potential and a cosmological constant. The model is also nonsingular although geodesically past incomplete, as in the cited model. Moreover, the present one is found to be simpler than the previous model and it is in excellent agreement with the observational data. In fact, we note that, unlike the previous model, a large number of the models of this family with α ∈ [0, 1/2) match both Planck 2013 and Planck 2015 data without allowing the running. Thus, the properties in the current family of models compared to its past companion justify its need for a better cosmological model with the successive

  4. Prediction of potential drug targets based on simple sequence properties

    Directory of Open Access Journals (Sweden)

    Lai Luhua

    2007-09-01

Background: During the past decades, research and development in drug discovery have attracted much attention and effort. However, only 324 drug targets are known for clinical drugs up to now. Identifying potential drug targets is the first step in the process of modern drug discovery for developing novel therapeutic agents. Therefore, the identification and validation of new and effective drug targets are of great value for drug discovery in both academia and the pharmaceutical industry. If a protein can be predicted in advance for its potential application as a drug target, the drug discovery process targeting this protein will be greatly sped up. In the current study, based on the properties of known drug targets, we have developed a sequence-based drug target prediction method for fast identification of novel drug targets. Results: Based on simple physicochemical properties extracted from the protein sequences of known drug targets, several support vector machine models were constructed in this study. The best model can distinguish currently known drug targets from non-drug targets at an accuracy of 84%. Using this model, potential protein drug targets of human origin from Swiss-Prot were predicted, some of which have already attracted much attention as potential drug targets in pharmaceutical research. Conclusion: We have developed a drug target prediction method based solely on protein sequence information, without knowledge of family/domain annotation or protein 3D structure. This method can be applied in novel drug target identification and validation, as well as genome-scale drug target prediction.
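The core idea, a binary classifier over simple sequence-derived physicochemical features, can be sketched compactly. The study uses support vector machines; plain logistic regression stands in for the classifier below, and the "features" are synthetic stand-ins, not real Swiss-Prot descriptors:

```python
import numpy as np

rng = np.random.default_rng(0)
n, d = 400, 5
# Synthetic feature matrix: imagine columns such as sequence length,
# mean hydrophobicity, charge, etc. (illustrative only)
X = rng.normal(size=(n, d))
w_true = rng.normal(size=d)
# Labels: target (1) vs non-target (0), linearly separable up to noise
y = (X @ w_true + 0.3 * rng.normal(size=n) > 0).astype(float)

# Logistic regression by batch gradient descent
w = np.zeros(d)
for _ in range(500):
    p = 1.0 / (1.0 + np.exp(-(X @ w)))
    w -= 0.1 * X.T @ (p - y) / n

acc = np.mean(((X @ w) > 0) == y)
print(acc)  # high, since the toy data are separable by construction
```

On real data the 84% accuracy quoted above would come from cross-validating such a model on known targets versus a curated non-target set.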

  5. A simple multistage closed-(box+reservoir) model of chemical evolution

    Directory of Open Access Journals (Sweden)

    Caimmi R.

    2011-01-01

Simple closed-box (CB) models of chemical evolution are extended in two respects, namely (i) simple closed-(box+reservoir) (CBR) models allowing gas outflow from the box into the reservoir (Hartwick 1976) or gas inflow into the box from the reservoir (Caimmi 2007) with rate proportional to the star formation rate, and (ii) simple multistage closed-(box+reservoir) (MCBR) models allowing different stages of evolution characterized by different inflow or outflow rates. The theoretical differential oxygen abundance distribution (TDOD) predicted by the model remains close to a continuous broken straight line. An application is made in which a fictitious sample is built up from two distinct samples of halo stars and taken as representative of the inner Galactic halo. The related empirical differential oxygen abundance distribution (EDOD) is represented, to an acceptable extent, as a continuous broken line for two viable [O/H]-[Fe/H] empirical relations. The slopes and intercepts of the regression lines are determined, and then used as input parameters to MCBR models. Within the errors (±σ), regression-line slopes correspond to a large inflow during the earlier stage of evolution and to low or moderate outflow during the subsequent stages. A possible inner halo - outer (metal-poor) bulge connection is also briefly discussed. Quantitative results cannot be considered for applications to the inner Galactic halo unless selection effects and disk contamination are removed from halo samples, and discrepancies between different oxygen abundance determination methods are explained.
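The CB baseline that the CBR/MCBR models extend has a well-known closed form: with yield p and gas fraction μ, the gas metallicity is Z = p·ln(1/μ), and the stellar mass formed below metallicity Z is 1 − exp(−Z/p). A minimal sketch of that bookkeeping (the yield value is illustrative):

```python
import math

def gas_metallicity(mu, p=0.01):
    """Closed-box gas metallicity Z = p * ln(1/mu) at gas fraction mu."""
    return p * math.log(1.0 / mu)

def stars_below(Z, p=0.01):
    """Cumulative stellar mass fraction formed at metallicity < Z
    in the closed-box model: 1 - exp(-Z/p)."""
    return 1.0 - math.exp(-Z / p)

for mu in (0.8, 0.4, 0.1):
    Z = gas_metallicity(mu)
    print(mu, Z, stars_below(Z))  # stars_below equals 1 - mu, as it must
```

The two formulas are consistent by construction: substituting Z = p·ln(1/μ) gives a cumulative stellar fraction of exactly 1 − μ. The reservoir terms of the CBR/MCBR models modify this exponential differential distribution into the broken-line TDOD discussed above.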

  6. Genealogies in simple models of evolution

    International Nuclear Information System (INIS)

    Brunet, Éric; Derrida, Bernard

    2013-01-01

    We review the statistical properties of the genealogies of a few models of evolution. In the asexual case, selection leads to coalescence times which grow logarithmically with the size of the population, in contrast with the linear growth of the neutral case. Moreover for a whole class of models, the statistics of the genealogies are those of the Bolthausen–Sznitman coalescent rather than the Kingman coalescent in the neutral case. For sexual reproduction in the neutral case, the time to reach the first common ancestors for the whole population and the time for all individuals to have all their ancestors in common are also logarithmic in the population size, as predicted by Chang in 1999. We discuss how these times are modified by introducing selection in a simple way. (paper)

  7. pyhector: A Python interface for the simple climate model Hector

    Energy Technology Data Exchange (ETDEWEB)

Willner, Sven N.; Hartin, Corinne; Gieseke, Robert

    2017-04-01

Pyhector is a Python interface for the simple climate model Hector (Hartin et al. 2015), developed in C++. Simple climate models like Hector can, for instance, be used in the analysis of scenarios within integrated assessment models like GCAM, in the emulation of complex climate models, and in uncertainty analyses. Hector is an open-source, object-oriented, simple global climate carbon-cycle model. Its carbon cycle consists of a one-pool atmosphere, three terrestrial pools which can be broken down into finer biomes or regions, and four carbon pools in the ocean component. The terrestrial carbon cycle includes primary production and respiration fluxes. The ocean carbon cycle circulates carbon via a simplified thermohaline circulation, calculating air-sea fluxes as well as the marine carbonate system (Hartin et al. 2016). The model input is time series of greenhouse gas emissions; as example scenarios, the Pyhector package contains the Representative Concentration Pathways (RCPs). These were developed to cover the range of baseline and mitigation emissions scenarios and are widely used in climate change research and model intercomparison projects. Using DataFrames from the Python library Pandas (McKinney 2010) as a data structure for the scenarios simplifies generating and adapting scenarios. Other parameters of the Hector model can easily be modified when running the model. Pyhector can be installed using pip from the Python Package Index. Source code and issue tracker are available in Pyhector's GitHub repository, and documentation is provided through Read the Docs. Usage examples are also contained in the repository as a Jupyter Notebook (Pérez and Granger 2007; Kluyver et al. 2016). Courtesy of the Mybinder project, the example Notebook can also be executed and modified without installing Pyhector locally.
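To make "simple climate model" concrete without depending on the pyhector package itself, here is a toy emissions-to-warming chain in the same spirit: one first-order atmospheric carbon pool plus a lagged energy-balance temperature. This is not Hector's or pyhector's API, and every constant is an illustrative round number:

```python
import math

def run_simple_climate(emissions, c0=280.0, tau=50.0, lam=0.8, heat_tau=10.0):
    """Toy simple climate model: annual emissions (GtC/yr) ->
    yearly (CO2 ppm, warming K). One atmospheric carbon pool with
    first-order uptake (timescale tau), logarithmic CO2 forcing,
    and a lagged temperature response (timescale heat_tau)."""
    ppm_per_gtc = 1.0 / 2.12          # airborne conversion, rough value
    c, t, out = c0, 0.0, []
    for e in emissions:
        c += e * ppm_per_gtc - (c - c0) / tau   # carbon pool update
        forcing = 5.35 * math.log(c / c0)       # W/m^2, standard CO2 fit
        t += (lam * forcing - t) / heat_tau     # lagged temperature
        out.append((c, t))
    return out

result = run_simple_climate([10.0] * 100)       # constant-emission scenario
print(result[-1])                               # CO2 and warming after 100 yr
```

Hector replaces each of these one-liners with a properly calibrated component (multi-pool carbon cycle, ocean carbonate chemistry, multiple forcers), but the input/output shape, emissions time series in, concentration and temperature trajectories out, is the same, which is what makes such models useful as emulators.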

  8. 3D CFD computations of transitional flows using DES and a correlation based transition model

    DEFF Research Database (Denmark)

    Sørensen, Niels N.; Bechmann, Andreas; Zahle, Frederik

    2011-01-01

The present article describes the application of the correlation based transition model of Menter et al. in combination with the Detached Eddy Simulation (DES) methodology to two cases with a large degree of flow separation, typically considered difficult to compute. Firstly, the flow is computed over a circular cylinder from Re = 10 to 1 × 10^6, reproducing the cylinder drag crisis. The computations show good quantitative and qualitative agreement with the behaviour seen in experiments. This case shows that the methodology performs smoothly from the laminar cases at low Re to the turbulent cases at high Re.

  9. Simple anthropometric measures correlate with metabolic risk indicators as strongly as magnetic resonance imaging–measured adipose tissue depots in both HIV-infected and control subjects

    Science.gov (United States)

    Scherzer, Rebecca; Shen, Wei; Bacchetti, Peter; Kotler, Donald; Lewis, Cora E; Shlipak, Michael G; Heymsfield, Steven B

    2008-01-01

    Background Studies in persons without HIV infection have compared percentage body fat (%BF) and waist circumference as markers of risk for the complications of excess adiposity, but only limited study has been conducted in HIV-infected subjects. Objective We compared anthropometric and magnetic resonance imaging (MRI)–based adiposity measures as correlates of metabolic complications of adiposity in HIV-infected and control subjects. Design The study was a cross-sectional analysis of 666 HIV-positive and 242 control subjects in the Fat Redistribution and Metabolic Change in HIV Infection (FRAM) study assessing body mass index (BMI), waist (WC) and hip (HC) circumferences, waist-to-hip ratio (WHR), %BF, and MRI-measured regional adipose tissue. Study outcomes were 3 metabolic risk variables [homeostatic model assessment (HOMA), triglycerides, and HDL cholesterol]. Analyses were stratified by sex and HIV status and adjusted for demographic, lifestyle, and HIV-related factors. Results In HIV-infected and control subjects, univariate associations with HOMA, triglycerides, and HDL were strongest for WC, MRI-measured visceral adipose tissue, and WHR; in all cases, differences in correlation between the strongest measures for each outcome were small (r ≤ 0.07). Multivariate adjustment found no significant difference for optimally fitting models between the use of anthropometric and MRI measures, and the magnitudes of differences were small (adjusted R2 ≤ 0.06). For HOMA and HDL, WC appeared to be the best anthropometric correlate of metabolic complications, whereas, for triglycerides, the best was WHR. Conclusion Relations of simple anthropometric measures with HOMA, triglycerides, and HDL cholesterol are approximately as strong as MRI-measured whole-body adipose tissue depots in both HIV-infected and control subjects. PMID:18541572

  10. Thermal margin comparison between DAM and simple model

    Energy Technology Data Exchange (ETDEWEB)

    Cha, Jeonghun; Yook, Daesik [Korea Institute of Nuclear Safety, Daejeon (Korea, Republic of)

    2017-01-15

The nuclear industry in Korea has considered using a detailed analysis model (DAM), which resolves each rod individually, to obtain more thermal margin in the design of dry storage facilities for nuclear spent fuel (NSF). A DAM is proposed, and a thermal analysis to determine cladding integrity is performed under test conditions, alongside a homogenized NSF assembly analysis model (simple model). The results show that, according to US safety criteria, the canister surface temperature must be kept below 500 K in the normal condition and 630 K in the excess condition. A commercial computational fluid dynamics (CFD) code, ANSYS Fluent version 14.5, was used.

  11. Identification of Super Phenix steam generator by a simple polynomial model

    International Nuclear Information System (INIS)

    Rousseau, I.

    1981-01-01

This note suggests an identification method for the steam generator of the Super-Phenix fast neutron power plant using simple polynomial models. This approach is justified by the selection of adaptive control. The identification algorithms presented are applied to multivariable input-output behaviours. The results obtained with the auto-regressive representation and with simple polynomial models are compared, and the effect of perturbations on the output signal is tested, in order to select a good identification algorithm for multivariable adaptive regulation [fr]
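A minimal sketch of the kind of identification described: fitting a first-order polynomial (ARX-type) input-output model by least squares on simulated data. The plant, parameters, and noise level below are assumptions for illustration, not the Super-Phenix model.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical single-input single-output plant with true ARX parameters
a_true, b_true = 0.85, 0.40
N = 400
u = rng.normal(size=N)            # excitation input
y = np.zeros(N)
for k in range(1, N):
    y[k] = a_true * y[k - 1] + b_true * u[k - 1] + 0.01 * rng.normal()

# Regression form y[k] = a*y[k-1] + b*u[k-1]; solve by least squares
Phi = np.column_stack([y[:-1], u[:-1]])
theta, *_ = np.linalg.lstsq(Phi, y[1:], rcond=None)
a_hat, b_hat = theta
print(f"a = {a_hat:.3f} (true {a_true}), b = {b_hat:.3f} (true {b_true})")
```

The same normal-equation machinery extends to multivariable input-output behaviours by stacking additional lagged inputs and outputs as regressors.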

  12. A simple dynamic energy capacity model

    International Nuclear Information System (INIS)

    Gander, James P.

    2012-01-01

    I develop a simple dynamic model showing how total energy capacity is allocated to two different uses and how these uses and their corresponding energy flows are related and behave through time. The control variable of the model determines the allocation. All the variables of the model are in terms of a composite energy equivalent measured in BTU's. A key focus is on the shadow price of energy capacity and its behavior through time. Another key focus is on the behavior of the control variable that determines the allocation of overall energy capacity. The matching or linking of the model's variables to real world U.S. energy data is undertaken. In spite of some limitations of the data, the model and its behavior fit the data fairly well. Some energy policy implications are discussed. - Highlights: ► The model shows how energy capacity is allocated to current output production versus added energy capacity production. ► Two variables in the allocation are the shadow price of capacity and the control variable that determines the allocation. ► The model was linked to U.S. historical energy data and fit the data quite well. ► In particular, the policy control variable was cyclical and consistent with the model. ► Policy implications relevant to the allocation of energy capacity are discussed briefly.

  13. A Note on the Large Sample Properties of Estimators Based on Generalized Linear Models for Correlated Pseudo-observations

    DEFF Research Database (Denmark)

    Jacobsen, Martin; Martinussen, Torben

    2016-01-01

Pseudo-values have proven very useful in censored data analysis in complex settings such as multi-state models. They were originally suggested by Andersen et al. (Biometrika, 90, 2003, 335), who also proposed estimating standard errors using classical generalized estimating equation results. These results were studied more formally in Graw et al. (Lifetime Data Anal., 15, 2009, 241), which derived some key results based on a second-order von Mises expansion. However, results concerning large sample properties of estimates based on regression models for pseudo-values still seem unclear. In this paper, we study these large sample properties in the simple setting of survival probabilities and show that the estimating function can be written as a U-statistic of second order, giving rise to an additional term that does not vanish asymptotically. We further show that previously advocated standard error...
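The pseudo-value construction studied in the paper can be sketched for survival probabilities: jackknife pseudo-observations are formed from leave-one-out Kaplan-Meier estimates, theta_i = n*S(t0) - (n-1)*S_{-i}(t0). The data below are simulated; the estimator follows the standard definition.

```python
import numpy as np

def km_survival(times, events, t):
    """Kaplan-Meier estimate of S(t) for distinct event times."""
    order = np.argsort(times)
    times, events = times[order], events[order]
    n = len(times)
    s = 1.0
    for i in range(n):
        if times[i] > t:
            break
        if events[i]:
            s *= 1.0 - 1.0 / (n - i)   # n - i subjects still at risk
    return s

rng = np.random.default_rng(2)
n = 100
t_event = rng.exponential(5.0, n)          # latent event times
t_cens = rng.exponential(8.0, n)           # independent censoring times
times = np.minimum(t_event, t_cens)
events = t_event <= t_cens                 # True if the event was observed

t0 = 3.0
s_full = km_survival(times, events, t0)

# Jackknife pseudo-observations at t0
pseudo = np.array([
    n * s_full - (n - 1) * km_survival(np.delete(times, i), np.delete(events, i), t0)
    for i in range(n)
])
print(f"KM S(t0)          = {s_full:.3f}")
print(f"mean pseudo-value = {pseudo.mean():.3f}")
```

These pseudo-values can then be used as responses in a generalized linear model; the paper's point is about the second-order U-statistic structure of the resulting estimating function, which this sketch does not address.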

  14. On the realism of the re-engineered simple point charge water model

    International Nuclear Information System (INIS)

    Chialvo, A.A.

    1996-01-01

The realism of the recently proposed high-temperature reparameterization of the simple point charge (SPC) water model [C. D. Berweger, W. F. van Gunsteren, and F. Mueller-Plathe, Chem. Phys. Lett. 232, 429 (1995)] is tested by comparing the simulated microstructure and dielectric properties to the available experimental data. The test indicates that the new parameterization fails dramatically to describe the microstructural and dielectric properties of water at high temperature; it predicts rather strong short-range site–site pair correlations, even stronger than those for water at ambient conditions, and a threefold smaller dielectric constant. Moreover, the resulting microstructure suggests that the high-temperature force-field parameters would predict a twofold higher critical density. The failure of the high-temperature parameterization is analyzed and some suggestions on alternative choices of the target properties for the weak-coupling are discussed. copyright 1996 American Institute of Physics

  15. Renormalization group analysis of a simple hierarchical fermion model

    International Nuclear Information System (INIS)

    Dorlas, T.C.

    1991-01-01

A simple hierarchical fermion model is constructed which gives rise to an exact renormalization transformation in a 2-dimensional parameter space. The behaviour of this transformation is studied. It has two hyperbolic fixed points for which the existence of a global critical line is proven. The asymptotic behaviour of the transformation is used to prove the existence of the thermodynamic limit in a certain domain in parameter space. Also the existence of a continuum limit for these theories is investigated using information about the asymptotic renormalization behaviour. It turns out that the 'trivial' fixed point gives rise to a two-parameter family of continuum limits corresponding to that part of parameter space where the renormalization trajectories originate at this fixed point. Although the model is not very realistic it serves as a simple example of the application of the renormalization group to proving the existence of the thermodynamic limit and the continuum limit of lattice models. Moreover, it illustrates possible complications that can arise in global renormalization group behaviour, and that might also be present in other models where no global analysis of the renormalization transformation has yet been achieved. (orig.)

  16. A Simple Exercise Reveals the Way Students Think about Scientific Modeling

    Science.gov (United States)

    Ruebush, Laura; Sulikowski, Michelle; North, Simon

    2009-01-01

    Scientific modeling is an integral part of contemporary science, yet many students have little understanding of how models are developed, validated, and used to predict and explain phenomena. A simple modeling exercise led to significant gains in understanding key attributes of scientific modeling while revealing some stubborn misconceptions.…

  17. A simple atmospheric boundary layer model applied to large eddy simulations of wind turbine wakes

    DEFF Research Database (Denmark)

    Troldborg, Niels; Sørensen, Jens Nørkær; Mikkelsen, Robert Flemming

    2014-01-01

A simple model for including the influence of the atmospheric boundary layer in connection with large eddy simulations of wind turbine wakes is presented and validated by comparing computed results with measurements as well as with direct numerical simulations. The model is based on an immersed boundary type technique where volume forces are used to introduce wind shear and atmospheric turbulence. The application of the model for wake studies is demonstrated by combining it with the actuator line method, and predictions are compared with field measurements. Copyright © 2013 John Wiley & Sons, Ltd.

  18. Quantifying Correlation Uncertainty Risk in Credit Derivatives Pricing

    Directory of Open Access Journals (Sweden)

    Colin Turfus

    2018-04-01

We propose a simple but practical methodology for the quantification of correlation risk in the context of credit derivatives pricing and credit valuation adjustment (CVA), where the correlation between rates and credit is often uncertain or unmodelled. We take the rates model to be Hull–White (normal) and the credit model to be Black–Karasinski (lognormal). We summarise recent work furnishing highly accurate analytic pricing formulae for credit default swaps (CDS), including those with defaultable Libor flows, extending this to the situation where they are capped and/or floored. We also consider the pricing of contingent CDS with an interest rate swap underlying. We derive therefrom explicit expressions showing how the dependence of model prices on the uncertain parameter(s) can be captured in analytic formulae that are readily amenable to computation without recourse to Monte Carlo or lattice-based methods. In so doing, we crucially take into account the impact of the uncertain (or unmodelled) parameters on model calibration.

  19. Modelling simple helically delivered dose distributions

    International Nuclear Information System (INIS)

    Fenwick, John D; Tome, Wolfgang A; Kissick, Michael W; Mackie, T Rock

    2005-01-01

    In a previous paper, we described quality assurance procedures for Hi-Art helical tomotherapy machines. Here, we develop further some ideas discussed briefly in that paper. Simple helically generated dose distributions are modelled, and relationships between these dose distributions and underlying characteristics of Hi-Art treatment systems are elucidated. In particular, we describe the dependence of dose levels along the central axis of a cylinder aligned coaxially with a Hi-Art machine on fan beam width, couch velocity and helical delivery lengths. The impact on these dose levels of angular variations in gantry speed or output per linear accelerator pulse is also explored

  20. Modeling Impact of Urbanization in US Cities Using Simple Biosphere Model SiB2

    Science.gov (United States)

    Zhang, Ping; Bounoua, Lahouari; Thome, Kurtis; Wolfe, Robert

    2016-01-01

We combine Landsat- and Moderate Resolution Imaging Spectroradiometer (MODIS)-based products, as well as climate drivers from Phase 2 of the North American Land Data Assimilation System (NLDAS-2), in a Simple Biosphere land surface model (SiB2) to assess the impact of urbanization in the continental USA (excluding Alaska and Hawaii). More than 300 cities and their surrounding suburban and rural areas are defined in this study to characterize the impact of urbanization on surface climate, including surface energy, carbon budget, and water balance. These analyses reveal an uneven impact of urbanization across the continent that should inform policy options for improving urban growth, including heat mitigation and energy use, carbon sequestration, and flood prevention.

  1. How decays and final-state interactions affect velocity correlations in heavy-ion collisions

    International Nuclear Information System (INIS)

    Wieand, K.L.; Pratt, S.E.; Balantekin, A.B.

    1992-01-01

We study rapidity correlations by calculating two-particle correlation functions and factorial moments for a simple thermal model of ultrarelativistic heavy-ion collisions. In this model correlations arise from decays of unstable hadrons and the final-state interactions of the measured particles. These correlations are shown to be similar to, but smaller than, correlations due to phase separation. (orig.)

  2. A Simple Model for Nonlinear Confocal Ultrasonic Beams

    Science.gov (United States)

    Zhang, Dong; Zhou, Lin; Si, Li-Sheng; Gong, Xiu-Fen

    2007-01-01

    A confocally and coaxially arranged pair of focused transmitter and receiver represents one of the best geometries for medical ultrasonic imaging and non-invasive detection. We develop a simple theoretical model for describing the nonlinear propagation of a confocal ultrasonic beam in biological tissues. On the basis of the parabolic approximation and quasi-linear approximation, the nonlinear Khokhlov-Zabolotskaya-Kuznetsov (KZK) equation is solved by using the angular spectrum approach. Gaussian superposition technique is applied to simplify the solution, and an analytical solution for the second harmonics in the confocal ultrasonic beam is presented. Measurements are performed to examine the validity of the theoretical model. This model provides a preliminary model for acoustic nonlinear microscopy.

  3. Simple models of the thermal structure of the Venusian ionosphere

    International Nuclear Information System (INIS)

    Whitten, R.C.; Knudsen, W.C.

    1980-01-01

    Analytical and numerical models of plasma temperatures in the Venusian ionosphere are proposed. The magnitudes of plasma thermal parameters are calculated using thermal-structure data obtained by the Pioneer Venus Orbiter. The simple models are found to be in good agreement with the more detailed models of thermal balance. Daytime and nighttime temperature data along with corresponding temperature profiles are provided

  4. Simple, empirical approach to predict neutron capture cross sections from nuclear masses

    Science.gov (United States)

    Couture, A.; Casten, R. F.; Cakirli, R. B.

    2017-12-01

Background: Neutron capture cross sections are essential to understanding the astrophysical s and r processes, to the modeling of nuclear reactor design and performance, and to a wide variety of nuclear forensics applications. Often, cross sections are needed for nuclei where experimental measurements are difficult. Enormous effort, over many decades, has gone into attempting to develop sophisticated statistical reaction models to predict these cross sections. Such work has met with some success but is often unable to reproduce measured cross sections to better than 40%, and has limited predictive power, with predictions from different models rapidly differing by an order of magnitude a few nucleons from the last measurement. Purpose: To develop a new approach to predicting neutron capture cross sections over broad ranges of nuclei that accounts for their values where known and which has reliable predictive power with small uncertainties for many nuclei where they are unknown. Methods: Experimental neutron capture cross sections were compared to empirical mass observables in regions of similar structure. Results: We present an extremely simple method, based solely on empirical mass observables, that correlates neutron capture cross sections in the critical energy range from a few keV to a couple hundred keV. We show that regional cross sections in medium and heavy mass nuclei are compactly correlated with the two-neutron separation energy. These correlations are easily amenable to predicting unknown cross sections, often converting the usual extrapolations to more reliable interpolations. The method almost always reproduces existing data to within 25%, and estimated uncertainties are below about 40% up to 10 nucleons beyond known data. Conclusions: Neutron capture cross sections display a surprisingly strong connection to the two-neutron separation energy, a nuclear structure property. The simple, empirical correlations uncovered provide model-independent predictions of
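A hedged sketch of the regional-correlation idea: within a region of similar structure, fit log10 of the capture cross section linearly against the two-neutron separation energy and interpolate an "unmeasured" nucleus. All numbers below are invented for illustration, not the paper's data.

```python
import numpy as np

# Hypothetical regional systematics: log10 of a 30-keV capture cross
# section assumed linear in the two-neutron separation energy S_2n.
s2n   = np.array([12.1, 12.8, 13.5, 14.2, 14.9, 15.6])      # MeV (made up)
sigma = np.array([45.0, 70.0, 110.0, 175.0, 270.0, 430.0])  # mb  (made up)

# Least-squares fit of log10(sigma) = c0 + c1 * S_2n
c1, c0 = np.polyfit(s2n, np.log10(sigma), 1)

# Interpolate an "unmeasured" nucleus inside the region
s2n_new = 13.9
sigma_pred = 10 ** (c0 + c1 * s2n_new)
print(f"predicted sigma at S_2n = {s2n_new} MeV: ~{sigma_pred:.0f} mb")
```

Because the new point lies inside the fitted region, the extrapolation problem becomes an interpolation, which is the paper's central practical advantage.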

  5. Two-channel totally asymmetric simple exclusion processes

    International Nuclear Information System (INIS)

    Pronina, Ekaterina; Kolomeisky, Anatoly B

    2004-01-01

Totally asymmetric simple exclusion processes, consisting of two coupled parallel lattice chains with particles interacting with hard-core exclusion and moving along the channels and between them, are considered. In the limit of strong coupling between the channels, the particle currents, density profiles and a phase diagram are calculated exactly by mapping the system onto an effective one-channel totally asymmetric exclusion model. For intermediate couplings, a simple approximate theory is developed that describes the particle dynamics in vertical clusters of two corresponding parallel sites exactly and neglects the correlations between different vertical clusters. It is found that, similarly to the case of one-channel totally asymmetric simple exclusion processes, there are three stationary-state phases, although the phase boundaries and stationary properties strongly depend on the inter-channel coupling. Extensive Monte Carlo computer simulations fully support the theoretical predictions.
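For reference, the one-channel TASEP (the effective model onto which the strongly coupled two-channel system maps) is easy to simulate. The sketch below measures the stationary current with open boundaries in the maximal-current regime, where J approaches 1/4 for large systems; the lattice size and update scheme are generic choices, not the paper's.

```python
import numpy as np

rng = np.random.default_rng(3)

L, alpha, beta = 50, 1.0, 1.0   # alpha, beta > 1/2 -> maximal-current phase
tau = np.zeros(L, dtype=int)    # site occupations

exits = 0
sweeps_warm, sweeps_meas = 1000, 10000
for sweep in range(sweeps_warm + sweeps_meas):
    for _ in range(L + 1):                       # one sweep = L+1 random-sequential updates
        i = rng.integers(0, L + 1)
        if i == 0:                               # injection at the left boundary
            if tau[0] == 0 and rng.random() < alpha:
                tau[0] = 1
        elif i == L:                             # extraction at the right boundary
            if tau[L - 1] == 1 and rng.random() < beta:
                tau[L - 1] = 0
                if sweep >= sweeps_warm:
                    exits += 1
        elif tau[i - 1] == 1 and tau[i] == 0:    # bulk hop from site i-1 to site i
            tau[i - 1], tau[i] = 0, 1

current = exits / sweeps_meas                    # exit events per unit time
print(f"stationary current J = {current:.3f} (maximal-current value: 1/4)")
```

The measured current sits close to 1/4, with a small finite-size correction, as expected in the maximal-current phase.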

  6. From complex to simple: interdisciplinary stochastic models

    International Nuclear Information System (INIS)

    Mazilu, D A; Zamora, G; Mazilu, I

    2012-01-01

    We present two simple, one-dimensional, stochastic models that lead to a qualitative understanding of very complex systems from biology, nanoscience and social sciences. The first model explains the complicated dynamics of microtubules, stochastic cellular highways. Using the theory of random walks in one dimension, we find analytical expressions for certain physical quantities, such as the time dependence of the length of the microtubules, and diffusion coefficients. The second one is a stochastic adsorption model with applications in surface deposition, epidemics and voter systems. We introduce the ‘empty interval method’ and show sample calculations for the time-dependent particle density. These models can serve as an introduction to the field of non-equilibrium statistical physics, and can also be used as a pedagogical tool to exemplify standard statistical physics concepts, such as random walks or the kinetic approach of the master equation. (paper)
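The random-walk machinery behind the first model can be illustrated directly: an ensemble of symmetric 1D walkers whose mean-square displacement grows linearly in time, with diffusion coefficient D = 1/2 for unit steps at unit rate. This is a generic sketch, not the paper's microtubule model.

```python
import numpy as np

rng = np.random.default_rng(4)

# Ensemble of symmetric 1D random walkers: <x^2> = 2*D*t with D = 1/2
walkers, steps = 5000, 400
jumps = rng.choice([-1, 1], size=(walkers, steps))
x_final = jumps.sum(axis=1)                 # positions after `steps` steps

msd = np.mean(x_final.astype(float) ** 2)   # mean-square displacement at t = steps
D_est = msd / (2 * steps)
print(f"estimated diffusion coefficient D = {D_est:.3f} (expected 0.5)")
```

The same ensemble approach generalizes to biased walks (growing/shrinking microtubules) or to occupation dynamics treated with the empty-interval method mentioned in the abstract.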

  7. A model of the radiation-induced bystander effect based on an analogy with ferromagnets. Application to modelling tissue response in a uniform field

    Science.gov (United States)

    Vassiliev, O. N.

    2014-12-01

    We propose a model of the radiation-induced bystander effect based on an analogy with magnetic systems. The main benefit of this approach is that it allowed us to apply powerful methods of statistical mechanics. The model exploits the similarity between how spin-spin interactions result in correlations of spin states in ferromagnets, and how signalling from a damaged cell reduces chances of survival of neighbour cells, resulting in correlated cell states. At the root of the model is a classical Hamiltonian, similar to that of an Ising ferromagnet with long-range interactions. The formalism is developed in the framework of the Mean Field Theory. It is applied to modelling tissue response in a uniform radiation field. In this case the results are remarkably simple and at the same time nontrivial. They include cell survival curves, expressions for the tumour control probability and effects of fractionation. The model extends beyond of what is normally considered as bystander effects. It offers an insight into low-dose hypersensitivity and into mechanisms behind threshold doses for deterministic effects.
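The mean-field treatment underlying such an Ising-like model can be sketched with the standard zero-field self-consistency equation m = tanh(beta_J * m): below the mean-field critical point (beta_J = 1) only the uncorrelated solution m = 0 survives, while above it a nonzero, correlated state appears, analogous to bystander signalling correlating cell states. The parameters are generic, not the paper's.

```python
import numpy as np

def mean_field_m(beta_J, iters=200):
    """Solve m = tanh(beta_J * m) by fixed-point iteration (zero field)."""
    m = 0.9  # start from an ordered guess
    for _ in range(iters):
        m = np.tanh(beta_J * m)
    return float(m)

m_low = mean_field_m(0.8)    # below the critical point: decays to 0
m_high = mean_field_m(1.5)   # above it: nonzero ordered solution
print(f"beta_J = 0.8 -> m = {m_low:.3f}")
print(f"beta_J = 1.5 -> m = {m_high:.3f}")
```

The abrupt appearance of the ordered solution is the kind of nonlinearity the authors exploit to model thresholds and low-dose hypersensitivity.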

  8. Correlation between relaxations and plastic deformation, and elastic model of flow in metallic glasses and glass-forming liquids

    International Nuclear Information System (INIS)

    Wang Weihua

    2011-01-01

We study the similarity and correlations between relaxations and plastic deformation in metallic glasses (MGs) and MG-forming liquids. It is shown that the microscopic plastic events, the initiation and formation of shear bands, and the mechanical yield in MGs, where applied stress renders the atomic sites topologically unstable, can be treated as the glass to supercooled liquid state transition induced by external shear stress. On the other hand, the glass transition, the primary and secondary relaxations, plastic deformation and yield can be attributed to flow induced by free volume increase, and the flow can be modeled as activated hopping between the inherent states in the potential energy landscape. We then propose an extended elastic model to describe the flow based on the energy landscape theory: the flow activation energy density is linearly proportional to the instantaneous elastic moduli, and the activation energy density ρE is determined to be the simple expression ρE = (10/11)G + (1/11)K. The model indicates that both shear and bulk moduli are critical parameters accounting for both the homogeneous and inhomogeneous flows in MGs and MG-forming liquids. The elastic model is verified experimentally. We show that the elastic perspective offers a simple scenario for flow in MGs and MG-forming liquids and is suggestive for understanding the glass transition, plastic deformation, and the nature and characteristics of MGs
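The quoted activation-energy expression is straightforward to evaluate. The sketch below plugs illustrative elastic moduli for a Zr-based metallic glass into ρE = (10/11)G + (1/11)K; the moduli are assumed values for illustration, not data from the paper.

```python
# Extended elastic model: flow activation energy density
# rho_E = (10/11)*G + (1/11)*K, dominated by the shear modulus.
G = 34.0   # shear modulus, GPa (illustrative Zr-based MG value)
K = 114.0  # bulk modulus, GPa (illustrative)

rho_E = (10.0 / 11.0) * G + (1.0 / 11.0) * K
print(f"rho_E = {rho_E:.1f} GPa")
```

Even with K roughly three times G, the 10/11 weighting keeps the activation energy density shear-dominated, which is the model's central claim.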

  9. Simple analytical model reveals the functional role of embodied sensorimotor interaction in hexapod gaits

    Science.gov (United States)

    Aoi, Shinya; Nachstedt, Timo; Manoonpong, Poramate; Wörgötter, Florentin; Matsuno, Fumitoshi

    2018-01-01

    Insects have various gaits with specific characteristics and can change their gaits smoothly in accordance with their speed. These gaits emerge from the embodied sensorimotor interactions that occur between the insect’s neural control and body dynamic systems through sensory feedback. Sensory feedback plays a critical role in coordinated movements such as locomotion, particularly in stick insects. While many previously developed insect models can generate different insect gaits, the functional role of embodied sensorimotor interactions in the interlimb coordination of insects remains unclear because of their complexity. In this study, we propose a simple physical model that is amenable to mathematical analysis to explain the functional role of these interactions clearly. We focus on a foot contact sensory feedback called phase resetting, which regulates leg retraction timing based on touchdown information. First, we used a hexapod robot to determine whether the distributed decoupled oscillators used for legs with the sensory feedback generate insect-like gaits through embodied sensorimotor interactions. The robot generated two different gaits and one had similar characteristics to insect gaits. Next, we proposed the simple model as a minimal model that allowed us to analyze and explain the gait mechanism through the embodied sensorimotor interactions. The simple model consists of a rigid body with massless springs acting as legs, where the legs are controlled using oscillator phases with phase resetting, and the governed equations are reduced such that they can be explained using only the oscillator phases with some approximations. This simplicity leads to analytical solutions for the hexapod gaits via perturbation analysis, despite the complexity of the embodied sensorimotor interactions. This is the first study to provide an analytical model for insect gaits under these interaction conditions. Our results clarified how this specific foot contact sensory
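The phase-resetting feedback can be caricatured in a few lines: a leg oscillator with a mismatched intrinsic frequency is reset at every touchdown and thereby entrains to the mechanical period. Here the touchdown times are externally imposed at a fixed period, standing in for the body dynamics; this is a minimal sketch, not the paper's hexapod model.

```python
import numpy as np

# Oscillator with intrinsic period 0.8 s, reset to phase 0 at each touchdown;
# touchdowns arrive with an (assumed) fixed mechanical period of 1.0 s.
omega = 2 * np.pi / 0.8
T_mech = 1.0

phi = 0.3                 # arbitrary initial phase
history = []
for _ in range(20):
    phi = (phi + omega * T_mech) % (2 * np.pi)  # free evolution between touchdowns
    history.append(phi)                         # phase just before touchdown
    phi = 0.0                                   # phase resetting at touchdown

print(f"pre-touchdown phase settles to {history[-1]:.3f} rad")
```

After a single reset the pre-touchdown phase repeats exactly every cycle: the oscillator is locked to the mechanical period despite the frequency mismatch, which is the coordinating role the analysis attributes to foot-contact feedback.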

  10. Externally predictive quantitative modeling of supercooled liquid vapor pressure of polychlorinated-naphthalenes through electron-correlation based quantum-mechanical descriptors.

    Science.gov (United States)

    Vikas; Chayawan

    2014-01-01

For predicting physico-chemical properties related to the environmental fate of molecules, quantitative structure-property relationships (QSPRs) are valuable tools in environmental chemistry. For developing a QSPR, molecular descriptors computed through quantum-mechanical methods are generally employed. The accuracy of a quantum-mechanical method, however, rests on the amount of electron correlation estimated by the method. In this work, single-descriptor QSPRs for the supercooled liquid vapor pressure of chloronaphthalenes and polychlorinated-naphthalenes are developed using molecular descriptors based on the electron-correlation contribution of the quantum-mechanical descriptor. The quantum-mechanical descriptors for which the electron-correlation contribution is analyzed include total energy, mean polarizability, dipole moment, frontier orbital (HOMO/LUMO) energy, and density-functional theory (DFT) based descriptors, namely, absolute electronegativity, chemical hardness, and electrophilicity index. A total of 40 single-descriptor QSPRs were developed using molecular descriptors computed with advanced semi-empirical (SE) methods, namely, RM1 and PM7, and ab initio methods, namely, Hartree-Fock and DFT. The developed QSPRs are validated using state-of-the-art external validation procedures employing an external prediction set. From the comparison of the external predictivity of the models, it is observed that the single-descriptor QSPRs developed using total energy and correlation energy are far more robust and predictive than those developed using commonly employed descriptors such as HOMO/LUMO energy and dipole moment. The work proposes that if the real external predictivity of a QSPR model is to be explored, particularly in terms of intra-molecular interactions, correlation energy serves as a more appropriate descriptor than polarizability. However, for developing QSPRs, computationally inexpensive advanced SE methods such as PM7 can be more reliable than
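A single-descriptor QSPR of the kind described reduces to a one-variable linear regression plus external validation. The sketch below uses synthetic descriptor values (not real PM7/DFT output) and scores a held-out external prediction set with R².

```python
import numpy as np

rng = np.random.default_rng(5)

# Synthetic stand-in for a correlation-energy descriptor and log10 vapor pressure
n = 30
descriptor = rng.uniform(-2.0, 0.0, n)                    # hypothetical descriptor values
log_vp = 1.8 * descriptor - 0.5 + rng.normal(0, 0.15, n)  # property with noise

# Training set and external prediction set (never used in fitting)
train, ext = np.arange(20), np.arange(20, 30)
slope, intercept = np.polyfit(descriptor[train], log_vp[train], 1)

# External predictivity: R^2 on the external set
pred = slope * descriptor[ext] + intercept
ss_res = np.sum((log_vp[ext] - pred) ** 2)
ss_tot = np.sum((log_vp[ext] - log_vp[ext].mean()) ** 2)
r2_ext = 1.0 - ss_res / ss_tot
print(f"external R^2 = {r2_ext:.2f}")
```

Scoring on compounds outside the training set is the essential step: internal fit statistics alone say little about the "real external predictivity" the paper emphasizes.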

  11. A Simple Inquiry-Based Lab for Teaching Osmosis

    Science.gov (United States)

    Taylor, John R.

    2014-01-01

    This simple inquiry-based lab was designed to teach the principle of osmosis while also providing an experience for students to use the skills and practices commonly found in science. Students first design their own experiment using very basic equipment and supplies, which generally results in mixed, but mostly poor, outcomes. Classroom "talk…

  12. Assessment of soil-structure interaction effects based on simple modes

    International Nuclear Information System (INIS)

    Philippacopoulos, A.J.; Miller, C.A.

    1983-01-01

    Soil-structure interaction effects are investigated using a simple mathematical model which employs three degrees-of-freedom. The foundation is approximated by a homogeneous, isotropic, elastic half-space. Harmonic functions and a recorded earthquake are used to represent the free-field input motion. Variations of the response characteristics due to structural and interaction parameters are demonstrated. Response spectra are presented that display the magnitude of the maximum structural response for a range of fixed-base structural frequencies, interaction frequencies and damping. Conclusions are obtained regarding the behavior of the response of the soil-structure system. The findings reported herein can be used for the interpretation of the results of soil-structure interaction analyses of nuclear plant structures that are performed with available computer codes

  13. A simple clot based assay for detection of procoagulant cell-derived microparticles.

    Science.gov (United States)

    Patil, Rucha; Ghosh, Kanjaksha; Shetty, Shrimati

    2016-05-01

Cell-derived microparticles (MPs) are important biomarkers in many facets of medicine. However, the MP detection methods used to date are costly and time consuming. The main aim of this study was to standardize an in-house clot-based screening method for MP detection which would be not only specific and sensitive but also inexpensive. Four different methods of MP assessment were performed and the results correlated. Using the flow cytometry technique as the gold standard, 25 samples with normal phosphatidylserine (PS) expressing MP levels and 25 samples with elevated levels were selected, which was cross-checked by the commercial STA Procoag PPL clotting time (CT) assay. A simple recalcification time and an in-house clot assay were the remaining two tests. The in-house test measures the CT after the addition of calcium chloride to MP-rich plasma, following incubation with Russell viper venom and phospholipid-free plasma. The CT obtained by the in-house assay correlated significantly with the results obtained by flow cytometry (R2 = 0.87, p < 0.01). Though preliminary, the in-house assay seems efficient, inexpensive and promising. It could be utilized routinely for procoagulant MP assessment in various clinical settings.

  14. Comparison of blood biochemics between acute myocardial infarction models with blood stasis and simple acute myocardial infarction models in rats

    International Nuclear Information System (INIS)

    Qu Shaochun; Yu Xiaofeng; Wang Jia; Zhou Jinying; Xie Haolin; Sui Dayun

    2010-01-01

Objective: To construct acute myocardial infarction models with blood stasis in rats and to study the differences in blood biochemistry between acute myocardial infarction models with blood stasis and simple acute myocardial infarction models. Methods: Wistar rats were randomly divided into a control group, an acute blood stasis model group, an acute myocardial infarction sham operation group, an acute myocardial infarction model group, and an acute myocardial infarction with blood stasis model group. Acute myocardial infarction models under the status of acute blood stasis were set up in rats. The serum malondialdehyde (MDA), nitric oxide (NO), free fatty acid (FFA) and tumor necrosis factor-α (TNF-α) levels were detected; the activities of serum superoxide dismutase (SOD) and glutathione peroxidase (GSH-Px) and the levels of prostacyclin (PGI2), thromboxane A2 (TXA2) and endothelin (ET) in plasma were determined. Results: There were no obvious differences in MDA, SOD, GSH-Px and FFA between the acute myocardial infarction models with blood stasis and the simple acute myocardial infarction models (P > 0.05). The changes in PGI2 and NO, and the increases in TXA2, ET and TNF-α, were greater in the acute myocardial infarction models with blood stasis than in the simple acute myocardial infarction models (P < 0.05). Conclusion: The differences in some blood biochemical indexes, such as PGI2 and NO, are significant when the acute myocardial infarction models with blood stasis and the simple acute myocardial infarction models are compared. The results show that it is inadequate to evaluate the pharmacodynamics of a traditional Chinese drug with only simple acute myocardial infarction models. (authors)

  15. Genomic Model with Correlation Between Additive and Dominance Effects.

    Science.gov (United States)

    Xiang, Tao; Christensen, Ole Fredslund; Vitezica, Zulma Gladis; Legarra, Andres

    2018-05-09

    Dominance genetic effects are rarely included in pedigree-based genetic evaluation. With the availability of single nucleotide polymorphism markers and the development of genomic evaluation, estimates of dominance genetic effects have become feasible using genomic best linear unbiased prediction (GBLUP). Usually, studies involving additive and dominance genetic effects ignore possible relationships between them. It has been often suggested that the magnitude of functional additive and dominance effects at the quantitative trait loci are related, but there is no existing GBLUP-like approach accounting for such correlation. Wellmann and Bennewitz showed two ways of considering directional relationships between additive and dominance effects, which they estimated in a Bayesian framework. However, these relationships cannot be fitted at the level of individuals instead of loci in a mixed model and are not compatible with standard animal or plant breeding software. This comes from a fundamental ambiguity in assigning the reference allele at a given locus. We show that, if there has been selection, assigning the most frequent as the reference allele orients the correlation between functional additive and dominance effects. As a consequence, the most frequent reference allele is expected to have a positive value. We also demonstrate that selection creates negative covariance between genotypic additive and dominance genetic values. For parameter estimation, it is possible to use a combined additive and dominance relationship matrix computed from marker genotypes, and to use standard restricted maximum likelihood (REML) algorithms based on an equivalent model. Through a simulation study, we show that such correlations can easily be estimated by mixed model software and accuracy of prediction for genetic values is slightly improved if such correlations are used in GBLUP. However, a model assuming uncorrelated effects and fitting orthogonal breeding values and dominant
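The combined genomic relationship matrices mentioned above can be sketched from a marker matrix: a VanRaden-style additive matrix G and a dominance matrix D built from heterozygosity codes. Genotypes below are simulated, and the scalings follow common GBLUP practice rather than necessarily the paper's exact parameterization.

```python
import numpy as np

rng = np.random.default_rng(6)

# Simulated genotypes coded 0/1/2 for 10 individuals x 200 SNPs
n_ind, n_snp = 10, 200
p = rng.uniform(0.1, 0.9, n_snp)                  # allele frequencies
M = rng.binomial(2, p, size=(n_ind, n_snp))

# Additive (VanRaden-style) genomic relationship matrix
Z = M - 2 * p                                     # centred additive codes
G = Z @ Z.T / np.sum(2 * p * (1 - p))

# Dominance relationship matrix from centred heterozygosity codes
H = (M == 1).astype(float) - 2 * p * (1 - p)
D = H @ H.T / np.sum((2 * p * (1 - p)) ** 2)

print(f"mean diagonal of G: {np.mean(np.diag(G)):.2f}")  # ~1 for unrelated individuals
print(f"mean diagonal of D: {np.mean(np.diag(D)):.2f}")
```

With G and D (and, as the paper discusses, a possible covariance between additive and dominance effects), variance components can be estimated by standard REML software under an equivalent mixed model.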

  16. The attentional drift-diffusion model extends to simple purchasing decisions

    Directory of Open Access Journals (Sweden)

    Ian eKrajbich

    2012-06-01

    Full Text Available How do we make simple purchasing decisions (e.g., whether or not to buy a product at a given price)? Previous work has shown that the Attentional-Drift-Diffusion-Model (aDDM) can provide accurate descriptions of the psychometric data for binary and trinary value-based choices, and of how the choice process is guided by visual attention. However, the computational processes used to make purchasing decisions are unknown. Here we extend the aDDM to the case of purchasing decisions, and test it using an eye-tracking experiment. We find that the model provides a quantitatively accurate description of the relationship between choice, reaction time, and visual fixations using parameters that are very similar to those that best fit the previous data. The only critical difference is that the choice biases induced by the fixations are about half as big in purchasing decisions as in binary choices. This suggests that the brain uses similar computational processes in these varied decision situations.
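
    The aDDM itself is straightforward to simulate: evidence drifts toward the value of the fixated item, with the unattended item's value discounted. A minimal sketch; the drift scaling d, discount theta, noise sigma and fixation-duration process below are illustrative assumptions, not the fitted values from this study:

```python
import random

def simulate_addm(r_left, r_right, d=0.0002, theta=0.3, sigma=0.02,
                  dt=1.0, seed=None):
    """Minimal attentional drift-diffusion simulation.

    Evidence V drifts toward the fixated item's value, the unattended
    item's value is discounted by theta; decision at |V| >= 1.
    Returns (choice, reaction_time_ms). All parameters illustrative."""
    rng = random.Random(seed)
    V, t = 0.0, 0.0
    looking_left = rng.random() < 0.5
    next_switch = t + rng.expovariate(1 / 400.0)   # assumed mean fixation ~400 ms
    while abs(V) < 1.0:
        if looking_left:
            mu = d * (r_left - theta * r_right)
        else:
            mu = d * (theta * r_left - r_right)
        V += mu * dt + rng.gauss(0, sigma)
        t += dt
        if t >= next_switch:                        # switch gaze to the other item
            looking_left = not looking_left
            next_switch = t + rng.expovariate(1 / 400.0)
    return ("left" if V > 0 else "right"), t
```

    With a much higher-valued left item, the simulated chooser picks "left" on nearly every trial, and reaction times lengthen as the two values approach each other.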

  17. Green Growth - Bases of Bioeconomy Models in Correlation with Danubian Projects

    Directory of Open Access Journals (Sweden)

    Iudith Ipate

    2015-08-01

    Full Text Available The objective of our study is to provide defined, measurable indicators at the Romanian regional level that make possible, over time, a transition to a low-carbon economy, and to develop bio-economic models correlated with economic development in these areas. For implementing global bio-economy models it is important to develop innovative research in cooperation with regional entrepreneurs. In this context, projects must be implemented in cooperation with local authorities, promoting common actions to overcome physical and socio-cultural barriers and to better exploit the opportunities offered by the development of the cross-border area for mid- to long-term sustainable growth. In our research we developed a model using the series of the water exploitation index (WEI), GDP per capita, CO2 emissions per capita and population growth (POP).

  18. Test Model for Dynamic Characteristics of a Cantilevered Simple Cylindrical Structure Submerged in a Liquid

    International Nuclear Information System (INIS)

    Park, Chang Gyu; Kim, Tae Sung; Kim, Hoe Woong; Kim, Jong Bum

    2013-01-01

    The coolant free surface level depends on the operating conditions, and thus the fluid added mass caused by the sodium contacting the structure affects the dynamic characteristics of the UIS. In this study, a numerical analysis model was proposed and a feasibility study was performed through structural testing. The dynamic characteristics of a simple cylindrical structure simulating the UIS outer cylinder will be tested. Currently, FE analyses were carried out to confirm the effect of the water chamber structure on the natural frequency of the test model. The submerged condition of a UIS cylinder affects its natural frequency. A test model of a simple cylindrical structure was prepared to conduct a dynamic test, and each structural component of the test equipment may affect the natural frequency. A cup-shaped cylindrical structure was used to develop the numerical analysis method for a structure submerged in water, and it was verified through a structural test. With this numerical analysis model, the effect of the water chamber material for a simple cylindrical structure was studied. The candidate materials for the water chamber were acryl and 316SS with different thicknesses. Both materials showed a higher natural frequency than the reference model. A water chamber made of 316SS with a thick wall gave a result closer to the reference natural frequency than an acryl chamber. The expected natural frequency of the test facility differs by about 4% from the reference value, considering a water chamber with a 1 cm wall thickness. This result will be verified through ongoing structural test activities.

  19. A Comparison of Multidimensional Item Selection Methods in Simple and Complex Test Designs

    Directory of Open Access Journals (Sweden)

    Eren Halil ÖZBERK

    2017-03-01

    Full Text Available In contrast with previous studies, this study employed various test designs (simple and complex) which allow the evaluation of overall ability score estimations across multiple real test conditions. In this study, four factors were manipulated, namely the test design, the number of items per dimension, the correlation between dimensions and the item selection method. Using the generated item and ability parameters, dichotomous item responses were generated by using the M3PL compensatory multidimensional IRT model with specified correlations. MCAT composite ability score accuracy was evaluated using the absolute bias (ABSBIAS), the correlation and the root mean square error (RMSE) between true and estimated ability scores. The results suggest that the multidimensional test structure, the number of items per dimension and the correlation between dimensions had a significant effect on item selection methods for the overall score estimations. For the simple structure test design, it was found that V1 item selection had the lowest absolute bias estimations for both long and short tests while estimating overall scores. As the model became more complex, the KL item selection method performed better than the other two item selection methods.
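
    The three accuracy criteria used above are simple to compute from vectors of true and estimated abilities. A sketch, taking ABSBIAS as the mean absolute deviation (one common reading of the term):

```python
import math

def absbias(true, est):
    """Mean absolute deviation between estimated and true scores."""
    return sum(abs(e - t) for t, e in zip(true, est)) / len(true)

def rmse(true, est):
    """Root mean square error between estimated and true scores."""
    return math.sqrt(sum((e - t) ** 2 for t, e in zip(true, est)) / len(true))

def pearson(x, y):
    """Pearson product-moment correlation coefficient."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return sxy / (sx * sy)
```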

  20. Correlation between 2D and 3D flow curve modelling of DP steels using a microstructure-based RVE approach

    International Nuclear Information System (INIS)

    Ramazani, A.; Mukherjee, K.; Quade, H.; Prahl, U.; Bleck, W.

    2013-01-01

    A microstructure-based approach by means of representative volume elements (RVEs) is employed to evaluate the flow curve of DP steels using virtual tensile tests. Microstructures with different martensite fractions and morphologies are studied in two- and three-dimensional approaches. Micro sections of DP microstructures with various amounts of martensite have been converted to 2D RVEs, while 3D RVEs were constructed statistically with randomly distributed phases. A dislocation-based model is used to describe the flow curve of each ferrite and martensite phase separately as a function of carbon partitioning and microstructural features. Numerical tensile tests of RVE were carried out using the ABAQUS/Standard code to predict the flow behaviour of DP steels. It is observed that 2D plane strain modelling gives an underpredicted flow curve for DP steels, while the 3D modelling gives a quantitatively reasonable description of flow curve in comparison to the experimental data. In this work, a von Mises stress correlation factor σ_3D/σ_2D has been identified to compare the predicted flow curves of these two dimensionalities showing a third order polynomial relation with respect to martensite fraction and a second order polynomial relation with respect to equivalent plastic strain, respectively. The quantification of this polynomial correlation factor is performed based on laboratory-annealed DP600 chemistry with varying martensite content and it is validated for industrially produced DP qualities with various chemistry, strength level and martensite fraction.

  1. TACT: A Set of MSC/PATRAN- and MSC/NASTRAN-based Modal Correlation Tools

    Science.gov (United States)

    Marlowe, Jill M.; Dixon, Genevieve D.

    1998-01-01

    This paper describes the functionality and demonstrates the utility of the Test Analysis Correlation Tools (TACT), a suite of MSC/PATRAN Command Language (PCL) tools which automate the process of correlating finite element models to modal survey test data. The initial release of TACT provides a basic yet complete set of tools for performing correlation totally inside the PATRAN/NASTRAN environment. Features include a step-by-step menu structure, pre-test accelerometer set evaluation and selection, analysis and test result export/import in Universal File Format, calculation of frequency percent difference and cross-orthogonality correlation results using NASTRAN, creation and manipulation of mode pairs, and five different ways of viewing synchronized animations of analysis and test modal results. For the PATRAN-based analyst, TACT eliminates the repetitive, time-consuming and error-prone steps associated with transferring finite element data to a third-party modal correlation package, which allows the analyst to spend more time on the more challenging task of model updating. The usefulness of this software is presented using a case history, the correlation for a NASA Langley Research Center (LaRC) low aspect ratio research wind tunnel model. To demonstrate the improvements that TACT offers the MSC/PATRAN- and MSC/NASTRAN-based structural analysis community, a comparison of the modal correlation process using TACT within PATRAN versus external third-party modal correlation packages is presented.

  2. Setting up and validating a complex model for a simple homogeneous wall

    DEFF Research Database (Denmark)

    Naveros, I.; Bacher, Peder; Ruiz, D. P.

    2014-01-01

    The present paper describes modelling of the thermal dynamics of a real wall tested in dynamic outdoor weather conditions, to identify all the parameters needed for its characterisation. Specifically, the U value, absorptance and effective heat capacity are estimated for the wall using grey-box modelling based on statistical methods and known physical dynamic energy balance equations, related to the heat flux density through a simple and homogeneous wall. Solar irradiance and long-wave radiation balance terms are added in the heat balance equation, besides modelling of the wind speed effect, to achieve a complete description of the relevant phenomena. The experimental test was carried out in a hot-temperature climate for nine months. This study aims at proposing a dynamic method improving the regression averages method for estimation of parameters which describe the thermal behaviour of the wall.

  3. Possible biomechanical origins of the long-range correlations in stride intervals of walking

    Science.gov (United States)

    Gates, Deanna H.; Su, Jimmy L.; Dingwell, Jonathan B.

    2007-07-01

    When humans walk, the time duration of each stride varies from one stride to the next. These temporal fluctuations exhibit long-range correlations. It has been suggested that these correlations stem from higher nervous system centers in the brain that control gait cycle timing. Existing proposed models of this phenomenon have focused on neurophysiological mechanisms that might give rise to these long-range correlations, and generally ignored potential alternative mechanical explanations. We hypothesized that a simple mechanical system could also generate similar long-range correlations in stride times. We modified a very simple passive dynamic model of bipedal walking to incorporate forward propulsion through an impulsive force applied to the trailing leg at each push-off. Push-off forces were varied from step to step by incorporating both “sensory” and “motor” noise terms that were regulated by a simple proportional feedback controller. We generated 400 simulations of walking, with different combinations of sensory noise, motor noise, and feedback gain. The stride time data from each simulation were analyzed using detrended fluctuation analysis to compute a scaling exponent, α. This exponent quantified how each stride interval was correlated with previous and subsequent stride intervals over different time scales. For different variations of the noise terms and feedback gain, we obtained short-range correlations (α < 0.5), uncorrelated fluctuations (α ≈ 0.5), and long-range correlations (0.5 < α < 1.0). Our results indicate that a simple biomechanical model of walking can generate long-range correlations and thus perhaps these correlations are not a complex result of higher level neuronal control, as has been previously suggested.

  4. A simple model of bipartite cooperation for ecological and organizational networks.

    Science.gov (United States)

    Saavedra, Serguei; Reed-Tsochas, Felix; Uzzi, Brian

    2009-01-22

    In theoretical ecology, simple stochastic models that satisfy two basic conditions about the distribution of niche values and feeding ranges have proved successful in reproducing the overall structural properties of real food webs, using species richness and connectance as the only input parameters. Recently, more detailed models have incorporated higher levels of constraint in order to reproduce the actual links observed in real food webs. Here, building on previous stochastic models of consumer-resource interactions between species, we propose a highly parsimonious model that can reproduce the overall bipartite structure of cooperative partner-partner interactions, as exemplified by plant-animal mutualistic networks. Our stochastic model of bipartite cooperation uses simple specialization and interaction rules, and only requires three empirical input parameters. We test the bipartite cooperation model on ten large pollination data sets that have been compiled in the literature, and find that it successfully replicates the degree distribution, nestedness and modularity of the empirical networks. These properties are regarded as key to understanding cooperation in mutualistic networks. We also apply our model to an extensive data set of two classes of company engaged in joint production in the garment industry. Using the same metrics, we find that the network of manufacturer-contractor interactions exhibits similar structural patterns to plant-animal pollination networks. This surprising correspondence between ecological and organizational networks suggests that the simple rules of cooperation that generate bipartite networks may be generic, and could prove relevant in many different domains, ranging from biological systems to human society.

  5. Segment-based Eyring-Wilson viscosity model for polymer solutions

    International Nuclear Information System (INIS)

    Sadeghi, Rahmat

    2005-01-01

    A theory-based model is presented for correlating the viscosity of polymer solutions, based on the segment-based Eyring mixture viscosity model together with the segment-based Wilson model for describing deviations from ideality. The model has been applied to several polymer solutions and the results show that it is reliable both for correlation and prediction of the viscosity of polymer solutions at different molar masses and temperatures of the polymer.
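
    A sketch of the general Eyring mixing rule with a Wilson-type nonideality term, written here in mole-fraction form (the paper's segment-based variant weights by segment fractions instead; the functional form and parameters below are assumptions for illustration):

```python
import math

def eyring_wilson_viscosity(x, eta, V, Lambda):
    """Mixture viscosity from the Eyring mixing rule
        ln(eta_m V_m) = sum_i x_i ln(eta_i V_i) + G_E*/RT,
    with the excess term taken from a Wilson-type activity model.

    x: mole fractions, eta: pure-component viscosities (Pa s),
    V: molar volumes (m^3/mol), Lambda[i][j]: Wilson interaction parameters."""
    Vm = sum(xi * Vi for xi, Vi in zip(x, V))            # ideal mixture molar volume
    ideal = sum(xi * math.log(ei * Vi) for xi, ei, Vi in zip(x, eta, V))
    # Wilson-form excess contribution: -sum_i x_i ln(sum_j x_j Lambda_ij)
    excess = -sum(xi * math.log(sum(xj * Lambda[i][j]
                                    for j, xj in enumerate(x)))
                  for i, xi in enumerate(x))
    return math.exp(ideal + excess) / Vm
```

    With all Wilson parameters equal to one the excess term vanishes and the rule reduces to the ideal Eyring mixture.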

  6. Simple model with damping of the mode-coupling instability

    Energy Technology Data Exchange (ETDEWEB)

    Pestrikov, D V [AN SSSR, Novosibirsk (Russian Federation). Inst. Yadernoj Fiziki

    1996-08-01

    In this paper we use a simple model to study the suppression of the transverse mode-coupling instability. Two possibilities are considered. One is due to the damping of particular synchrobetatron modes, and the other is due to Landau damping caused by the nonlinearity of betatron oscillations. (author)

  7. Improved decryption quality and security of a joint transform correlator-based encryption system

    Science.gov (United States)

    Vilardy, Juan M.; Millán, María S.; Pérez-Cabré, Elisabet

    2013-02-01

    Some image encryption systems based on modified double random phase encoding and joint transform correlator architecture produce low quality decrypted images and are vulnerable to a variety of attacks. In this work, we analyse the algorithm of some reported methods that optically implement the double random phase encryption in a joint transform correlator. We show that it is possible to significantly improve the quality of the decrypted image by introducing a simple nonlinear operation in the encrypted function that contains the joint power spectrum. This nonlinearity also makes the system more resistant to chosen-plaintext attacks. We additionally explore the system resistance against this type of attack when a variety of probability density functions are used to generate the two random phase masks of the encryption-decryption process. Numerical results are presented and discussed.

  8. Improved decryption quality and security of a joint transform correlator-based encryption system

    International Nuclear Information System (INIS)

    Vilardy, Juan M; Millán, María S; Pérez-Cabré, Elisabet

    2013-01-01

    Some image encryption systems based on modified double random phase encoding and joint transform correlator architecture produce low quality decrypted images and are vulnerable to a variety of attacks. In this work, we analyse the algorithm of some reported methods that optically implement the double random phase encryption in a joint transform correlator. We show that it is possible to significantly improve the quality of the decrypted image by introducing a simple nonlinear operation in the encrypted function that contains the joint power spectrum. This nonlinearity also makes the system more resistant to chosen-plaintext attacks. We additionally explore the system resistance against this type of attack when a variety of probability density functions are used to generate the two random phase masks of the encryption–decryption process. Numerical results are presented and discussed. (paper)

  9. Four-body correlation embedded in antisymmetrized geminal power wave function.

    Science.gov (United States)

    Kawasaki, Airi; Sugino, Osamu

    2016-12-28

    We extend Coleman's antisymmetrized geminal power (AGP) to develop a wave function theory that can incorporate up to four-body correlation in a region of strong correlation. To facilitate the variational determination of the wave function, the total energy is rewritten in terms of the traces of geminals. This novel trace formula is applied to a simple model system consisting of a one-dimensional Hubbard ring with a site of strong correlation. Our scheme significantly improves the result obtained by the AGP-configuration interaction scheme of Uemura et al. and also achieves more efficient compression of the degrees of freedom of the wave function. We regard the result as a step toward a first-principles wave function theory for a strongly correlated point defect or adsorbate embedded in an AGP-based mean-field medium.

  10. Use of Paired Simple and Complex Models to Reduce Predictive Bias and Quantify Uncertainty

    DEFF Research Database (Denmark)

    Doherty, John; Christensen, Steen

    2011-01-01

    Complex models can include representations of system and process details born of the necessity for model outputs to replicate observations of historical system behavior. In contrast, the rapid run times and general numerical reliability of simple models often promulgate good calibration and ready implementation of sophisticated methods of calibration-constrained uncertainty analysis. Unfortunately, however, many system and process details on which uncertainty may depend are, by design, omitted from simple models. This can lead to underestimation of the uncertainty associated with many predictions of management interest. The present paper proposes a methodology that attempts to overcome the problems associated with complex models on the one hand and simple models on the other hand, while allowing access to the benefits each of them offers. It provides a theoretical analysis of the simplification process from a subspace point of view, this yielding insights into the costs of simplification.

  11. Structure of simple liquids; Structure des liquides simples

    Energy Technology Data Exchange (ETDEWEB)

    Blain, J F [Commissariat a l' Energie Atomique, Fontenay-aux-Roses (France). Centre d' Etudes Nucleaires

    1969-07-01

    The results obtained by applying the two principal methods of studying the structure of liquids, X-ray scattering and neutron scattering, to argon and sodium are presented. The principal models employed for reconstituting the structure of simple liquids are then described: mathematical models, lattice models and their derived models, and experimental models. (author)

  12. A simple model for behaviour change in epidemics

    Directory of Open Access Journals (Sweden)

    Brauer Fred

    2011-02-01

    Full Text Available Abstract Background People change their behaviour during an epidemic. Infectious members of a population may reduce the number of contacts they make with other people because of the physical effects of their illness and possibly because of public health announcements asking them to do so in order to decrease the number of new infections, while susceptible members of the population may reduce the number of contacts they make in order to try to avoid becoming infected. Methods We consider a simple epidemic model in which susceptible and infectious members respond to a disease outbreak by reducing contacts by different fractions and analyze the effect of such contact reductions on the size of the epidemic. We assume constant fractional reductions, without attempting to consider the way in which susceptible members might respond to information about the epidemic. Results We are able to derive upper and lower bounds for the final size of an epidemic, both for simple and staged progression models. Conclusions The responses of uninfected and infected individuals in a disease outbreak are different, and this difference affects estimates of epidemic size.
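
    The effect of constant fractional contact reductions on epidemic size can be illustrated with a deterministic SIR model in which the two reductions scale the transmission rate multiplicatively (an assumption for illustration; the paper derives analytical upper and lower bounds rather than simulating):

```python
def final_size(beta=0.5, gamma=0.25, red_s=0.0, red_i=0.0,
               s0=0.999, i0=0.001, dt=0.01, t_max=1000.0):
    """Deterministic SIR epidemic in which susceptibles reduce their
    contacts by the fraction red_s and infectives by red_i, modelled as
    a multiplicative reduction of the transmission rate.
    Returns the epidemic final size 1 - s(infinity)."""
    b = beta * (1.0 - red_s) * (1.0 - red_i)   # effective transmission rate
    s, i, t = s0, i0, 0.0
    while i > 1e-9 and t < t_max:              # forward-Euler integration
        new_inf = b * s * i * dt
        s -= new_inf
        i += new_inf - gamma * i * dt
        t += dt
    return 1.0 - s
```

    Any reduction in contacts by either group lowers the effective reproduction number and hence the final size.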

  13. The impact of electrostatic correlations on Dielectrophoresis of Non-conducting Particles

    Science.gov (United States)

    Alidoosti, Elaheh; Zhao, Hui

    2017-11-01

    The dipole moment of a charged, dielectric, spherical particle under the influence of a uniform alternating electric field is computed theoretically and numerically by solving the modified continuum Poisson-Nernst-Planck (PNP) equations, which account for the ion-ion electrostatic correlations that become important in concentrated electrolytes (Phys. Rev. Lett. 106, 2011). The dependence on the frequency, zeta potential, electrostatic correlation length, and double layer thickness is thoroughly investigated. In the limit of thin double layers, we carry out an asymptotic analysis to develop simple models which are in good agreement with the modified PNP model. Our results suggest that the electrostatic correlations have a complicated impact on the dipole moment. As the electrostatic correlation length increases, the dipole moment initially decreases, reaches a minimum, and then increases, since the surface conduction first decreases and then increases due to the ion-ion correlations. The modified PNP model can improve the theoretical predictions, particularly at low frequencies where the simple model cannot qualitatively predict the dipole moment. This work was supported, in part, by NIH R15GM116039.

  14. Tuning SISO offset-free Model Predictive Control based on ARX models

    DEFF Research Database (Denmark)

    Huusom, Jakob Kjøbsted; Poulsen, Niels Kjølstad; Jørgensen, Sten Bay

    2012-01-01

    In this paper, we present a tuning methodology for a simple offset-free SISO Model Predictive Controller (MPC) based on autoregressive models with exogenous inputs (ARX models). ARX models simplify system identification as they can be identified from data using convex optimization. Furthermore, the proposed controller is simple to tune as it has only one free tuning parameter. These two features are advantageous in predictive process control as they simplify industrial commissioning of MPC. Disturbance rejection and offset-free control is important in industrial process control. To achieve offset-free control in the face of unknown disturbances or model-plant mismatch, integrators must be introduced in either the estimator or the regulator. Traditionally, offset-free control is achieved using Brownian disturbance models in the estimator. In this paper we achieve offset-free control by extending the noise model.

  15. Simple statistical channel model for weak temperature-induced turbulence in underwater wireless optical communication systems

    KAUST Repository

    Oubei, Hassan M.

    2017-06-16

    In this Letter, we use laser beam intensity fluctuation measurements to model and describe the statistical properties of weak temperature-induced turbulence in underwater wireless optical communication (UWOC) channels. UWOC channels with temperature gradients are modeled by the generalized gamma distribution (GGD) with an excellent goodness of fit to the measured data under all channel conditions. Meanwhile, thermally uniform channels are perfectly described by the simple gamma distribution which is a special case of GGD. To the best of our knowledge, this is the first model that comprehensively describes both thermally uniform and gradient-based UWOC channels.

  16. Prediction of speech intelligibility based on a correlation metric in the envelope power spectrum domain

    DEFF Research Database (Denmark)

    Relano-Iborra, Helia; May, Tobias; Zaar, Johannes

    A powerful tool to investigate speech perception is the use of speech intelligibility prediction models. Recently, a model was presented, termed the correlation-based speech-based envelope power spectrum model (sEPSMcorr) [1], based on the auditory processing of the multi-resolution speech-based envelope power spectrum model.

  17. Correlation between acoustical and structural properties of glasses: Extension of Abd El-Moneim model for bioactive silica based glasses

    Energy Technology Data Exchange (ETDEWEB)

    Abd El-Moneim, Amin, E-mail: aminabdelmoneim@hotmail.com

    2016-04-15

    Correlation between room temperature ultrasonic attenuation coefficient and the most significant structural parameters has been studied in the bioactive silica based glasses, for the first time. The correlation has been carried out in the quaternary SiO{sub 2}–Na{sub 2}O–CaO–P{sub 2}O{sub 5} glass system using the two semi-empirical formulas, which have been presented recently by the author. Changes in the elastic properties, related to the substitution of SiO{sub 2} by alkali Na{sub 2}O and alkaline earth CaO oxides, have also been deduced by evaluating the mean atomic volume, packing density, fractal bond connectivity and density of the analogous crystalline structure. Furthermore, values of the theoretical elastic moduli have been calculated on the basis of the Makishima-Mackenzie theory and compared with the corresponding observed values. Results show that the correlation between ultrasonic attenuation coefficient and the oxygen density, average atomic ring size, first-order stretching force constant and experimental bulk modulus was achieved at 5 MHz frequency. Values of the theoretically calculated shear modulus are in excellent correlation (C.R. > 95%) with the corresponding experimental ones. The divergence between the theoretical and experimental values of bulk modulus has been discussed. - Highlights: • Abd El-Moneim model was extended for bioactive glasses. • Ultrasonic attenuation was correlated with structural parameters. • Correlation was carried out in Si–Na–Ca–P glasses. • The model is valid for all investigated glass samples. • Agreement between theoretical and experimental elastic moduli was studied.

  18. A Simple Model to Identify Risk of Sarcopenia and Physical Disability in HIV-Infected Patients.

    Science.gov (United States)

    Farinatti, Paulo; Paes, Lorena; Harris, Elizabeth A; Lopes, Gabriella O; Borges, Juliana P

    2017-09-01

    Farinatti, P, Paes, L, Harris, EA, Lopes, GO, and Borges, JP. A simple model to identify risk of sarcopenia and physical disability in HIV-infected patients. J Strength Cond Res 31(9): 2542-2551, 2017. Early detection of sarcopenia might help prevent muscle loss and disability in HIV-infected patients. This study proposed a model for estimating appendicular skeletal muscle mass (ASM) to calculate indices to identify "sarcopenia" (SA) and "risk for disability due to sarcopenia" (RSA) in patients with HIV. An equation to estimate ASM was developed in 56 patients (47.2 ± 6.9 years), with a cross-validation sample of 24 patients (48.1 ± 6.6 years). The model validity was determined by calculating, in both samples: (a) concordance between actual vs. estimated ASM; (b) correlations between actual/estimated ASM vs. peak torque (PT) and total work (TW) during isokinetic knee extension/flexion; (c) agreement of patients classified with SA and RSA. The predictive equation was ASM (kg) = 7.77 (sex; F = 0/M = 1) + 0.26 (arm circumference; cm) + 0.38 (thigh circumference; cm) + 0.03 (body mass index; kg·m⁻²) - 8.94 (R = 0.74; R_adj = 0.72; SEE = 3.13 kg). Agreement between actual vs. estimated ASM was confirmed in the validation (t = 0.081/p = 0.94; R = 0.86/p < 0.0001) and cross-validation (t = 0.12/p = 0.92; R = 0.87/p < 0.0001) samples. Regression characteristics in the cross-validation sample (R_adj = 0.80; SEE = 3.65) and PRESS (R_PRESS = 0.69; SEE_PRESS = 3.35) were compatible with the original model. Percent agreements for the classification of SA and RSA from indices calculated using actual and estimated ASM were 87.5% and 77.2% (gamma correlations 0.72-1.0; p < 0.04) in the validation sample, and 95.8% and 75.0% (gamma correlations 0.98-0.97; p < 0.001) in the cross-validation sample, respectively. Correlations between actual/estimated ASM vs. PT (range 0.50-0.73, p ≤ 0.05) and TW (range 0.59-0.74, p ≤ 0.05) were similar in both samples. In conclusion, our model correctly estimated ASM
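
    The reported predictive equation can be applied directly:

```python
def estimate_asm(sex, arm_circ_cm, thigh_circ_cm, bmi):
    """Appendicular skeletal muscle mass (kg) from the regression model
    reported above (sex: 0 = female, 1 = male; BMI in kg/m^2)."""
    return (7.77 * sex + 0.26 * arm_circ_cm + 0.38 * thigh_circ_cm
            + 0.03 * bmi - 8.94)
```

    For example, a male patient with a 30 cm arm circumference, 55 cm thigh circumference and BMI of 25 kg/m² has an estimated ASM of about 28.3 kg.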

  19. The importance of parameter variances, correlation lengths, and cross-correlations in reactive transport models: key considerations for assessing the need for microscale information

    Energy Technology Data Exchange (ETDEWEB)

    Reimus, Paul W [Los Alamos National Laboratory

    2010-12-08

    A process-oriented modeling approach is implemented to examine the importance of parameter variances, correlation lengths, and especially cross-correlations in contaminant transport predictions over large scales. It is shown that the most important consideration is the correlation between flow rates and retardation processes (e.g., sorption, matrix diffusion) in the system. If flow rates are negatively correlated with retardation factors in systems containing multiple flow pathways, then characterizing these negative correlation(s) may have more impact on reactive transport modeling than microscale information. Such negative correlations are expected in porous-media systems where permeability is negatively correlated with clay content and rock alteration (which are usually associated with increased sorption). Likewise, negative correlations are expected in fractured rocks where permeability is positively correlated with fracture apertures, which in turn are negatively correlated with sorption and matrix diffusion. Parameter variances and correlation lengths are also shown to have important effects on reactive transport predictions, but they are less important than parameter cross-correlations. Microscale information pertaining to contaminant transport has become more readily available as characterization methods and spectroscopic instrumentation have achieved lower detection limits, greater resolution, and better precision. Obtaining detailed mechanistic insights into contaminant-rock-water interactions is becoming a routine practice in characterizing reactive transport processes in groundwater systems (almost necessary for high-profile publications). Unfortunately, a quantitative link between microscale information and flow and transport parameter distributions or cross-correlations has not yet been established. One reason for this is that quantitative microscale information is difficult to obtain in complex, heterogeneous systems. So simple systems that lack the

  20. Research of diagnosis sensors fault based on correlation analysis of the bridge structural health monitoring system

    Science.gov (United States)

    Hu, Shunren; Chen, Weimin; Liu, Lin; Gao, Xiaoxia

    2010-03-01

A bridge structural health monitoring system is a typical multi-sensor measurement system, since multiple structural parameters are collected at monitoring sites on river-spanning bridges. The monitored bridge structure is a single entity: when subjected to external action, different structural parameters respond differently, so the data acquired by the individual sensors are inherently correlated. The complexity of these correlations, however, is determined by the complexity of the bridge structure. Traditionally, correlation among monitoring sites has been analyzed mainly from their physical locations; unfortunately, this approach is too simple to describe the correlation in detail. This paper analyzes the correlation among bridge monitoring sites using structural monitoring data, defines the correlation of bridge monitoring sites and describes its several forms, and then integrates correlation theory from data mining and signal systems to establish a model that describes the correlation among monitoring sites quantitatively. Finally, the Chongqing Mashangxi Yangtze River bridge health monitoring system is taken as the research object for diagnosing sensor faults, and simulation results verify the effectiveness of the designed method and the theoretical discussion.

  1. Human mammary epithelial cells exhibit a bimodal correlated random walk pattern.

    Science.gov (United States)

    Potdar, Alka A; Jeon, Junhwan; Weaver, Alissa M; Quaranta, Vito; Cummings, Peter T

    2010-03-10

Organisms, at scales ranging from unicellular to mammals, have been known to exhibit foraging behavior described by random walks whose segments conform to Lévy or exponential distributions. For the first time, we present evidence that single cells (mammary epithelial cells) that exist in multi-cellular organisms (humans) follow a bimodal correlated random walk (BCRW). Cellular tracks of MCF-10A pBabe, neuN and neuT random migration on 2-D plastic substrates, analyzed using bimodal analysis, were found to reveal the BCRW pattern. We find two types of exponentially distributed correlated flights (corresponding to what we refer to as the directional and re-orientation phases), each having its own correlation between move step-lengths within flights. The exponential distribution of flight lengths was confirmed using different analysis methods (logarithmic binning with normalization, survival frequency plots and maximum likelihood estimation). Because of the presence of a non-uniform turn angle distribution of move step-lengths within a flight and two different types of flights, we propose that the epithelial random walk is a BCRW comprising two alternating modes with varying degrees of correlation, rather than a simple persistent random walk. A BCRW model rather than a simple persistent random walk correctly matches the super-diffusivity in the cell migration paths, as indicated by simulations based on the BCRW model.
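To make the two-mode walk concrete, here is a minimal simulation sketch. All parameter values (flight-length means, turn-angle spreads) are illustrative placeholders, not the values fitted to the MCF-10A tracks, and the wrapped-Gaussian turn angles are a simplifying assumption.

```python
import math
import random

def simulate_bcrw(n_flights=200, mean_len_dir=10.0, mean_len_reo=3.0,
                  kappa_dir=0.2, kappa_reo=1.0, seed=0):
    """Simulate a bimodal correlated random walk (BCRW).

    Flight lengths (step counts) are exponentially distributed; within a
    flight, successive turn angles are Gaussian with spread kappa, so a
    small kappa gives persistent (directional) motion and a large kappa
    gives the re-orientation mode.  The walk alternates between the two
    modes, each with its own mean flight length and angular spread.
    """
    rng = random.Random(seed)
    x, y, heading = 0.0, 0.0, 0.0
    path = [(x, y)]
    directional = True
    for _ in range(n_flights):
        mean_len = mean_len_dir if directional else mean_len_reo
        kappa = kappa_dir if directional else kappa_reo
        steps = max(1, int(rng.expovariate(1.0 / mean_len)))
        for _ in range(steps):
            heading += rng.gauss(0.0, kappa)   # correlated turning
            x += math.cos(heading)             # unit move step-length
            y += math.sin(heading)
            path.append((x, y))
        directional = not directional          # alternate the two modes
    return path
```

Mean-squared displacement computed over many such paths would exhibit the super-diffusive scaling the abstract describes, whereas a single-mode persistent walk would not reproduce both phases.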

  2. Human mammary epithelial cells exhibit a bimodal correlated random walk pattern.

    Directory of Open Access Journals (Sweden)

    Alka A Potdar

    2010-03-01

Full Text Available Organisms, at scales ranging from unicellular to mammals, have been known to exhibit foraging behavior described by random walks whose segments conform to Lévy or exponential distributions. For the first time, we present evidence that single cells (mammary epithelial cells) that exist in multi-cellular organisms (humans) follow a bimodal correlated random walk (BCRW). Cellular tracks of MCF-10A pBabe, neuN and neuT random migration on 2-D plastic substrates, analyzed using bimodal analysis, were found to reveal the BCRW pattern. We find two types of exponentially distributed correlated flights (corresponding to what we refer to as the directional and re-orientation phases), each having its own correlation between move step-lengths within flights. The exponential distribution of flight lengths was confirmed using different analysis methods (logarithmic binning with normalization, survival frequency plots and maximum likelihood estimation). Because of the presence of a non-uniform turn angle distribution of move step-lengths within a flight and two different types of flights, we propose that the epithelial random walk is a BCRW comprising two alternating modes with varying degrees of correlation, rather than a simple persistent random walk. A BCRW model rather than a simple persistent random walk correctly matches the super-diffusivity in the cell migration paths, as indicated by simulations based on the BCRW model.

  3. A validation of a simple model for the calculation of the ionization energies in X-ray laser-cluster interactions

    Energy Technology Data Exchange (ETDEWEB)

    White, Jeff; Ackad, Edward [Department of Physics, Southern Illinois University Edwardsville, Edwardsville, Illinois 62026 (United States)

    2015-02-15

    The outer-ionization of an electron from a cluster is an unambiguous quantity, while the inner-ionization threshold is not, resulting in different microscopic quantum-classical hybrid models used in laser-cluster interactions. A simple local ionization threshold model for the change in the ionization energy is proposed and examined, for atoms and ions, at distances in between the initial configuration of the cluster to well into the cluster's disintegration. This model is compared with a full Hartree-Fock energy calculation which accounts for the electron correlation effects using the coupled cluster method with single and double excitations with perturbative triples (CCSD(T)). Good agreement is found between the two lending a strong theoretical support to works which rely on such models for the final and transient properties of the laser-cluster interaction.

  4. How to make sense of the jet correlations results at RHIC?

    International Nuclear Information System (INIS)

    Jia, Jiangyong

    2009-01-01

We review the di-hadron correlation results from RHIC. A consistent physical picture is constructed based on the correlation landscape in p_T, Δφ, Δη and particle species. We show that the data are consistent with competition between fragmentation of surviving jets and the response of the medium to quenched jets. At intermediate p_T, where the medium response is important, a large fraction of trigger hadrons do not come from jet fragmentation. We argue that these hadrons can strongly influence the interpretation of the low-p_T correlation data. We demonstrate this point through a simple geometrical jet absorption model simulation. The model shows that the correlation between medium-response hadrons dominates the pair yield and mimics the double-hump structure of the away-side Δφ distribution at low p_T. This correlation is also shown to lead to complications in interpreting the results on reaction-plane dependence and three-particle correlations. Finally, we briefly discuss several related experimental issues which are important for proper interpretation of the experimental data. (orig.)

  5. A Simple, Realistic Stochastic Model of Gastric Emptying.

    Directory of Open Access Journals (Sweden)

    Jiraphat Yokrattanasak

    Full Text Available Several models of Gastric Emptying (GE have been employed in the past to represent the rate of delivery of stomach contents to the duodenum and jejunum. These models have all used a deterministic form (algebraic equations or ordinary differential equations, considering GE as a continuous, smooth process in time. However, GE is known to occur as a sequence of spurts, irregular both in size and in timing. Hence, we formulate a simple stochastic process model, able to represent the irregular decrements of gastric contents after a meal. The model is calibrated on existing literature data and provides consistent predictions of the observed variability in the emptying trajectories. This approach may be useful in metabolic modeling, since it describes well and explains the apparently heterogeneous GE experimental results in situations where common gastric mechanics across subjects would be expected.
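The paper's point that gastric emptying proceeds as irregular spurts can be illustrated with a minimal stochastic sketch: a compound-Poisson-style decrement process in which spurts arrive at exponentially distributed intervals and each removes a random fraction of the remaining contents. All parameters here are illustrative placeholders, not the paper's calibrated values.

```python
import random

def simulate_gastric_emptying(content=100.0, rate=0.5, mean_spurt_frac=0.05,
                              t_end=60.0, seed=0):
    """Toy stochastic gastric-emptying trajectory.

    Spurts occur at exponentially distributed waiting times (Poisson
    arrivals at `rate` per unit time); each spurt removes an
    exponentially distributed fraction of the remaining content,
    giving decrements that are irregular both in size and in timing.
    Returns a list of (time, remaining_content) pairs.
    """
    rng = random.Random(seed)
    t = 0.0
    traj = [(t, content)]
    while t < t_end and content > 1e-6:
        t += rng.expovariate(rate)              # waiting time to next spurt
        if t > t_end:
            break
        frac = min(1.0, rng.expovariate(1.0 / mean_spurt_frac))
        content *= (1.0 - frac)                 # irregular decrement
        traj.append((t, content))
    return traj
```

Averaging many such trajectories recovers a smooth mean emptying curve, while individual runs retain the stepwise, heterogeneous character the abstract emphasizes.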

  6. Two point function for a simple general relativistic quantum model

    OpenAIRE

    Colosi, Daniele

    2007-01-01

    We study the quantum theory of a simple general relativistic quantum model of two coupled harmonic oscillators and compute the two-point function following a proposal first introduced in the context of loop quantum gravity.

  7. Conditional Correlation Models of Autoregressive Conditional Heteroskedasticity with Nonstationary GARCH Equations

    DEFF Research Database (Denmark)

    Amado, Cristina; Teräsvirta, Timo

In this paper we investigate the effects of carefully modelling the long-run dynamics of the volatilities of stock market returns on the conditional correlation structure. To this end we allow the individual unconditional variances in Conditional Correlation GARCH models to change smoothly over time by incorporating a nonstationary component in the variance equations. The modelling technique to determine the parametric structure of this time-varying component is based on a sequence of specification Lagrange multiplier-type tests derived in Amado and Teräsvirta (2011). The variance equations combine the long-run and the short-run dynamic behaviour of the volatilities. The structure of the conditional correlation matrix is assumed to be either time independent or to vary over time. We apply our model to pairs of seven daily stock returns belonging to the S&P 500 composite index and traded at the New York Stock Exchange.

  8. THE DEVELOPING OF SIMPLE PROPS BASED ON GUIDED INQUIRY TO IMPROVE STUDENTS’ CRITICAL THINKING SKILL’S

    Directory of Open Access Journals (Sweden)

    Astuti Wijayanti

    2018-04-01

Full Text Available This research aims to (1) develop a simple science props device based on guided inquiry to improve students' critical thinking skills, (2) assess the appropriateness of the physics props based on guided inquiry, and (3) determine the improvement in critical thinking skills achieved by using the props. This is research and development with the 4D model. Based on the whole assessment, the average scores are 99.33% from the media expert, 88.20% from the material expert, 83.02% from peer reviewers, and 88.41% from science teachers on the simple props, so the device can be rated as Good (B). There is an improvement in the students' results between pretest and posttest: the average pretest score is 5.29 and the posttest score is 7.9. The students' critical thinking skill is at the medium level, since the gain criterion is 0.54.

  9. Exploring the role of internal friction in the dynamics of unfolded proteins using simple polymer models

    Science.gov (United States)

    Cheng, Ryan R.; Hawk, Alexander T.; Makarov, Dmitrii E.

    2013-02-01

    Recent experiments showed that the reconfiguration dynamics of unfolded proteins are often adequately described by simple polymer models. In particular, the Rouse model with internal friction (RIF) captures internal friction effects as observed in single-molecule fluorescence correlation spectroscopy (FCS) studies of a number of proteins. Here we use RIF, and its non-free draining analog, Zimm model with internal friction, to explore the effect of internal friction on the rate with which intramolecular contacts can be formed within the unfolded chain. Unlike the reconfiguration times inferred from FCS experiments, which depend linearly on the solvent viscosity, the first passage times to form intramolecular contacts are shown to display a more complex viscosity dependence. We further describe scaling relationships obeyed by contact formation times in the limits of high and low internal friction. Our findings provide experimentally testable predictions that can serve as a framework for the analysis of future studies of contact formation in proteins.

  10. Simple nuclear norm based algorithms for imputing missing data and forecasting in time series

    OpenAIRE

    Butcher, Holly Louise; Gillard, Jonathan William

    2017-01-01

    There has been much recent progress on the use of the nuclear norm for the so-called matrix completion problem (the problem of imputing missing values of a matrix). In this paper we investigate the use of the nuclear norm for modelling time series, with particular attention to imputing missing data and forecasting. We introduce a simple alternating projections type algorithm based on the nuclear norm for these tasks, and consider a number of practical examples.
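The alternating-projections idea can be shown in a stripped-down form. The sketch below is illustrative, not the authors' algorithm: it alternates between a low-rank projection (here restricted to rank 1, obtained by power iteration) and resetting the observed entries, whereas the paper's method uses the nuclear norm, which handles unknown rank.

```python
def rank1_complete(M, mask, iters=200):
    """Complete a rank-1 matrix with missing entries by alternating
    projections: project onto rank-1 matrices, then restore the
    observed entries.  `mask[i][j]` is True where M[i][j] is observed.
    A rank-1 stand-in for the nuclear-norm projection step."""
    m, n = len(M), len(M[0])
    X = [[M[i][j] if mask[i][j] else 0.0 for j in range(n)] for i in range(m)]
    for _ in range(iters):
        # power iteration for the leading singular vectors of X
        v = [1.0] * n
        for _ in range(30):
            u = [sum(X[i][j] * v[j] for j in range(n)) for i in range(m)]
            nu = sum(t * t for t in u) ** 0.5 or 1.0
            u = [t / nu for t in u]
            v = [sum(X[i][j] * u[i] for i in range(m)) for j in range(n)]
            nv = sum(t * t for t in v) ** 0.5 or 1.0
            v = [t / nv for t in v]
        sigma = sum(u[i] * X[i][j] * v[j] for i in range(m) for j in range(n))
        # rank-1 projection, then re-impose the known entries
        X = [[sigma * u[i] * v[j] for j in range(n)] for i in range(m)]
        for i in range(m):
            for j in range(n):
                if mask[i][j]:
                    X[i][j] = M[i][j]
    return X
```

For time-series use, the paper arranges the series into a (Hankel-type) matrix so that forecasting becomes imputation of the trailing entries; the projection loop above is the core mechanism.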

  11. Numerical Simulation of the Heston Model under Stochastic Correlation

    Directory of Open Access Journals (Sweden)

    Long Teng

    2017-12-01

Full Text Available Stochastic correlation models have become increasingly important in financial markets. In order to price vanilla options in stochastic volatility and correlation models, in this work we study the extension of the Heston model obtained by imposing stochastic correlations driven by a stochastic differential equation. We discuss efficient algorithms for the extended Heston model incorporating stochastic correlations. Our numerical experiments show that the proposed algorithms can efficiently provide highly accurate results for the extended Heston model with stochastic correlations. By investigating the effect of stochastic correlations on the implied volatility, we find that the performance of the Heston model can be improved by including stochastic correlations.

  12. Simple model of surface roughness for binary collision sputtering simulations

    Energy Technology Data Exchange (ETDEWEB)

    Lindsey, Sloan J. [Institute of Solid-State Electronics, TU Wien, Floragasse 7, A-1040 Wien (Austria); Hobler, Gerhard, E-mail: gerhard.hobler@tuwien.ac.at [Institute of Solid-State Electronics, TU Wien, Floragasse 7, A-1040 Wien (Austria); Maciążek, Dawid; Postawa, Zbigniew [Institute of Physics, Jagiellonian University, ul. Lojasiewicza 11, 30348 Kraków (Poland)

    2017-02-15

    Highlights: • A simple model of surface roughness is proposed. • Its key feature is a linearly varying target density at the surface. • The model can be used in 1D/2D/3D Monte Carlo binary collision simulations. • The model fits well experimental glancing incidence sputtering yield data. - Abstract: It has been shown that surface roughness can strongly influence the sputtering yield – especially at glancing incidence angles where the inclusion of surface roughness leads to an increase in sputtering yields. In this work, we propose a simple one-parameter model (the “density gradient model”) which imitates surface roughness effects. In the model, the target’s atomic density is assumed to vary linearly between the actual material density and zero. The layer width is the sole model parameter. The model has been implemented in the binary collision simulator IMSIL and has been evaluated against various geometric surface models for 5 keV Ga ions impinging an amorphous Si target. To aid the construction of a realistic rough surface topography, we have performed MD simulations of sequential 5 keV Ga impacts on an initially crystalline Si target. We show that our new model effectively reproduces the sputtering yield, with only minor variations in the energy and angular distributions of sputtered particles. The success of the density gradient model is attributed to a reduction of the reflection coefficient – leading to increased sputtering yields, similar in effect to surface roughness.
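The density gradient model's sole ingredient is a linear ramp of the target's atomic density across a surface layer of width w (the one model parameter). As a function of depth z (hypothetical variable names, measured into the target from the nominal surface), it is simply:

```python
def density_profile(z, w, rho_bulk=1.0):
    """Target atomic density vs depth z in the density-gradient model:
    zero above the nominal surface (z <= 0), rising linearly across a
    layer of width w, and equal to the bulk density below the layer."""
    if z <= 0.0:
        return 0.0
    if z >= w:
        return rho_bulk
    return rho_bulk * z / w
```

In a binary-collision code this profile would scale the local collision probability (equivalently, the mean free path) near the surface, which is what reduces the reflection coefficient at glancing incidence.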

  13. Simple model of surface roughness for binary collision sputtering simulations

    International Nuclear Information System (INIS)

    Lindsey, Sloan J.; Hobler, Gerhard; Maciążek, Dawid; Postawa, Zbigniew

    2017-01-01

    Highlights: • A simple model of surface roughness is proposed. • Its key feature is a linearly varying target density at the surface. • The model can be used in 1D/2D/3D Monte Carlo binary collision simulations. • The model fits well experimental glancing incidence sputtering yield data. - Abstract: It has been shown that surface roughness can strongly influence the sputtering yield – especially at glancing incidence angles where the inclusion of surface roughness leads to an increase in sputtering yields. In this work, we propose a simple one-parameter model (the “density gradient model”) which imitates surface roughness effects. In the model, the target’s atomic density is assumed to vary linearly between the actual material density and zero. The layer width is the sole model parameter. The model has been implemented in the binary collision simulator IMSIL and has been evaluated against various geometric surface models for 5 keV Ga ions impinging an amorphous Si target. To aid the construction of a realistic rough surface topography, we have performed MD simulations of sequential 5 keV Ga impacts on an initially crystalline Si target. We show that our new model effectively reproduces the sputtering yield, with only minor variations in the energy and angular distributions of sputtered particles. The success of the density gradient model is attributed to a reduction of the reflection coefficient – leading to increased sputtering yields, similar in effect to surface roughness.

  14. Application of Simple CFD Models in Smoke Ventilation Design

    DEFF Research Database (Denmark)

    Brohus, Henrik; Nielsen, Peter Vilhelm; la Cour-Harbo, Hans

    2004-01-01

    The paper examines the possibilities of using simple CFD models in practical smoke ventilation design. The aim is to assess if it is possible with a reasonable accuracy to predict the behaviour of smoke transport in case of a fire. A CFD code mainly applicable for “ordinary” ventilation design...

  15. Covariate-adjusted Spearman's rank correlation with probability-scale residuals.

    Science.gov (United States)

    Liu, Qi; Li, Chun; Wanga, Valentine; Shepherd, Bryan E

    2018-06-01

    It is desirable to adjust Spearman's rank correlation for covariates, yet existing approaches have limitations. For example, the traditionally defined partial Spearman's correlation does not have a sensible population parameter, and the conditional Spearman's correlation defined with copulas cannot be easily generalized to discrete variables. We define population parameters for both partial and conditional Spearman's correlation through concordance-discordance probabilities. The definitions are natural extensions of Spearman's rank correlation in the presence of covariates and are general for any orderable random variables. We show that they can be neatly expressed using probability-scale residuals (PSRs). This connection allows us to derive simple estimators. Our partial estimator for Spearman's correlation between X and Y adjusted for Z is the correlation of PSRs from models of X on Z and of Y on Z, which is analogous to the partial Pearson's correlation derived as the correlation of observed-minus-expected residuals. Our conditional estimator is the conditional correlation of PSRs. We describe estimation and inference, and highlight the use of semiparametric cumulative probability models, which allow preservation of the rank-based nature of Spearman's correlation. We conduct simulations to evaluate the performance of our estimators and compare them with other popular measures of association, demonstrating their robustness and efficiency. We illustrate our method in two applications, a biomarker study and a large survey. © 2017, The International Biometric Society.
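The partial estimator has a compact recipe: fit models of X on Z and of Y on Z, convert the residuals to the probability scale, and correlate. The sketch below is a simplification for illustration, using ordinary normal linear models (so the PSR reduces to 2Φ(e/s) − 1), whereas the paper recommends semiparametric cumulative probability models to preserve the rank-based character.

```python
import math

def _ols_residuals(y, z):
    # simple least-squares fit of y on an intercept and z
    n = len(y)
    zbar, ybar = sum(z) / n, sum(y) / n
    b = sum((zi - zbar) * (yi - ybar) for zi, yi in zip(z, y)) / \
        sum((zi - zbar) ** 2 for zi in z)
    a = ybar - b * zbar
    return [yi - (a + b * zi) for zi, yi in zip(z, y)]

def _psr(resid):
    # probability-scale residual P(Y* < y) - P(Y* > y) = 2*F(y) - 1,
    # with F taken here as a normal CDF fitted to the residuals
    n = len(resid)
    s = (sum(e * e for e in resid) / n) ** 0.5 or 1.0
    return [math.erf(e / (s * math.sqrt(2.0))) for e in resid]

def _pearson(u, v):
    n = len(u)
    ub, vb = sum(u) / n, sum(v) / n
    num = sum((a - ub) * (b - vb) for a, b in zip(u, v))
    den = (sum((a - ub) ** 2 for a in u) *
           sum((b - vb) ** 2 for b in v)) ** 0.5
    return num / den

def partial_spearman(x, y, z):
    """Partial Spearman correlation of x and y adjusted for z, as the
    correlation of probability-scale residuals from models of x on z
    and of y on z."""
    return _pearson(_psr(_ols_residuals(x, z)), _psr(_ols_residuals(y, z)))
```

This mirrors the analogy the abstract draws: partial Pearson correlates observed-minus-expected residuals, while partial Spearman correlates PSRs.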

  16. Big Data-Driven Based Real-Time Traffic Flow State Identification and Prediction

    Directory of Open Access Journals (Sweden)

    Hua-pu Lu

    2015-01-01

Full Text Available With the rapid development of urban informatization, the era of big data is coming. To satisfy the demand for traffic congestion early warning, this paper studies methods for real-time traffic flow state identification and prediction based on big-data-driven theory. Traffic big data hold several characteristics, such as temporal correlation, spatial correlation, historical correlation, and multistate behavior. Traffic flow state quantification, the basis of traffic flow state identification, is achieved by a SAGA-FCM (simulated annealing genetic algorithm based fuzzy c-means) traffic clustering model. Considering computational simplicity and predictive accuracy, a bilevel optimization model for regional traffic flow correlation analysis is established to predict traffic flow parameters based on temporal-spatial-historical correlation. A two-stage model for correction coefficient optimization is put forward to simplify the bilevel optimization model. The first-stage model calculates the number of temporal-spatial-historical correlation variables. The second-stage model calculates the basic model formulation of regional traffic flow correlation. A case study based on a real-world road network in Beijing, China, is implemented to test the efficiency and applicability of the proposed modeling and computing methods.

  17. A simple but accurate procedure for solving the five-parameter model

    International Nuclear Information System (INIS)

    Mares, Oana; Paulescu, Marius; Badescu, Viorel

    2015-01-01

Highlights: • A new procedure for extracting the parameters of the one-diode model is proposed. • Only the basic information listed in the datasheet of PV modules is required. • Results demonstrate a simple, robust and accurate procedure. - Abstract: The current–voltage characteristic of a photovoltaic module is typically evaluated by using a model based on the solar cell equivalent circuit. The complexity of the procedure applied for extracting the model parameters depends on the data available in the manufacturer's datasheet. Since the datasheet is not detailed enough, simplified models have to be used in many cases. This paper proposes a new procedure for extracting the parameters of the one-diode model in standard test conditions, using only the basic data listed by all manufacturers in the datasheet (short-circuit current, open-circuit voltage and maximum power point). The procedure is validated by using manufacturers' data for six commercial crystalline silicon photovoltaic modules. Comparing the computed and measured current–voltage characteristics, the coefficient of determination is in the range 0.976–0.998. Thus, the proposed procedure represents a feasible tool for solving the five-parameter model applied to crystalline silicon photovoltaic modules. The procedure is described in detail, to guide potential users in deriving similar models for other types of photovoltaic modules.
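Once the five parameters are extracted, tracing the I-V curve means solving the implicit one-diode equation at each voltage. A minimal sketch (bisection on the current; parameter values in the test are illustrative, not taken from any datasheet, and the function is not the paper's extraction procedure):

```python
import math

def diode_current(V, Iph, I0, Rs, Rsh, n=1.2, Vt=0.02585, cells=60):
    """Current of the one-diode (five-parameter) model at voltage V,
    found by bisection of the implicit equation
        I = Iph - I0*(exp((V + I*Rs)/(n*Ns*Vt)) - 1) - (V + I*Rs)/Rsh,
    where Iph is the photocurrent, I0 the diode saturation current,
    Rs/Rsh the series/shunt resistances, n the ideality factor and
    Ns*Vt the module thermal voltage."""
    nVt = n * cells * Vt
    def f(I):
        return (Iph - I0 * math.expm1((V + I * Rs) / nVt)
                - (V + I * Rs) / Rsh - I)
    lo, hi = -Iph, 2.0 * Iph   # f is decreasing in I: f(lo) > 0 > f(hi)
    for _ in range(100):
        mid = 0.5 * (lo + hi)
        if f(mid) > 0.0:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)
```

At V = 0 the solution is close to the short-circuit current (slightly below Iph because of the shunt path), which is one of the datasheet anchor points the extraction procedure uses.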

  18. Sound card based digital correlation detection of weak photoelectrical signals

    International Nuclear Information System (INIS)

    Tang Guanghui; Wang Jiangcheng

    2005-01-01

A simple and low-cost digital correlation method is proposed for investigating weak photoelectrical signals, using a high-speed photodiode as the detector, directly connected to a programmably triggered sound-card analogue-to-digital converter and a personal computer. Two testing experiments, autocorrelation detection of weak flickering signals from a computer monitor against a background of noisy outdoor stray light, and cross-correlation measurement of the surface velocity of a moving tape, are performed, showing that the results are reliable and the method is easy to implement
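The cross-correlation measurement in the second experiment reduces to estimating the time delay between two sampled signals: the lag at the cross-correlation peak gives the transit time, and velocity follows as sensor spacing divided by that time. A minimal sketch (the function name and signals are illustrative, not the authors' code):

```python
def delay_by_xcorr(a, b):
    """Estimate the lag (in samples) of signal b relative to signal a
    by locating the peak of their cross-correlation.  For a surface
    velocity measurement, velocity = sensor_spacing / (lag / fs),
    where fs is the sound card's sampling rate."""
    n = len(a)
    best_lag, best_val = 0, float("-inf")
    for lag in range(-(n - 1), n):
        val = sum(a[i] * b[i + lag]
                  for i in range(n)
                  if 0 <= i + lag < n)
        if val > best_val:
            best_lag, best_val = lag, val
    return best_lag
```

The brute-force O(n²) loop is fine for short records; an FFT-based correlation would be the usual choice for longer ones.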

  19. Chaos from simple models to complex systems

    CERN Document Server

    Cencini, Massimo; Vulpiani, Angelo

    2010-01-01

Chaos: from simple models to complex systems aims to guide science and engineering students through chaos and nonlinear dynamics from classical examples to the most recent fields of research. The first part, intended for undergraduate and graduate students, is a gentle and self-contained introduction to the concepts and main tools for the characterization of deterministic chaotic systems, with emphasis to statistical approaches. The second part can be used as a reference by researchers as it focuses on more advanced topics including the characterization of chaos with tools of information theory.

  20. Quantum correlated cluster mean-field theory applied to the transverse Ising model.

    Science.gov (United States)

    Zimmer, F M; Schmidt, M; Maziero, Jonas

    2016-06-01

    Mean-field theory (MFT) is one of the main available tools for analytical calculations entailed in investigations regarding many-body systems. Recently, there has been a surge of interest in ameliorating this kind of method, mainly with the aim of incorporating geometric and correlation properties of these systems. The correlated cluster MFT (CCMFT) is an improvement that succeeded quite well in doing that for classical spin systems. Nevertheless, even the CCMFT presents some deficiencies when applied to quantum systems. In this article, we address this issue by proposing the quantum CCMFT (QCCMFT), which, in contrast to its former approach, uses general quantum states in its self-consistent mean-field equations. We apply the introduced QCCMFT to the transverse Ising model in honeycomb, square, and simple cubic lattices and obtain fairly good results both for the Curie temperature of thermal phase transition and for the critical field of quantum phase transition. Actually, our results match those obtained via exact solutions, series expansions or Monte Carlo simulations.
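For orientation, here is the plain single-site mean-field treatment of the transverse Ising model, i.e. the baseline that cluster approaches such as QCCMFT refine; this is explicitly not the QCCMFT itself, and the parameter values are illustrative.

```python
import math

def mft_magnetization(J=1.0, z=4, gamma=0.5, T=0.5, iters=500):
    """Self-consistent longitudinal magnetization of the transverse
    Ising model in single-site mean-field theory:
        m = (J*z*m / L) * tanh(L / T),  L = sqrt(gamma^2 + (J*z*m)^2),
    where J is the coupling, z the lattice coordination number,
    gamma the transverse field and T the temperature.  Solved by
    fixed-point iteration from an ordered initial guess."""
    m = 1.0
    for _ in range(iters):
        h = J * z * m                       # longitudinal mean field
        L = math.hypot(gamma, h)            # effective field magnitude
        m = (h / L) * math.tanh(L / T) if L > 0.0 else 0.0
    return m
```

At small gamma the iteration settles on a finite magnetization (ordered phase); at large gamma it collapses to zero, the mean-field caricature of the quantum phase transition whose critical field the QCCMFT locates more accurately.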

  1. Alpha-particle detection based on the BJT detector and simple, IC-based readout electronics

    Energy Technology Data Exchange (ETDEWEB)

    Rovati, L; Bonaiuti, M [Dipartimento di Ingegneria dell' Informazione, Universita di Modena e Reggio Emilia, Modena (Italy); Bettarini, S [Dipartimento di Fisica, Universita di Pisa and INFN Pisa, Pisa (Italy); Bosisio, L [Dipartimento di Fisica, Universita di Trieste and INFN Trieste, Trieste (Italy); Dalla Betta, G-F; Tyzhnevyi, V [Dipartimento di Ingegneria e Scienza dell' Informazione, Universita di Trento e INFN Trento, Trento (Italy); Verzellesi, G [Dipartimento di Scienze e Metodi dell' Ingegneria, Universita di Modena e Reggio Emilia and INFN Trento, Reggio Emilia (Italy); Zorzi, N, E-mail: giovanni.verzellesi@unimore.i [Fondazione Bruno Kessler (FBK), Trento (Italy)

    2009-11-15

    In this paper we propose a portable instrument for alpha-particle detection based on a previously-developed BJT detector and a simple, IC-based readout electronics. Experimental tests of the BJT detector and readout electronics are reported. Numerical simulations are adopted to predict the performance enhancement achievable with optimized BJT detectors.

  2. Alpha-particle detection based on the BJT detector and simple, IC-based readout electronics

    International Nuclear Information System (INIS)

    Rovati, L; Bonaiuti, M; Bettarini, S; Bosisio, L; Dalla Betta, G-F; Tyzhnevyi, V; Verzellesi, G; Zorzi, N

    2009-01-01

    In this paper we propose a portable instrument for alpha-particle detection based on a previously-developed BJT detector and a simple, IC-based readout electronics. Experimental tests of the BJT detector and readout electronics are reported. Numerical simulations are adopted to predict the performance enhancement achievable with optimized BJT detectors.

  3. Three-dimensional location of target fish by monocular infrared imaging sensor based on a L-z correlation model

    Science.gov (United States)

    Lin, Kai; Zhou, Chao; Xu, Daming; Guo, Qiang; Yang, Xinting; Sun, Chuanheng

    2018-01-01

Monitoring of fish behavior has drawn extensive attention in pharmacological research, water environment assessment, bio-inspired robot design and aquaculture. Because an infrared sensor is low in cost and free of illumination limitations and electromagnetic interference, interest in its use for behavior monitoring has grown considerably, especially for 3D trajectory monitoring, which quantifies fish behavior on the basis of the near-infrared absorption of water. However, precise positioning in the vertical dimension (z) remains a challenge, which greatly affects infrared tracking system accuracy. Hence, an intensity (L) and coordinate (z) correlation model is proposed to overcome this limitation. In the modelling process, two cameras (top view and side view) were employed synchronously to identify the 3D coordinates of each fish (x-y and z, respectively); the major challenges were the distortion caused by the perspective effect and the refraction at water boundaries. Therefore, a coordinate correction formulation was first designed for the calibration. The L-z correlation model was then established based on Lambert's absorption law and statistical data analysis, and the model was evaluated by monitoring the 3D trajectories of four fish during the day and night. Finally, variations among individuals and the limits of depth detection of the model are discussed. Compared with previous studies, the model achieves favorable prediction performance for 3D trajectory monitoring, which could provide some inspiration for fish behavior monitoring, especially for nocturnal behavior studies.
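Under Lambert's absorption law alone, the L-z relation inverts in closed form. A minimal sketch assuming a calibrated surface intensity L0 and an effective water absorption coefficient mu (hypothetical names; the paper additionally applies coordinate corrections and statistical tuning on top of the pure absorption law):

```python
import math

def intensity_at_depth(z, L0, mu):
    """Forward model from Lambert's absorption law:
    observed intensity L = L0 * exp(-mu * z) at depth z."""
    return L0 * math.exp(-mu * z)

def depth_from_intensity(L, L0, mu):
    """Invert Lambert's law to recover the depth z of a target from
    its near-infrared image intensity L, given calibrated L0 and mu."""
    return math.log(L0 / L) / mu
```

The depth-detection limit discussed in the abstract shows up naturally here: once L decays to the sensor's noise floor, the logarithm amplifies intensity noise into large depth errors.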

  4. Simple Regge pole model for Compton scattering of protons

    International Nuclear Information System (INIS)

    Saleem, M.; Fazal-e-Aleem

    1978-01-01

It is shown that by a phenomenological choice of the residue functions, the differential cross section for γp → γp, including the very recent measurements up to -t = 4.3 (GeV/c)², can be explained at all measured energies greater than 2 GeV with a simple Regge pole model

  5. Implementasi Perbandingan Metode Simple Additive Weighting Dengan Weighted Sum Model Dalam Pemilihan Siswa Berprestasi

    OpenAIRE

    Siregar, M. Fajrul Falah

    2015-01-01

The Good Performance Student Selection Program of MIN Tanjung Sari aims to increase students' interest in learning. The selection is based on predetermined criteria. To assist the selection process, a decision support system is needed. The methods used are Simple Additive Weighting and the Weighted Sum Model. In this research the results of both methods are tested against three periods of good-performance student data held by MIN Tanjung Sari Medan Selayang. This s...

  6. Simple prognostic model for patients with advanced cancer based on performance status.

    Science.gov (United States)

    Jang, Raymond W; Caraiscos, Valerie B; Swami, Nadia; Banerjee, Subrata; Mak, Ernie; Kaya, Ebru; Rodin, Gary; Bryson, John; Ridley, Julia Z; Le, Lisa W; Zimmermann, Camilla

    2014-09-01

Providing survival estimates is important for decision making in oncology care. The purpose of this study was to provide survival estimates for outpatients with advanced cancer, using the Eastern Cooperative Oncology Group (ECOG), Palliative Performance Scale (PPS), and Karnofsky Performance Status (KPS) scales, and to compare their ability to predict survival. ECOG, PPS, and KPS were completed by physicians for each new patient attending the Princess Margaret Cancer Centre outpatient Oncology Palliative Care Clinic (OPCC) from April 2007 to February 2010. Survival analysis was performed using the Kaplan-Meier method. The log-rank test for trend was employed to test for differences in survival curves for each level of performance status (PS), and the concordance index (C-statistic) was used to test the predictive discriminatory ability of each PS measure. Measures were completed for 1,655 patients. PS delineated survival well for all three scales according to the log-rank test for trend. The C-statistic was similar for all three scales and ranged from 0.63 to 0.64. We present a simple tool that uses PS alone to prognosticate in advanced cancer, and has similar discriminatory ability to more complex models. Copyright © 2014 by American Society of Clinical Oncology.
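The C-statistic reported here is Harrell's concordance index for survival data. A minimal Python sketch for right-censored data (not the authors' implementation; a quadratic-time version fine for small cohorts):

```python
def concordance_index(times, events, risk):
    """Harrell's concordance index (C-statistic).

    Among usable pairs, the fraction where the subject with the
    shorter observed survival time has the higher predicted risk.
    `events[i]` is 1 for an observed death and 0 for censoring; a
    pair is usable only if the earlier of the two times is an event
    (otherwise the ordering of survival is unknown).  Ties in the
    risk score count as 1/2."""
    num = den = 0.0
    n = len(times)
    for i in range(n):
        for j in range(n):
            if events[i] == 1 and times[i] < times[j]:
                den += 1.0
                if risk[i] > risk[j]:
                    num += 1.0
                elif risk[i] == risk[j]:
                    num += 0.5
    return num / den
```

A C of 0.63-0.64, as found for all three PS scales, means the higher-risk member of a usable pair dies first about 63-64% of the time, i.e. modest but clinically useful discrimination.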

  7. Simple and robust image-based autofocusing for digital microscopy.

    Science.gov (United States)

    Yazdanfar, Siavash; Kenny, Kevin B; Tasimi, Krenar; Corwin, Alex D; Dixon, Elizabeth L; Filkins, Robert J

    2008-06-09

    A simple image-based autofocusing scheme for digital microscopy is demonstrated that uses as few as two intermediate images to bring the sample into focus. The algorithm is adapted to a commercial inverted microscope and used to automate brightfield and fluorescence imaging of histopathology tissue sections.
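    One plausible way such a two-to-three-image autofocus can work (the abstract does not give the exact metric or search rule, so both the squared-gradient sharpness measure and the parabolic fit below are assumptions): sample a focus metric at a few stage positions and take the vertex of the parabola through the samples as the in-focus position.

```python
def sharpness(image):
    """Squared-gradient focus metric on a 2-D list of pixel intensities;
    one common image-based sharpness measure (assumed, not the paper's)."""
    return sum((row[i + 1] - row[i]) ** 2
               for row in image for i in range(len(row) - 1))

def parabolic_peak(z1, s1, z2, s2, z3, s3):
    """Vertex of the parabola through three (stage position, sharpness)
    samples.  Near best focus the metric is roughly quadratic, so two or
    three probe images suffice to estimate the in-focus position."""
    num = (z2 - z1) ** 2 * (s2 - s3) - (z2 - z3) ** 2 * (s2 - s1)
    den = (z2 - z1) * (s2 - s3) - (z2 - z3) * (s2 - s1)
    return z2 - 0.5 * num / den
```

    With samples of s(z) = 5 − (z − 1.3)² at z = 0, 1, 2, the fit recovers the true focus position 1.3 exactly.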

  8. Boundary correlators in supergroup WZNW models

    Energy Technology Data Exchange (ETDEWEB)

    Creutzig, T.; Schomerus, V.

    2008-04-15

    We investigate correlation functions for maximally symmetric boundary conditions in the WZNW model on GL(1|1). Special attention is paid to volume-filling branes. Generalizing earlier ideas for the bulk sector, we set up a Kac-Wakimoto-like formalism for the boundary model. This first-order formalism is then used to calculate bulk-boundary 2-point functions and the boundary 3-point functions of the model. The note ends with a few comments on correlation functions of atypical fields, point-like branes and generalizations to other supergroups. (orig.)

  9. A Bayes linear Bayes method for estimation of correlated event rates.

    Science.gov (United States)

    Quigley, John; Wilson, Kevin J; Walls, Lesley; Bedford, Tim

    2013-12-01

    Typically, full Bayesian estimation of correlated event rates can be computationally challenging since estimators are intractable. When estimation of event rates represents one activity within a larger modeling process, there is an incentive to develop more efficient inference than provided by a full Bayesian model. We develop a new subjective inference method for correlated event rates based on a Bayes linear Bayes model under the assumption that events are generated from a homogeneous Poisson process. To reduce the elicitation burden we introduce homogenization factors to the model and, as an alternative to a subjective prior, an empirical method using the method of moments is developed. Inference under the new method is compared against estimates obtained under a full Bayesian model, which takes a multivariate gamma prior, where the predictive and posterior distributions are derived in terms of well-known functions. The mathematical properties of both models are presented. A simulation study shows that the Bayes linear Bayes inference method and the full Bayesian model provide equally reliable estimates. An illustrative example, motivated by a problem of estimating correlated event rates across different users in a simple supply chain, shows how ignoring the correlation leads to biased estimation of event rates. © 2013 Society for Risk Analysis.
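    The homogeneous-Poisson building block behind this model is the conjugate gamma-Poisson update; a minimal single-rate sketch (the Bayes linear Bayes machinery for correlated rates and the homogenization factors go beyond this fragment):

```python
def gamma_poisson_update(alpha, beta, events, exposure):
    """Conjugate update for the rate of a homogeneous Poisson process with a
    Gamma(alpha, beta) prior (beta = rate parameter): observing `events`
    counts over `exposure` time yields a
    Gamma(alpha + events, beta + exposure) posterior."""
    return alpha + events, beta + exposure

def posterior_mean(alpha, beta):
    """Mean of a Gamma(alpha, beta) distribution (rate parameterization)."""
    return alpha / beta

# prior Gamma(2, 1); observe 3 events over 4 time units
a, b = gamma_poisson_update(2.0, 1.0, events=3, exposure=4.0)
```

    The posterior mean (alpha + k)/(beta + T) shrinks the raw rate k/T toward the prior mean alpha/beta, which is the tractability that makes the gamma prior attractive when full multivariate inference is too expensive.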

  10. Simple Model-Free Controller for the Stabilization of Planetary Inverted Pendulum

    Directory of Open Access Journals (Sweden)

    Huanhuan Mai

    2012-01-01

    Full Text Available A simple model-free controller is presented for solving nonlinear dynamic control problems. As an example of the problem, a planetary gear-type inverted pendulum (PIP) is discussed. To control the inherently unstable system, which requires real-time control responses, the design of a smart and simple controller is necessary. The proposed model-free controller includes a swing-up controller part and a stabilization controller part; neither controller has any information about the PIP. Since the input/output scaling parameters of the fuzzy controller are highly sensitive, we use a genetic algorithm (GA) to obtain the optimal control parameters. The experimental results show the effectiveness and robustness of the present controller.

  11. Landau-Zener transitions and Dykhne formula in a simple continuum model

    Science.gov (United States)

    Dunham, Yujin; Garmon, Savannah

    The Landau-Zener model describing the interaction between two linearly driven discrete levels is useful in describing many simple dynamical systems; however, no system is completely isolated from its surrounding environment. Here we examine generalizations of the original Landau-Zener model to study simple environmental influences. We consider a model in which one of the discrete levels is replaced with an energy continuum, and find that the survival probability for the initially occupied diabatic level is unaffected by the presence of the continuum. This result can be predicted by assuming that each step in the evolution of the diabatic state evolves independently according to the Landau-Zener formula, even in the continuum limit. We also show that, at least for the simplest model, this result can be predicted with the natural generalization of the Dykhne formula for open systems. We also observe dissipation, as the non-escape probability from the discrete levels is no longer equal to one.
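    The independent-steps picture described above can be sketched with the standard Landau-Zener formula (units with hbar = 1 by default; the discretized couplings below are illustrative, not from the paper):

```python
import math

def landau_zener_survival(coupling, sweep_rate, hbar=1.0):
    """Probability of remaining in the initial diabatic state after one
    linear sweep through a crossing:
        P = exp(-2*pi*coupling**2 / (hbar*sweep_rate))."""
    return math.exp(-2.0 * math.pi * coupling ** 2 / (hbar * sweep_rate))

def survival_through_continuum(couplings, sweep_rate):
    """Independent-crossing assumption: each discretized continuum level
    contributes its own Landau-Zener factor, and the total survival
    probability is their product."""
    p = 1.0
    for d in couplings:
        p *= landau_zener_survival(d, sweep_rate)
    return p
```

    In the continuum limit the product becomes an exponential of an integral over the coupling density, which is the statement that the survival probability is insensitive to how the continuum is discretized.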

  12. Measurement-based reliability/performability models

    Science.gov (United States)

    Hsueh, Mei-Chen

    1987-01-01

    Measurement-based models based on real error-data collected on a multiprocessor system are described. Model development from the raw error-data to the estimation of cumulative reward is also described. A workload/reliability model is developed based on low-level error and resource usage data collected on an IBM 3081 system during its normal operation in order to evaluate the resource usage/error/recovery process in a large mainframe system. Thus, both normal and erroneous behavior of the system are modeled. The results provide an understanding of the different types of errors and recovery processes. The measured data show that the holding times in key operational and error states are not simple exponentials and that a semi-Markov process is necessary to model the system behavior. A sensitivity analysis is performed to investigate the significance of using a semi-Markov process, as opposed to a Markov process, to model the measured system.

  13. Electron-correlation based externally predictive QSARs for mutagenicity of nitrated-PAHs in Salmonella typhimurium TA100.

    Science.gov (United States)

    Reenu; Vikas

    2014-03-01

    In quantitative modeling, there are two major aspects that decide reliability and real external predictivity of a structure-activity relationship (SAR) based on quantum chemical descriptors. First, the information encoded in employed molecular descriptors, computed through a quantum-mechanical method, should be precisely estimated. The accuracy of the quantum-mechanical method, however, is dependent upon the amount of electron-correlation it incorporates. Second, the real external predictivity of a developed quantitative SAR (QSAR) should be validated employing an external prediction set. In this work, to analyze the role of electron-correlation, QSAR models are developed for a set of 51 ubiquitous pollutants, namely, nitrated monocyclic and polycyclic aromatic hydrocarbons (nitrated-AHs and PAHs) having mutagenic activity in TA100 strain of Salmonella typhimurium. The quality of the models, through state-of-the-art external validation procedures employing an external prediction set, is compared to the best models known in the literature for mutagenicity. The molecular descriptors whose electron-correlation contribution is analyzed include total energy, energy of HOMO and LUMO, and commonly employed electron-density based descriptors such as chemical hardness, chemical softness, absolute electronegativity and electrophilicity index. The electron-correlation based QSARs are also compared with those developed using quantum-mechanical descriptors computed with advanced semi-empirical (SE) methods such as PM6, PM7, RM1, and ab initio methods, namely, the Hartree-Fock (HF) and the density functional theory (DFT). The models, developed using electron-correlation contribution of the quantum-mechanical descriptors, are found to be not only reliable but also satisfactorily predictive when compared to the existing robust models. The robustness of the models based on descriptors computed through advanced SE methods, is also observed to be comparable to those developed with

  14. Simple LMFBR axial-flow friction-factor correlation

    International Nuclear Information System (INIS)

    Chan, Y.N.; Todreas, N.E.

    1981-09-01

    Complicated LMFBR axial lead-length-averaged friction factor correlations are reduced to an easy, ready-to-use function of bundle Reynolds number for wire-wrapped bundles. This function, together with the power curves used to calculate the associated constants, is incorporated in a computer pre-processor, EZFRIC. The constants required for the calculation of the subchannel and bundle friction factors are derived and correlated into power curves of geometrical parameters. A computer program, FRIC, which can alternatively be used to accurately calculate these constants, is also included. The accurate values of the constants, the corresponding values predicted by the power curves, and the percentage error of prediction are tabulated for a wide variety of geometries of interest
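    The shape of such a ready-to-use correlation can be sketched as follows; all constants below are illustrative placeholders, since the real geometry-dependent constants come from the EZFRIC/FRIC power curves and are not given in the abstract:

```python
import math

def friction_factor(reynolds, a_lam=110.0, a_turb=0.55, b_turb=-0.25,
                    re_lam=400.0, re_turb=5000.0):
    """Bundle-average friction factor as a single function of Reynolds
    number: f = a/Re in the laminar range, f = a * Re**b in the turbulent
    range, and a log-log linear blend across the transition.  Every constant
    here is an assumed placeholder, not an EZFRIC value."""
    if reynolds <= re_lam:
        return a_lam / reynolds
    if reynolds >= re_turb:
        return a_turb * reynolds ** b_turb
    # interpolate log(f) linearly in log(Re) across the transition range
    f_lam = a_lam / re_lam
    f_turb = a_turb * re_turb ** b_turb
    x = (math.log(reynolds) - math.log(re_lam)) / (math.log(re_turb) - math.log(re_lam))
    return math.exp((1.0 - x) * math.log(f_lam) + x * math.log(f_turb))
```

    A pre-processor like EZFRIC would evaluate the geometry power curves once per bundle to fill in the constants, leaving the user with a single f(Re) call.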

  15. The lucky image-motion prediction for simple scene observation based soft-sensor technology

    Science.gov (United States)

    Li, Yan; Su, Yun; Hu, Bin

    2015-08-01

    High resolution is important for earth remote sensors, but vibration of the remote-sensor platforms is a major factor restricting high-resolution imaging. Image-motion prediction and real-time compensation are key technologies for solving this problem. Because the traditional autocorrelation image algorithm cannot meet the demands of simple-scene image stabilization, this paper proposes using soft-sensor technology for image-motion prediction and focuses on algorithm optimization for imaging image-motion prediction. Simulation results indicate that the improved lucky image-motion stabilization algorithm, combining a Back Propagation neural network (BP NN) and a support vector machine (SVM), is the most suitable for simple-scene image stabilization. The relative error of image-motion prediction based on the soft-sensor technology is below 5%, and the training and computation speed of the mathematical prediction model is fast enough for real-time image stabilization in aerial photography.

  16. Morphology-based Enhancement of a French SIMPLE Lexicon

    OpenAIRE

    Namer , Fiammetta; Bouillon , Pierrette; Jacquey , Evelyne; Ruimy , Nilda

    2009-01-01

    International audience; In this paper, we propose a semi-automatic methodology for acquiring a French SIMPLE lexicon based on the morphological properties of complex words. This method combines the results of the French morphological analyzer DériF with information from general lexical resources and corpora, when available. It is evaluated on a set of neologisms extracted from Le Monde newspaper corpora.

  17. A simple procedure to model water level fluctuations in partially inundated wetlands

    NARCIS (Netherlands)

    Spieksma, JFM; Schouwenaars, JM

    When modelling groundwater behaviour in wetlands, there are specific problems related to the presence of open water in small-sized mosaic patterns. A simple quasi two-dimensional model to predict water level fluctuations in partially inundated wetlands is presented. In this model, the ratio between

  18. Correlation functions of the Ising model and the eight-vertex model

    International Nuclear Information System (INIS)

    Ko, L.F.

    1986-01-01

    Calculations for the two-point correlation functions in the scaling limit for two statistical models are presented. In Part I, the Ising model with a linear defect is studied for T < T/sub c/ and T > T/sub c/. The transfer matrix method of Onsager and Kaufman is used. The energy-density correlation is given by functions related to the modified Bessel functions. The dispersion expansions for the spin-spin correlation functions are derived. The dominant behavior for large separations at T not equal to T/sub c/ is extracted. It is shown that these expansions lead to systems of Fredholm integral equations. In Part II, the electric correlation function of the eight-vertex model for T < T/sub c/ is studied. The eight-vertex model decouples into two independent Ising models when the four-spin coupling vanishes. To first order in the four-spin coupling, the electric correlation function is related to a three-point function of the Ising model. This relation is systematically investigated and the full dispersion expansion (to first order in the four-spin coupling) is obtained. The result is a new kind of structure which, unlike those of many solvable models, is apparently not expressible in terms of linear integral equations

  19. Optimized theory for simple and molecular fluids.

    Science.gov (United States)

    Marucho, M; Montgomery Pettitt, B

    2007-03-28

    An optimized closure approximation for both simple and molecular fluids is presented. A smooth interpolation between Percus-Yevick and hypernetted chain closures is optimized by minimizing the free energy self-consistently with respect to the interpolation parameter(s). The molecular version is derived from a refinement of the method for simple fluids. In doing so, a method is proposed which appropriately couples an optimized closure with the variant of the diagrammatically proper integral equation recently introduced by this laboratory [K. M. Dyer et al., J. Chem. Phys. 123, 204512 (2005)]. The simplicity of the expressions involved in this proposed theory has allowed the authors to obtain an analytic expression for the approximate excess chemical potential. This is shown to be an efficient tool to estimate, from first principles, the numerical value of the interpolation parameters defining the aforementioned closure. As a preliminary test, representative models for simple fluids and homonuclear diatomic Lennard-Jones fluids were analyzed, obtaining site-site correlation functions in excellent agreement with simulation data.

  20. The simple modelling method for storm- and grey-water quality ...

    African Journals Online (AJOL)

    The simple modelling method for storm- and grey-water quality management applied to Alexandra settlement. ... objectives optimally consist of educational programmes, erosion and sediment control, street sweeping, removal of sanitation system overflows, impervious cover reduction, downspout disconnections, removal of ...

  1. A review and development of correlations for base pressure and base heating in supersonic flow

    Energy Technology Data Exchange (ETDEWEB)

    Lamb, J.P. [Texas Univ., Austin, TX (United States). Dept. of Mechanical Engineering; Oberkampf, W.L. [Sandia National Labs., Albuquerque, NM (United States)

    1993-11-01

    A comprehensive review of experimental base pressure and base heating data related to supersonic and hypersonic flight vehicles has been completed. Particular attention was paid to free-flight data as well as wind tunnel data for models without rear sting support. Using theoretically based correlation parameters, a series of internally consistent, empirical prediction equations has been developed for planar and axisymmetric geometries (wedges, cones, and cylinders). These equations encompass the speed range from low supersonic to hypersonic flow and laminar and turbulent forebody boundary layers. A wide range of cone and wedge angles and cone bluntness ratios was included in the data base used to develop the correlations. The present investigation also included preliminary studies of the effect of angle of attack and specific-heat ratio of the gas.

  2. A simple model explaining super-resolution in absolute optical instruments

    Science.gov (United States)

    Leonhardt, Ulf; Sahebdivan, Sahar; Kogan, Alex; Tyc, Tomáš

    2015-05-01

    We develop a simple, one-dimensional model for super-resolution in absolute optical instruments that is able to describe the interplay between sources and detectors. Our model explains the subwavelength sensitivity of a point detector to a point source reported in previous computer simulations and experiments (Miñano 2011 New J. Phys.13 125009; Miñano 2014 New J. Phys.16 033015).

  3. Gaussian graphical modeling reveals specific lipid correlations in glioblastoma cells

    Science.gov (United States)

    Mueller, Nikola S.; Krumsiek, Jan; Theis, Fabian J.; Böhm, Christian; Meyer-Bäse, Anke

    2011-06-01

    Advances in high-throughput measurements of biological specimens necessitate the development of biologically driven computational techniques. To understand the molecular level of many human diseases, such as cancer, lipid quantifications have been shown to offer an excellent opportunity to reveal disease-specific regulations. The data analysis of the cell lipidome, however, remains a challenging task and cannot be accomplished solely based on intuitive reasoning. We have developed a method to identify a lipid correlation network which is entirely disease-specific. A powerful method to correlate experimentally measured lipid levels across the various samples is a Gaussian Graphical Model (GGM), which is based on partial correlation coefficients. In contrast to regular Pearson correlations, partial correlations aim to identify only direct correlations while eliminating indirect associations. Conventional GGM calculations on the entire dataset cannot, however, provide information on whether a correlation is truly disease-specific with respect to the disease samples and not a correlation of control samples. Thus, we implemented a novel differential GGM approach unraveling only the disease-specific correlations, and applied it to the lipidome of immortalized glioblastoma tumor cells. A large set of lipid species was measured by mass spectrometry in order to evaluate lipid remodeling in response to a combination of cell perturbations inducing programmed cell death, while the other perturbations served solely as biological controls. With the differential GGM, we were able to reveal glioblastoma-specific lipid correlations to advance biomedical research on novel gene therapies.
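    The partial-correlation computation at the core of a GGM can be sketched directly from the precision (inverse covariance) matrix; the three-variable chain below is a toy example, not the lipidomics data:

```python
import numpy as np

def partial_correlations(cov):
    """Partial correlation matrix from a covariance matrix: invert to get
    the precision matrix P, then rho_ij|rest = -P_ij / sqrt(P_ii * P_jj)."""
    p = np.linalg.inv(np.asarray(cov, dtype=float))
    d = np.sqrt(np.diag(p))
    pc = -p / np.outer(d, d)
    np.fill_diagonal(pc, 1.0)
    return pc

# Chain X - Y - Z: X and Z are marginally correlated (0.25 = 0.5 * 0.5)
# but conditionally independent given Y, so their partial correlation ~ 0.
cov = [[1.0, 0.5, 0.25],
       [0.5, 1.0, 0.5],
       [0.25, 0.5, 1.0]]
pc = partial_correlations(cov)
```

    This is exactly the property the abstract exploits: Pearson correlation would link X and Z, while the partial correlation removes the indirect association through Y.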

  4. Galactic evolution with self-regulated star formation - Stability of a simple one-zone model

    International Nuclear Information System (INIS)

    Parravano, A.; Rosenzweig, P.; Teran, M.

    1990-01-01

    In a simple one-zone model of mass exchange between three components (stars, clouds, and diffuse gas), a self-regulating mechanism based on the sensitivity of the condensation of small cool clouds to the radiation density in the 912-1100 A band is included. This mechanism is capable of affecting the large-scale structure of galaxies because it acts on a large scale in a very short time. Even in the most favorable models for the production of nonlinear oscillations, the inclusion of this self-regulation mechanism leads, in many cases, to progressive damping of the oscillations. 26 refs

  5. New Models for Velocity/Pressure-Gradient Correlations in Turbulent Boundary Layers

    Science.gov (United States)

    Poroseva, Svetlana; Murman, Scott

    2014-11-01

    To improve the performance of Reynolds-Averaged Navier-Stokes (RANS) turbulence models, one has to improve the accuracy of models for three physical processes: turbulent diffusion, interaction of turbulent pressure and velocity fluctuation fields, and dissipative processes. The accuracy of modeling the turbulent diffusion depends on the order of the statistical closure chosen as a basis for a RANS model. When the Gram-Charlier series expansions for the velocity correlations are used to close the set of RANS equations, no assumption of Gaussian turbulence is invoked and no unknown model coefficients are introduced into the modeled equations. In this way, the closure procedure reduces the modeling uncertainty of fourth-order RANS (FORANS) closures. Experimental and direct numerical simulation data confirmed the validity of using the Gram-Charlier series expansions in various flows including boundary layers. We will address modeling of the velocity/pressure-gradient correlations. New linear models will be introduced for the second- and higher-order correlations applicable to two-dimensional incompressible wall-bounded flows. Results of model validation with DNS data in a channel flow and in a zero-pressure-gradient boundary layer over a flat plate will be demonstrated. A part of the material is based upon work supported by NASA under award NNX12AJ61A.

  6. New simple deposition model based on reassessment of global fallout data 1954 - 1976

    Energy Technology Data Exchange (ETDEWEB)

    Palsson, S.E. [Icelandic Radiation Safety Authority, Reykjavik (Iceland); Bergan, T.D. [Directorate for Civil Protection and Emergency Planning, Toensberg (Norway); Howard, B.J. [Centre for Ecology and Hydrology, Lancaster Environment Centre, Lancaster (United Kingdom); Ikaeheimonen, T.K. [STUK - Radiation and Nuclear Safety Authority, Helsinki (Finland); Isaksson, M. [Univ. of Gothenburg. Dept. of Radiation Physics, Institute of Clinical Sciences, Sahlgren Academy, Gothenburg (Sweden); Nielsen, Sven P. [Technical Univ. of Denmark. DTU Nutech, Roskilde (Denmark); Paatero, J. [Finnish Meteorological Institute. Observation Services, Helsinki (Finland)

    2012-12-15

    Atmospheric testing of nuclear weapons began in 1945 and largely ceased in 1963. This testing is the major cause of the distribution of man-made radionuclides over the globe and constitutes a background that needs to be considered when the effects of other sources are estimated. The main radionuclides of long-term concern (after the first months) are generally assumed to be {sup 137}Cs and {sup 90}Sr. It has been known for a long time that the deposition density of {sup 137}Cs and {sup 90}Sr is approximately proportional to the amount of precipitation. But the use of this proportional relationship raised some questions, such as (a) over how large an area can it be assumed that the concentration in precipitation is the same at any given time; (b) how does this agree with the observed latitude dependency of deposition density; and (c) are there any other parameters that could be of use in a simple model describing global fallout? These issues were amongst those taken up in the NKS-B EcoDoses activity. The preliminary results showed, for both {sup 137}Cs and {sup 90}Sr, that the measured concentration had been similar at many European and North American sites at any given time and that the change with time had been similar. These findings were followed up in a more thorough study in this (DepEstimates) activity. Global data (including the US EML and UK AERE data sets) from 1954 - 1976 for {sup 90}Sr and {sup 137}Cs were analysed to test how well different potential explanatory variables could describe the deposition density. The best fit was obtained not by assuming the traditional proportional relationship, but instead a non-linear power function. The predictions obtained using this new model may not be significantly different from those obtained using the traditional model when a limited data set, such as that from one country, is used as a test, as this report showed. But for larger data sets, and for understanding the underlying processes, the new model should be an improvement. (Author)
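    The non-linear power-function alternative to the proportional model can be sketched as a log-log least-squares fit; the precipitation/deposition numbers below are synthetic, not the EML/AERE data:

```python
import math

def fit_power_law(precip, deposition):
    """Least-squares fit of D = a * P**b in log-log space.  The traditional
    proportional model is the special case b = 1; letting b float is the
    non-linear power function the abstract describes."""
    xs = [math.log(p) for p in precip]
    ys = [math.log(d) for d in deposition]
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    b = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
         / sum((x - mx) ** 2 for x in xs))
    a = math.exp(my - b * mx)
    return a, b

# Synthetic check: data generated from D = 2 * P**0.8 should recover (2, 0.8)
precip = [100.0, 300.0, 600.0, 1000.0]
a, b = fit_power_law(precip, [2.0 * p ** 0.8 for p in precip])
```

    Comparing the fitted b against 1 (with its standard error) is one simple way to test whether a data set actually supports the traditional proportional relationship.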

  7. Multi-agent modeling for the simulation of a simple smart microgrid

    International Nuclear Information System (INIS)

    Kremers, Enrique; Gonzalez de Durana, Jose; Barambones, Oscar

    2013-01-01

    Highlights: • We created a systemic, modular model for a microgrid with a load-flow calculation. • The model is modular and, besides the power devices, also includes a communication layer. • An agent-based approach allows intelligent strategies to be included on every node of the system. • First feasibility simulations were run to show the possible outcomes of generator and load-management strategies. - Abstract: The smart grid is a highly complex system that is being formed from the traditional power grid by adding new and sophisticated communication and control devices. This will enable the integration of new elements for distributed power generation and increasingly automated operation, both by utilities and by customers. In order to model such systems, a bottom-up method is followed, using only a few basic elements structured into two layers: a physical layer for electrical power transmission and a logical layer for element communication. A simple case study is presented to analyze the possibilities of simulation. It shows a microgrid model with dynamic load management and an integrated approach that can process both electrical and communication flows

  8. Variational Wavefunction for the Periodic Anderson Model with Onsite Correlation Factors

    Science.gov (United States)

    Kubo, Katsunori; Onishi, Hiroaki

    2017-01-01

    We propose a variational wavefunction containing parameters to tune the probabilities of all the possible onsite configurations for the periodic Anderson model. We call it the full onsite-correlation wavefunction (FOWF). This is a simple extension of the Gutzwiller wavefunction (GWF), in which one parameter is included to tune the double occupancy of the f electrons at the same site. We compare the energy of the GWF and the FOWF evaluated by the variational Monte Carlo method and that obtained with the density-matrix renormalization group method. We find that the energy is considerably improved in the FOWF. On the other hand, the physical quantities do not change significantly between these two wavefunctions as long as they describe the same phase, such as the paramagnetic phase. From these results, we not only demonstrate the improvement by the FOWF, but we also gain insights on the applicability and limitation of the GWF to the periodic Anderson model.
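    The variational Monte Carlo evaluation step mentioned above can be illustrated on a toy problem (a 1-D harmonic oscillator with a Gaussian trial wavefunction, not the FOWF or the periodic Anderson model):

```python
import math
import random

def vmc_energy(alpha, steps=2000, step_size=1.0, seed=1):
    """Toy variational Monte Carlo: 1-D harmonic oscillator (hbar = m =
    omega = 1) with trial wavefunction psi(x) = exp(-alpha * x**2).
    Metropolis-sample |psi|**2 and average the local energy
        E_L(x) = alpha + x**2 * (1/2 - 2*alpha**2),
    which is exactly 1/2 for every x at the optimal alpha = 1/2."""
    rng = random.Random(seed)
    x, total = 0.0, 0.0
    for _ in range(steps):
        trial = x + rng.uniform(-step_size, step_size)
        # Metropolis acceptance with ratio |psi(trial)|^2 / |psi(x)|^2
        if rng.random() < min(1.0, math.exp(-2.0 * alpha * (trial * trial - x * x))):
            x = trial
        total += alpha + x * x * (0.5 - 2.0 * alpha * alpha)
    return total / steps
```

    Minimizing this energy estimate over the variational parameter(s) is the same procedure used to compare the GWF and FOWF, only with many-body configurations in place of a single coordinate.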

  9. Variational wavefunction for the periodic Anderson model with onsite correlation factors

    International Nuclear Information System (INIS)

    Kubo, Katsunori; Onishi, Hiroaki

    2017-01-01

    We propose a variational wavefunction containing parameters to tune the probabilities of all the possible onsite configurations for the periodic Anderson model. We call it the full onsite-correlation wavefunction (FOWF). This is a simple extension of the Gutzwiller wavefunction (GWF), in which one parameter is included to tune the double occupancy of the f electrons at the same site. We compare the energy of the GWF and the FOWF evaluated by the variational Monte Carlo method and that obtained with the density-matrix renormalization group method. We find that the energy is considerably improved in the FOWF. On the other hand, the physical quantities do not change significantly between these two wavefunctions as long as they describe the same phase, such as the paramagnetic phase. From these results, we not only demonstrate the improvement by the FOWF, but we also gain insights on the applicability and limitation of the GWF to the periodic Anderson model. (author)

  10. RAYLEIGH SCATTERING MODELS WITH CORRELATION INTEGRAL

    Directory of Open Access Journals (Sweden)

    S. F. Kolomiets

    2014-01-01

    Full Text Available This article offers one possible approach to the use of the classical correlation concept in Rayleigh scattering models. Classical correlation, in contrast to the three types of correlation corresponding to stochastic point flows, opens the door to an efficient explanation of the interaction between the periodic structure of incident radiation and the discrete stochastic structure of distributed scatterers typical of Rayleigh problems.

  11. Correlation length estimation in a polycrystalline material model

    International Nuclear Information System (INIS)

    Simonovski, I.; Cizelj, L.

    2005-01-01

    This paper deals with the correlation length estimated from a mesoscopic model of a polycrystalline material. The correlation length can be used in some macroscopic material models as a material parameter that describes the internal length. It can be estimated directly from the strain and stress fields calculated with a finite-element model that explicitly accounts for selected mesoscopic features such as the random orientation, shape and size of the grains. A crystal plasticity material model was applied in the finite-element analysis. Different correlation lengths were obtained depending on the set of crystallographic orientations used. We determined that the different sets of crystallographic orientations affect the general level of the correlation length; however, as the external load is increased, the behaviour of the correlation length is similar in all the analyzed cases. The correlation lengths also changed with the macroscopic load. If the load is below the yield strength, the correlation lengths are constant and slightly higher than the average grain size. The correlation length can therefore be considered an indicator of first plastic deformations in the material. Increasing the load above the yield strength creates shear bands that temporarily increase the values of the correlation lengths calculated from the strain fields. With a further load increase, the correlation lengths decrease slightly but stay above the average grain size. (author)

  12. Principles of correlation counting

    International Nuclear Information System (INIS)

    Mueller, J.W.

    1975-01-01

    A review is given of the various applications of correlation techniques in the field of nuclear physics, in particular for absolute counting. Whereas in most cases the usual coincidence method will be preferable for its simplicity, correlation counting may be the only possible approach in cases where the two radiations of the cascade cannot be well separated or where there is a long-lived intermediate state. The measurement of half-lives and of count rates of spurious pulses is also briefly discussed. The various experimental situations lead to different ways in which the correlation method is best applied (covariance technique with one or two detectors, application of correlation functions, etc.). Formulae are given for some simple model cases, neglecting dead-time corrections

  13. Dynamic Chest Image Analysis: Model-Based Perfusion Analysis in Dynamic Pulmonary Imaging

    Directory of Open Access Journals (Sweden)

    Kiuru Aaro

    2003-01-01

    Full Text Available The "Dynamic Chest Image Analysis" project aims to develop model-based computer analysis and visualization methods for showing focal and general abnormalities of lung ventilation and perfusion based on a sequence of digital chest fluoroscopy frames collected with the dynamic pulmonary imaging technique. We have proposed and evaluated a multiresolutional method with an explicit ventilation model for ventilation analysis. This paper presents a new model-based method for pulmonary perfusion analysis. According to perfusion properties, we first devise a novel mathematical function to form a perfusion model. A simple yet accurate approach is further introduced to extract cardiac systolic and diastolic phases from the heart, so that this cardiac information may be utilized to accelerate the perfusion analysis and improve its sensitivity in detecting pulmonary perfusion abnormalities. This makes perfusion analysis not only fast but also robust in computation; consequently, perfusion analysis becomes computationally feasible without using contrast media. Our clinical case studies with 52 patients show that this technique is effective for pulmonary embolism even without using contrast media, demonstrating consistent correlations with computed tomography (CT) and nuclear medicine (NM) studies. This fluoroscopic examination takes only about 2 seconds for a perfusion study, with only a low radiation dose to the patient, involving no preparation, no radioactive isotopes, and no contrast media.

  14. A network model of correlated growth of tissue stiffening in pulmonary fibrosis

    Science.gov (United States)

    Oliveira, Cláudio L. N.; Bates, Jason H. T.; Suki, Béla

    2014-06-01

    During the progression of pulmonary fibrosis, initially isolated regions of high stiffness form and grow in the lung tissue due to collagen deposition by fibroblast cells. We have previously shown that ongoing collagen deposition may not lead to significant increases in the bulk modulus of the lung until these local remodeled regions have become sufficiently numerous and extensive to percolate in a continuous path across the entire tissue (Bates et al 2007 Am. J. Respir. Crit. Care Med. 176 617). This model, however, did not include the possibility of spatially correlated deposition of collagen. In the present study, we investigate whether spatial correlations influence the bulk modulus in a two-dimensional elastic network model of lung tissue. Random collagen deposition at a single site is modeled by increasing the elastic constant of the spring at that site by a factor of 100. By contrast, correlated collagen deposition is represented by stiffening the springs encountered along a random walk starting from some initial spring, the rationale being that excess collagen deposition is more likely in the vicinity of an already stiff region. A combination of random and correlated deposition is modeled by performing random walks of length N from randomly selected initial sites, the balance between the two processes being determined by N. We found that the dependence of the bulk modulus, B(N,c), on both N and the fraction of stiff springs, c, can be described by a strikingly simple set of empirical equations. For c > 0.8, B(N,c) is linear in c and independent of N, such that B(N,c) = 100 B_0 - 100 a_III (1 - c) B_0, where a_III = 2.857. For small concentrations, the physiologically most relevant regime, the forces in the network springs are distributed according to a power law. When c = 0.3, the exponent of this power law increases from -4.5 when N = 1 and saturates at about -2 as N increases above 40. These results suggest that the spatial correlation of
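
    The stiffening rule and the quoted high-concentration regime can be sketched in a few lines; the factor of 100 and a_III = 2.857 come from the abstract, while the grid size, number of walks, and walk length are illustrative choices (and no elastic network is actually solved here):

```python
import random

def stiffen(grid_size, n_walks, walk_len, factor=100.0, seed=0):
    """Stiffen springs along random walks on a periodic square lattice.
    Each site holds one spring constant (initially 1.0); a walk raises
    the constant of every site it visits by `factor`, at most once."""
    rng = random.Random(seed)
    k = [[1.0] * grid_size for _ in range(grid_size)]
    stiff = set()
    for _ in range(n_walks):
        x, y = rng.randrange(grid_size), rng.randrange(grid_size)
        for _ in range(walk_len):
            if (x, y) not in stiff:
                stiff.add((x, y))
                k[x][y] *= factor
            dx, dy = rng.choice([(1, 0), (-1, 0), (0, 1), (0, -1)])
            x, y = (x + dx) % grid_size, (y + dy) % grid_size
    return k, len(stiff) / grid_size ** 2   # spring grid, stiff fraction c

def bulk_modulus_high_c(c, b0=1.0, a_iii=2.857):
    """Empirical high-concentration regime quoted in the abstract:
    B(N,c) = 100*B0 - 100*a_III*(1 - c)*B0, independent of N."""
    return 100.0 * b0 - 100.0 * a_iii * (1.0 - c) * b0

k, c = stiffen(grid_size=50, n_walks=100, walk_len=40)
print(round(c, 2), round(bulk_modulus_high_c(0.9), 2))
```

    Many short walks approximate random deposition (N = 1), while long walks from few seeds give correlated deposition at the same c.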

  15. Influence of pH on Drug Absorption from the Gastrointestinal Tract: A Simple Chemical Model

    Science.gov (United States)

    Hickman, Raymond J. S.; Neill, Jane

    1997-07-01

    A simple model of the gastrointestinal tract is obtained by placing ethyl acetate in contact with water at pH 2 and pH 8 in separate test tubes. The ethyl acetate corresponds to the lipid material lining the tract while the water corresponds to the aqueous contents of the stomach (pH 2) and intestine (pH 8). The compounds aspirin, paracetamol and 3-aminophenol are used as exemplars of acidic, neutral and basic drugs respectively to illustrate the influence which pH has on the distribution of each class of drug between the aqueous and organic phases of the model. The relative concentration of drug in the ethyl acetate is judged by applying microlitre-sized samples of ethyl acetate to a layer of fluorescent silica which, after evaporation of the ethyl acetate, is viewed under an ultraviolet lamp. Each of the three drugs, if present in the ethyl acetate, becomes visible as a dark spot on the silica layer. The observations made in the model system correspond well to the patterns of drug absorption from the gastrointestinal tract described in pharmacology texts and these observations are convincingly explained in terms of simple acid-base chemistry.
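
    The partitioning pattern observed in this experiment follows directly from the Henderson-Hasselbalch relation; a brief sketch, using approximate literature pKa values that are not taken from the article:

```python
def neutral_fraction(pka, ph, acid=True):
    """Fraction of drug in the neutral, lipid-soluble form from the
    Henderson-Hasselbalch relation: for an acid HA, ionized/neutral =
    10**(pH - pKa); for a base B, it is 10**(pKa(BH+) - pH)."""
    ratio = 10.0 ** ((ph - pka) if acid else (pka - ph))
    return 1.0 / (1.0 + ratio)

# Approximate pKa values, for illustration only:
print(round(neutral_fraction(3.5, 2.0), 3))              # aspirin at stomach pH -> mostly neutral
print(round(neutral_fraction(3.5, 8.0), 5))              # aspirin at intestinal pH -> ionized
print(round(neutral_fraction(4.4, 8.0, acid=False), 3))  # basic drug at intestinal pH -> neutral
```

    The neutral form is what partitions into the ethyl acetate layer, matching the spots seen on the silica.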

  16. Correlation of a hypoxia based tumor control model with observed local control rates in nasopharyngeal carcinoma treated with chemoradiotherapy

    Energy Technology Data Exchange (ETDEWEB)

    Avanzo, Michele; Stancanello, Joseph; Franchin, Giovanni; Sartor, Giovanna; Jena, Rajesh; Drigo, Annalisa; Dassie, Andrea; Gigante, Marco; Capra, Elvira [Department of Medical Physics, Centro di Riferimento Oncologico, Aviano 33081 (Italy); Research and Clinical Collaborations, Siemens Healthcare, Erlangen 91052 (Germany); Department of Radiation Oncology, Centro di Riferimento Oncologico, Aviano 33081 (Italy); Department of Medical Physics, Centro di Riferimento Oncologico, Aviano 33081 (Italy); Oncology Centre, Cambridge University Hospitals NHS Foundation Trust, Cambridge CB2 0QQ (United Kingdom); Department of Medical Physics, Centro di Riferimento Oncologico, Aviano 33081 (Italy); Department of Radiation Oncology, Centro di Riferimento Oncologico, Aviano 33081 (Italy); Department of Medical Physics, Centro di Riferimento Oncologico, Aviano 33081 (Italy)

    2010-04-15

    Purpose: To extend the application of current radiation therapy (RT) based tumor control probability (TCP) models of nasopharyngeal carcinoma (NPC) to include the effects of hypoxia and chemoradiotherapy (CRT). Methods: A TCP model is described based on the linear-quadratic model modified to account for repopulation, chemotherapy, heterogeneity of dose to the tumor, and hypoxia. Sensitivity analysis was performed to determine which parameters exert the greatest influence on the uncertainty of modeled TCP. On the basis of the sensitivity analysis, the values of specific radiobiological parameters were set to nominal values reported in the literature for NPC or head and neck tumors. The remaining radiobiological parameters were determined by fitting TCP to clinical local control data from published randomized studies using both RT and CRT. Validation of the model was performed by comparison of estimated TCP and average overall local control rate (LCR) for 45 patients treated at the institution with conventional linear-accelerator-based or helical tomotherapy based intensity-modulated RT and neoadjuvant chemotherapy. Results: Sensitivity analysis demonstrates that the model is most sensitive to the radiosensitivity term α and the dose per fraction. The estimated values of α and OER from data fitting were 0.396 Gy^-1 and 1.417. The model estimate of TCP (average 90.9%, range 26.9%-99.2%) showed good correlation with the LCR (86.7%). Conclusions: The model implemented in this work provides clinicians with a useful tool to predict the success rate of treatment, optimize treatment plans, and compare the effects of multimodality therapy.

  17. Correlation of a hypoxia based tumor control model with observed local control rates in nasopharyngeal carcinoma treated with chemoradiotherapy

    International Nuclear Information System (INIS)

    Avanzo, Michele; Stancanello, Joseph; Franchin, Giovanni; Sartor, Giovanna; Jena, Rajesh; Drigo, Annalisa; Dassie, Andrea; Gigante, Marco; Capra, Elvira

    2010-01-01

    Purpose: To extend the application of current radiation therapy (RT) based tumor control probability (TCP) models of nasopharyngeal carcinoma (NPC) to include the effects of hypoxia and chemoradiotherapy (CRT). Methods: A TCP model is described based on the linear-quadratic model modified to account for repopulation, chemotherapy, heterogeneity of dose to the tumor, and hypoxia. Sensitivity analysis was performed to determine which parameters exert the greatest influence on the uncertainty of modeled TCP. On the basis of the sensitivity analysis, the values of specific radiobiological parameters were set to nominal values reported in the literature for NPC or head and neck tumors. The remaining radiobiological parameters were determined by fitting TCP to clinical local control data from published randomized studies using both RT and CRT. Validation of the model was performed by comparison of estimated TCP and average overall local control rate (LCR) for 45 patients treated at the institution with conventional linear-accelerator-based or helical tomotherapy based intensity-modulated RT and neoadjuvant chemotherapy. Results: Sensitivity analysis demonstrates that the model is most sensitive to the radiosensitivity term α and the dose per fraction. The estimated values of α and OER from data fitting were 0.396 Gy^-1 and 1.417. The model estimate of TCP (average 90.9%, range 26.9%-99.2%) showed good correlation with the LCR (86.7%). Conclusions: The model implemented in this work provides clinicians with a useful tool to predict the success rate of treatment, optimize treatment plans, and compare the effects of multimodality therapy.
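
    A minimal sketch of such a model is a Poisson TCP built on linear-quadratic cell survival, with hypoxia entering as an OER-scaled effective dose. The α and OER values are the abstract's fitted estimates; the clonogen number, α/β ratio, and fractionation scheme are assumptions for illustration, and the repopulation and chemotherapy terms are omitted:

```python
import math

def surviving_fraction(d, n_fx, alpha, beta, oer=1.0):
    """LQ survival after n_fx fractions of dose d (Gy); hypoxic cells
    respond as if they received d / OER."""
    d_eff = d / oer
    return math.exp(-n_fx * (alpha * d_eff + beta * d_eff ** 2))

def tcp(n_clonogens, d, n_fx, alpha, beta, oer=1.0):
    """Poisson TCP: probability that no clonogenic cell survives."""
    return math.exp(-n_clonogens * surviving_fraction(d, n_fx, alpha, beta, oer))

# alpha and OER from the abstract's fit; 1e9 clonogens, alpha/beta =
# 10 Gy, and 35 x 2 Gy fractions are illustrative assumptions.
well_oxygenated = tcp(1e9, 2.0, 35, alpha=0.396, beta=0.0396, oer=1.0)
hypoxic = tcp(1e9, 2.0, 35, alpha=0.396, beta=0.0396, oer=1.417)
print(round(well_oxygenated, 3), round(hypoxic, 3))
```

    Even a modest OER markedly lowers the predicted control probability, which is why the hypoxia term matters in the fit.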

  18. Real-Time Model-Based Leak-Through Detection within Cryogenic Flow Systems

    Science.gov (United States)

    Walker, M.; Figueroa, F.

    2015-01-01

    The timely detection of leaks within cryogenic fuel replenishment systems is of significant importance to operators on account of the safety and economic impacts associated with material loss and operational inefficiencies. Associated loss in control of pressure also affects the stability and the ability to control the phase of cryogenic fluids during replenishment operations. Current research dedicated to providing Prognostics and Health Management (PHM) coverage of such cryogenic replenishment systems has focused on the detection of leaks to atmosphere involving relatively simple model-based diagnostic approaches that, while effective, are unable to isolate the fault to specific piping system components. The authors have extended this research to focus on the detection of leaks through closed valves that are intended to isolate sections of the piping system from the flow and pressurization of cryogenic fluids. The described approach employs model-based detection of leak-through conditions based on correlations of pressure changes across isolation valves and attempts to isolate the faults to specific valves. Implementation of this capability is enabled by knowledge and information embedded in the domain model of the system. The approach has been used effectively to detect such leak-through faults during cryogenic operational testing at the Cryogenic Testbed at NASA's Kennedy Space Center.
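
    The core detection idea, correlating pressure changes on the two sides of a commanded-closed valve, can be sketched with made-up data (the actual system is model-based and embeds domain knowledge well beyond this):

```python
def pearson(xs, ys):
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(xs, ys))
    vx = sum((a - mx) ** 2 for a in xs)
    vy = sum((b - my) ** 2 for b in ys)
    return cov / (vx * vy) ** 0.5

def leaking(upstream_dp, downstream_dp, threshold=0.8):
    """Flag leak-through when pressure changes downstream of a
    commanded-closed valve track the upstream changes."""
    return pearson(upstream_dp, downstream_dp) > threshold

up = [1.0, 0.8, 1.2, 0.9, 1.1, 0.7]          # upstream dP per interval
tight = [0.0, 0.01, -0.01, 0.0, 0.01, 0.0]   # isolated side: noise only
leaky = [0.5, 0.42, 0.58, 0.46, 0.55, 0.36]  # tracks upstream changes
print(leaking(up, tight), leaking(up, leaky))   # → False True
```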

  19. A simple numerical model to estimate the effect of coal selection on pulverized fuel burnout

    Energy Technology Data Exchange (ETDEWEB)

    Sun, J.K.; Hurt, R.H.; Niksa, S.; Muzio, L.; Mehta, A.; Stallings, J. [Brown University, Providence, RI (USA). Division Engineering

    2003-06-01

    The amount of unburned carbon in ash is an important performance characteristic in commercial boilers fired with pulverized coal. Unburned carbon levels are known to be sensitive to fuel selection, and there is great interest in methods of estimating the burnout propensity of coals based on proximate and ultimate analysis - the only fuel properties readily available to utility practitioners. A simple numerical model is described that is specifically designed to estimate the effects of coal selection on burnout in a way that is useful for commercial coal screening. The model is based on a highly idealized description of the combustion chamber but employs detailed descriptions of the fundamental fuel transformations. The model is validated against data from laboratory and pilot-scale combustors burning a range of international coals, and then against data obtained from full-scale units during periods of coal switching. The validated model form is then used in a series of sensitivity studies to explore the role of various individual fuel properties that influence burnout.

  20. Adaptive Correlation Model for Visual Tracking Using Keypoints Matching and Deep Convolutional Feature

    Directory of Open Access Journals (Sweden)

    Yuankun Li

    2018-02-01

    Full Text Available Although correlation filter (CF)-based visual tracking algorithms have achieved appealing results, there are still some problems to be solved. When the target object goes through long-term occlusions or scale variation, the correlation model used in existing CF-based algorithms will inevitably learn some non-target information or partial-target information. In order to avoid model contamination and enhance the adaptability of model updating, we introduce the keypoints matching strategy and adjust the model learning rate dynamically according to the matching score. Moreover, the proposed approach extracts convolutional features from a deep convolutional neural network (DCNN) to accurately estimate the position and scale of the target. Experimental results demonstrate that the proposed tracker has achieved satisfactory performance in a wide range of challenging tracking scenarios.
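
    The adaptive-updating idea amounts to scaling the model learning rate by the keypoint-matching score; a minimal sketch with a plain vector standing in for the correlation filter (names and rates are illustrative):

```python
def adaptive_update(model, observation, match_score, base_rate=0.02):
    """Blend a new observation into the correlation model with a
    learning rate scaled by the keypoint-matching score in [0, 1]:
    a low score (occlusion, drift) nearly freezes the model."""
    eta = base_rate * match_score
    return [(1 - eta) * m + eta * o for m, o in zip(model, observation)]

model = [0.5, 0.5, 0.5]                # stand-in for filter weights
occluded = adaptive_update(model, [9.0, 9.0, 9.0], match_score=0.05)
steady = adaptive_update(model, [0.6, 0.6, 0.6], match_score=1.0)
print(round(occluded[0], 4), round(steady[0], 4))   # → 0.5085 0.502
```

    With a low matching score the contaminated observation barely moves the model, which is exactly the protection against occlusion the abstract describes.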

  1. An Emotional Agent Model Based on Granular Computing

    Directory of Open Access Journals (Sweden)

    Jun Hu

    2012-01-01

    Full Text Available Affective computing is of great significance for fulfilling intelligent information processing and harmonious communication between human beings and computers. A new model for an emotional agent is proposed in this paper, based on granular computing theory and the traditional BDI agent model, to give the agent the ability to handle emotions. Firstly, a new emotion knowledge base based on granular computing for emotion expression is presented in the model. Secondly, a new emotional reasoning algorithm based on granular computing is proposed. Thirdly, a new emotional agent model based on granular computing is presented. Finally, based on the model, an emotional agent for patient assistance in hospitals is realized; experimental results show that it handles simple emotions efficiently.

  2. THE ABUNDANCE OF MOLECULAR HYDROGEN AND ITS CORRELATION WITH MIDPLANE PRESSURE IN GALAXIES: NON-EQUILIBRIUM, TURBULENT, CHEMICAL MODELS

    International Nuclear Information System (INIS)

    Mac Low, Mordecai-Mark; Glover, Simon C. O.

    2012-01-01

    Observations of spiral galaxies show a strong linear correlation between the ratio of molecular to atomic hydrogen surface density R_mol and midplane pressure. To explain this, we simulate three-dimensional, magnetized turbulence, including simplified treatments of non-equilibrium chemistry and the propagation of dissociating radiation, to follow the formation of H2 from cold atomic gas. The formation timescale for H2 is sufficiently long that equilibrium is not reached within the 20-30 Myr lifetimes of molecular clouds. The equilibrium balance between radiative dissociation and H2 formation on dust grains fails to predict the time-dependent molecular fractions we find. A simple, time-dependent model of H2 formation can reproduce the gross behavior, although turbulent density perturbations increase molecular fractions by a factor of a few above it. In contradiction to equilibrium models, radiative dissociation of molecules plays little role in our model for diffuse radiation fields with strengths less than 10 times that of the solar neighborhood, because of the effective self-shielding of H2. The observed correlation of R_mol with pressure corresponds to a correlation with local gas density if the effective temperature in the cold neutral medium of galactic disks is roughly constant. We indeed find such a correlation of R_mol with density. If we examine the value of R_mol in our local models after a free-fall time at their average density, as expected for models of molecular cloud formation by large-scale gravitational instability, our models reproduce the observed correlation over more than an order-of-magnitude range in density.

  3. The Abundance of Molecular Hydrogen and Its Correlation with Midplane Pressure in Galaxies: Non-equilibrium, Turbulent, Chemical Models

    Science.gov (United States)

    Mac Low, Mordecai-Mark; Glover, Simon C. O.

    2012-02-01

    Observations of spiral galaxies show a strong linear correlation between the ratio of molecular to atomic hydrogen surface density R_mol and midplane pressure. To explain this, we simulate three-dimensional, magnetized turbulence, including simplified treatments of non-equilibrium chemistry and the propagation of dissociating radiation, to follow the formation of H2 from cold atomic gas. The formation timescale for H2 is sufficiently long that equilibrium is not reached within the 20-30 Myr lifetimes of molecular clouds. The equilibrium balance between radiative dissociation and H2 formation on dust grains fails to predict the time-dependent molecular fractions we find. A simple, time-dependent model of H2 formation can reproduce the gross behavior, although turbulent density perturbations increase molecular fractions by a factor of a few above it. In contradiction to equilibrium models, radiative dissociation of molecules plays little role in our model for diffuse radiation fields with strengths less than 10 times that of the solar neighborhood, because of the effective self-shielding of H2. The observed correlation of R_mol with pressure corresponds to a correlation with local gas density if the effective temperature in the cold neutral medium of galactic disks is roughly constant. We indeed find such a correlation of R_mol with density. If we examine the value of R_mol in our local models after a free-fall time at their average density, as expected for models of molecular cloud formation by large-scale gravitational instability, our models reproduce the observed correlation over more than an order-of-magnitude range in density.
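
    A simple time-dependent model of the kind described has a closed form when dissociation is neglected; the rate coefficient and densities below are typical interstellar-medium values assumed for illustration, not taken from the simulations:

```python
import math

MYR_S = 3.156e13   # seconds per megayear

def h2_fraction(t_myr, n_cm3, rate=3e-17):
    """f(t) = 1 - exp(-t/tau), with tau = 1/(2*R*n): grain-surface H2
    formation only, dissociation neglected (self-shielding makes it a
    minor effect for the radiation fields considered in the abstract)."""
    tau_s = 1.0 / (2.0 * rate * n_cm3)
    return 1.0 - math.exp(-t_myr * MYR_S / tau_s)

# Within a 25 Myr cloud lifetime, dense gas converts to H2 but diffuse
# gas remains far from its (fully molecular) equilibrium value:
print(round(h2_fraction(25, 100), 2), round(h2_fraction(25, 10), 2))   # → 0.99 0.38
```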

  4. Development of a rapid, simple assay of plasma total carotenoids

    Science.gov (United States)

    2012-01-01

    Background Plasma total carotenoids can be used as an indicator of risk of chronic disease. Laboratory analysis of individual carotenoids by high performance liquid chromatography (HPLC) is time consuming, expensive, and not amenable to use beyond a research laboratory. The aim of this research is to establish a rapid, simple, and inexpensive spectrophotometric assay of plasma total carotenoids that has a very strong correlation with HPLC carotenoid profile analysis. Results Plasma total carotenoids from 29 volunteers ranged in concentration from 1.2 to 7.4 μM, as analyzed by HPLC. A linear correlation was found between the absorbance at 448 nm of an alcohol / heptane extract of the plasma and plasma total carotenoids analyzed by HPLC, with a Pearson correlation coefficient of 0.989. The average coefficient of variation for the spectrophotometric assay was 6.5% for the plasma samples. The limit of detection was about 0.3 μM and was linear up to about 34 μM without dilution. Correlations between the integrals of the absorption spectra in the range of carotenoid absorption and total plasma carotenoid concentration gave similar results to the absorbance correlation. Spectrophotometric assay results also agreed with the calculated expected absorbance based on published extinction coefficients for the individual carotenoids, with a Pearson correlation coefficient of 0.988. Conclusion The spectrophotometric assay of total carotenoids strongly correlated with HPLC analysis of carotenoids of the same plasma samples and expected absorbance values based on extinction coefficients. This rapid, simple, inexpensive assay, when coupled with the carotenoid health index, may be useful for nutrition intervention studies, population cohort studies, and public health interventions. PMID:23006902
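
    The assay reduces to a one-variable calibration plus a correlation check; a sketch with invented absorbance/HPLC pairs, not the study's data:

```python
def pearson(xs, ys):
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(xs, ys))
    vx = sum((a - mx) ** 2 for a in xs)
    vy = sum((b - my) ** 2 for b in ys)
    return cov / (vx * vy) ** 0.5

def calibrate(abs448, conc):
    """Least-squares line conc ~ slope * A448 + intercept, so a single
    absorbance reading at 448 nm predicts total carotenoids."""
    n = len(abs448)
    mx, my = sum(abs448) / n, sum(conc) / n
    sxx = sum((x - mx) ** 2 for x in abs448)
    sxy = sum((x - mx) * (y - my) for x, y in zip(abs448, conc))
    slope = sxy / sxx
    return slope, my - slope * mx

a448 = [0.061, 0.124, 0.188, 0.256, 0.371]   # invented extract readings
hplc = [1.2, 2.5, 3.8, 5.1, 7.4]             # matching HPLC totals, uM
slope, icpt = calibrate(a448, hplc)
print(round(pearson(a448, hplc), 3), round(slope * 0.20 + icpt, 1))
```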

  5. A simple parametric model observer for quality assurance in computer tomography

    Science.gov (United States)

    Anton, M.; Khanin, A.; Kretz, T.; Reginatto, M.; Elster, C.

    2018-04-01

    Model observers are mathematical classifiers that are used for the quality assessment of imaging systems such as computer tomography. The quality of the imaging system is quantified by means of the performance of a selected model observer. For binary classification tasks, the performance of the model observer is defined by the area under its ROC curve (AUC). Typically, the AUC is estimated by applying the model observer to a large set of training and test data. However, the recording of these large data sets is not always practical for routine quality assurance. In this paper we propose as an alternative a parametric model observer that is based on a simple phantom, and we provide a Bayesian estimation of its AUC. It is shown that a limited number of repeatedly recorded images (10–15) is already sufficient to obtain results suitable for the quality assessment of an imaging system. A MATLAB® function is provided for the calculation of the results. The performance of the proposed model observer is compared to that of the established channelized Hotelling observer and the nonprewhitening matched filter for simulated images as well as for images obtained from a low-contrast phantom on an x-ray tomography scanner. The results suggest that the proposed parametric model observer, along with its Bayesian treatment, can provide an efficient, practical alternative for the quality assessment of CT imaging systems.
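
    The AUC figure of merit can be estimated nonparametrically from observer scores via the Mann-Whitney statistic; the scores below are made up, and the paper's Bayesian treatment goes beyond this point estimate:

```python
def auc(signal_scores, noise_scores):
    """Empirical AUC via the Mann-Whitney statistic: the probability
    that a signal-present score beats a signal-absent one (ties = 1/2)."""
    wins = 0.0
    for s in signal_scores:
        for n in noise_scores:
            wins += 1.0 if s > n else (0.5 if s == n else 0.0)
    return wins / (len(signal_scores) * len(noise_scores))

present = [2.1, 1.4, 2.8, 1.9, 2.5]   # invented observer scores, signal present
absent = [1.0, 1.6, 0.7, 1.2, 1.5]    # signal absent
print(auc(present, absent))   # → 0.92
```

    With only 10-15 images per class, as in the paper's setting, such point estimates are noisy, which motivates the Bayesian treatment.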

  6. Model of geophysical fields representation in problems of complex correlation-extreme navigation

    Directory of Open Access Journals (Sweden)

    Volodymyr KHARCHENKO

    2015-09-01

    Full Text Available A model of the optimal representation of spatial data for the task of complex correlation-extreme navigation is developed based on the criterion of minimum deviation of the correlation functions of the original and the resulting fields. Calculations are presented for the one-dimensional case using an approximation of the correlation function by a Fourier series. It is shown that, when several different geophysical map data fields are present, they can be represented by a single template with optimal sampling, without distorting the form of the correlation functions.
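
    The one-dimensional calculation can be mimicked by expanding a model correlation function in a truncated cosine (Fourier) series and checking that the deviation shrinks as terms are added; the exponential correlation function here is an assumed example, not the paper's field data:

```python
import math

def corr(tau, L=1.0):
    """Assumed field autocorrelation (exponential, even in tau)."""
    return math.exp(-abs(tau) / L)

def cosine_coeffs(f, n_terms, T=2.0, n_grid=400):
    """Coefficients a_k of f(t) ~ a_0/2 + sum_k a_k cos(pi k t / T)
    on [0, T], computed with the trapezoid rule."""
    h = T / n_grid
    coeffs = []
    for k in range(n_terms):
        s = sum((0.5 if i in (0, n_grid) else 1.0)
                * f(i * h) * math.cos(math.pi * k * i * h / T)
                for i in range(n_grid + 1))
        coeffs.append(2.0 * s * h / T)
    return coeffs

def reconstruct(coeffs, t, T=2.0):
    return coeffs[0] / 2.0 + sum(
        a * math.cos(math.pi * k * t / T)
        for k, a in enumerate(coeffs) if k > 0)

def max_deviation(n_terms, T=2.0):
    c = cosine_coeffs(corr, n_terms, T)
    return max(abs(corr(t) - reconstruct(c, t, T))
               for t in [i * T / 100 for i in range(101)])

print(max_deviation(3) > max_deviation(12))   # more terms, less distortion
```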

  7. Fabricating Simple Wax Screen-Printing Paper-Based Analytical Devices to Demonstrate the Concept of Limiting Reagent in Acid- Base Reactions

    Science.gov (United States)

    Namwong, Pithakpong; Jarujamrus, Purim; Amatatongchai, Maliwan; Chairam, Sanoe

    2018-01-01

    In this article, a low-cost, simple, and rapid fabrication of paper-based analytical devices (PADs) using a wax screen-printing method is reported here. The acid-base reaction is implemented in the simple PADs to demonstrate to students the chemistry concept of a limiting reagent. When a fixed concentration of base reacts with a gradually…

  8. Noise correlations in cosmic microwave background experiments

    Science.gov (United States)

    Dodelson, Scott; Kosowsky, Arthur; Myers, Steven T.

    1995-01-01

    Many analyses of microwave background experiments neglect the correlation of noise in different frequency or polarization channels. We show that these correlations, should they be present, can lead to severe misinterpretation of an experiment. In particular, correlated noise arising from either the electronics or the atmosphere may mimic a cosmic signal. We quantify how the likelihood function for a given experiment varies with noise correlation, using both simple analytic models and actual data. For a typical microwave background anisotropy experiment, noise correlations at the level of 1% of the overall noise can seriously reduce the significance of a given detection.
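
    The sensitivity to small correlations is visible in the textbook variance formula for averaging n equally noisy channels with uniform pairwise correlation rho (a back-of-the-envelope stand-in for the paper's likelihood analysis):

```python
def stacked_noise_sigma(sigma, n_channels, rho):
    """Noise level of the mean of n channels with equal variance
    sigma^2 and uniform pairwise correlation rho; for rho = 0 this is
    the familiar sigma / sqrt(n)."""
    var = (sigma ** 2 / n_channels) * (1.0 + (n_channels - 1) * rho)
    return var ** 0.5

independent = stacked_noise_sigma(1.0, 100, rho=0.0)
correlated = stacked_noise_sigma(1.0, 100, rho=0.01)
print(round(independent, 3), round(correlated, 3))   # → 0.1 0.141
```

    A 1% correlation leaves a noise floor that no amount of averaging removes, which is how correlated noise can masquerade as a cosmic signal.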

  9. Human plasma concentrations of tolbutamide and acetaminophen extrapolated from in vivo animal pharmacokinetics using in vitro human hepatic clearances and simple physiologically based pharmacokinetic modeling for radio-labeled microdose clinical studies

    International Nuclear Information System (INIS)

    Yamazaki, Hiroshi; Kunikane, Eriko; Nishiyama, Sayako; Murayama, Norie; Shimizu, Makiko; Sugiyama, Yuichi; Chiba, Koji; Ikeda, Toshihiko

    2015-01-01

    The aim of the current study was to extrapolate the pharmacokinetics of drug substances orally administered in humans from rat pharmacokinetic data using tolbutamide and acetaminophen as model compounds. Adjusted animal biomonitoring equivalents from rat studies based on reported plasma concentrations were scaled to human biomonitoring equivalents using known species allometric scaling factors. In this extrapolation, in vitro metabolic clearance data were obtained using liver preparations. Rates of tolbutamide elimination were roughly similar in rat and human liver microsome experiments, but acetaminophen elimination by rat liver microsomes and cytosolic preparations tended to be faster than in humans. Using a simple physiologically based pharmacokinetic (PBPK) model, estimated human plasma concentrations of tolbutamide and acetaminophen were consistent with reported concentrations. Tolbutamide cleared in a roughly similar manner in humans and rats, but medical-dose levels of acetaminophen cleared (dependent on liver metabolism) more slowly from plasma in humans than in rats. The data presented here illustrate how pharmacokinetic data in combination with a simple PBPK model can be used to assist evaluations of the pharmacological/toxicological potential of new drug substances and for estimating human radiation exposures from radio-labeled drugs when planning human studies. (author)
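
    A one-compartment oral-absorption model (the Bateman equation) is the minimal stand-in for this kind of PBPK extrapolation; every parameter value below is hypothetical, chosen only to show a species difference in the elimination rate:

```python
import math

def plasma_conc(t_h, dose_mg, ka, ke, vd_l, f=1.0):
    """One-compartment oral model (Bateman equation): first-order
    absorption ka and elimination ke (1/h), distribution volume vd_l
    (L), bioavailability f. Requires ka != ke."""
    return (f * dose_mg * ka / (vd_l * (ka - ke))) * (
        math.exp(-ke * t_h) - math.exp(-ka * t_h))

# All values hypothetical; the human ke is assumed slower, as the
# abstract reports for acetaminophen-like liver-metabolized clearance.
c_rat = plasma_conc(2.0, dose_mg=10.0, ka=1.5, ke=0.9, vd_l=0.25)
c_human = plasma_conc(2.0, dose_mg=500.0, ka=1.5, ke=0.3, vd_l=40.0)
print(round(c_rat, 2), round(c_human, 2))   # mg/L at t = 2 h
```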

  10. Two-Way Gene Interaction From Microarray Data Based on Correlation Methods.

    Science.gov (United States)

    Alavi Majd, Hamid; Talebi, Atefeh; Gilany, Kambiz; Khayyer, Nasibeh

    2016-06-01

    Gene networks have generated a massive explosion in the development of high-throughput techniques for monitoring various aspects of gene activity. Networks offer a natural way to model interactions between genes, and extracting gene network information from high-throughput genomic data is an important and difficult task. The purpose of this study is to construct a two-way gene network based on parametric and nonparametric correlation coefficients. The first step in constructing a gene co-expression network is to score all pairs of gene vectors. The second step is to select a score threshold and connect all gene pairs whose scores exceed this value. In this foundation-application study, we constructed two-way gene networks using nonparametric methods, such as Spearman's rank correlation coefficient and Blomqvist's measure, and compared them with Pearson's correlation coefficient. We surveyed six genes associated with venous thrombosis, constructed a matrix whose entries represent the score for the corresponding gene pair, and obtained two-way interactions using Pearson's correlation, Spearman's rank correlation, and Blomqvist's coefficient. Finally, these methods were compared with visual methods: Cytoscape, based on BIND, and Gene Ontology, based on molecular function; R software version 3.2 and Bioconductor were used to perform these methods. Based on the Pearson and Spearman correlations, the results were the same and were confirmed by the Cytoscape and GO visual methods; however, Blomqvist's coefficient was not confirmed by the visual methods. Some of the correlation-coefficient results do not agree with the visualization, possibly because of the small number of observations.
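
    The two-step construction (score all gene pairs, then threshold) fits in a few lines; the gene labels and expression profiles are toy values, and tied ranks are not handled:

```python
def pearson(xs, ys):
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(xs, ys))
    vx = sum((a - mx) ** 2 for a in xs)
    vy = sum((b - my) ** 2 for b in ys)
    return cov / (vx * vy) ** 0.5

def spearman(xs, ys):
    """Spearman's rho = Pearson on ranks (no tie handling here)."""
    def ranks(v):
        order = sorted(range(len(v)), key=lambda i: v[i])
        r = [0.0] * len(v)
        for rank, i in enumerate(order):
            r[i] = float(rank)
        return r
    return pearson(ranks(xs), ranks(ys))

def network(expr, score, threshold=0.8):
    """Step 1: score all gene pairs; step 2: keep pairs above threshold."""
    genes = sorted(expr)
    return {(g, h) for i, g in enumerate(genes) for h in genes[i + 1:]
            if abs(score(expr[g], expr[h])) >= threshold}

expr = {                                  # toy expression profiles
    "F5": [1.0, 2.0, 3.0, 4.0, 5.0],
    "F2": [2.1, 3.9, 6.2, 8.0, 9.9],      # co-expressed with F5
    "MTHFR": [5.0, 1.0, 4.0, 2.0, 3.0],   # unrelated
}
print(sorted(network(expr, spearman)))    # → [('F2', 'F5')]
```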

  11. Development of a Base Model for the New Fire PSA Training

    International Nuclear Information System (INIS)

    Kim, Kilyoo; Kang, Daeil; Kim, Wee Kyong; Do, Kyu Sik

    2013-01-01

    US NRC/EPRI issued a new fire PSA method, represented by NUREG/CR-6850, and have been training many operators and inspectors to spread the new method widely. However, time and efficiency constraints limit the ability of many non-US practitioners, who often face language barriers, to participate in the EPRI/NRC training to learn the new method. Since it is about time to introduce the new fire PSA method as a regulatory requirement for fire protection in Korea, a simple and easily understandable base model for fire PSA training is required, and KAERI-KINS is jointly preparing the base model for the new fire PSA training. This paper describes how the base model was developed. Using an imaginary simple NPP, a base model of fire PSA following the new fire PSA method was developed in two ways from the internal PSA model. Since we have the base model and know the process of building the fire PSA model, training for the new fire PSA method can be performed in detail in Korea.

  12. Abnormal Event Detection in Wireless Sensor Networks Based on Multiattribute Correlation

    Directory of Open Access Journals (Sweden)

    Mengdi Wang

    2017-01-01

    Full Text Available Abnormal event detection is one of the vital tasks in wireless sensor networks. However, node faults and poor deployment environments pose great challenges to abnormal event detection. In a typical event detection technique, spatiotemporal correlations are collected to detect an event, which is susceptible to noise and errors. To improve the quality of detection results, we propose a novel approach for abnormal event detection in wireless sensor networks. This approach considers not only spatiotemporal correlations but also the correlations among observed attributes. A dependency model of observed attributes is constructed based on a Bayesian network. In this model, the dependency structure of the observed attributes is obtained by structure learning, and the conditional probability table of each node is calculated by parameter learning. We propose a new concept, named attribute correlation confidence, to evaluate the fitting degree between a sensor reading and the abnormal event pattern. On the basis of time correlation detection and space correlation detection, abnormal events are identified. Experimental results show that the proposed algorithm can effectively reduce the impact of interference factors and the false-alarm rate; it can also improve the accuracy of event detection.
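
    The attribute-correlation idea can be sketched as scoring a reading against a learned conditional probability table from the Bayesian network; the table below is invented for illustration:

```python
def attribute_confidence(cpt, parent_value, reading):
    """Fitting degree of a discretized sensor reading given its parent
    attribute's value, read off a learned conditional probability table."""
    return cpt[parent_value].get(reading, 0.0)

# Invented CPT: humidity level conditioned on temperature level.
cpt = {
    "temp_high": {"hum_low": 0.7, "hum_mid": 0.25, "hum_high": 0.05},
    "temp_low": {"hum_low": 0.1, "hum_mid": 0.3, "hum_high": 0.6},
}
normal = attribute_confidence(cpt, "temp_high", "hum_low")
suspect = attribute_confidence(cpt, "temp_high", "hum_high")
print(normal > 0.5 > suspect)   # low confidence flags a candidate event
```

    In the full approach such scores are combined with temporal and spatial correlation checks before an event is declared.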

  13. Pythagoras' celestial spheres in the context of a simple model for quantization of planetary orbits

    Energy Technology Data Exchange (ETDEWEB)

    Oliveira Neto, Marcal de [Instituto de Quimica, Universidade de Brasilia, Campus Universitario, Asa Norte, 70904-970 Brasilia, DF (Brazil)]. E-mail: marcal@unb.br

    2006-10-15

    In the present article we attempt to search for a correlation between Pythagoras' and Kepler's ideas on the harmony of the celestial spheres through a simple quantization procedure that describes planetary orbits in our solar system. It is reasoned that, starting from a Bohr-like atomic model, planetary mean radii and periods of revolution can be obtained from a set of small integers and just one input parameter, the mean planetary radius of Mercury. It is also shown that the mean planetary distances can be calculated with the help of a Schroedinger-type equation, considering the flatness of the solar system. An attempt to obtain planetary radii using both gravitational and electrostatic approaches, linked by Newton's dimensionless constant of gravity, is presented.
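
    A Bohr-like scaling with Mercury as the single input can be sketched as r_n = r_1 * n^2; the quantum-number assignments here are illustrative guesses, not necessarily the article's:

```python
def mean_radius(n, r1_au=0.387):
    """Bohr-like quantization r_n = r1 * n**2, with Mercury's mean
    orbital radius (in AU) as the single input parameter."""
    return r1_au * n * n

radii = {n: round(mean_radius(n), 2) for n in range(1, 5)}
print(radii)   # n = 2 lands near Mars (1.52 AU), n = 3 near the asteroid belt
```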

  14. ARIMA-Based Time Series Model of Stochastic Wind Power Generation

    DEFF Research Database (Denmark)

    Chen, Peiyuan; Pedersen, Troels; Bak-Jensen, Birgitte

    2010-01-01

    This paper proposes a stochastic wind power model based on an autoregressive integrated moving average (ARIMA) process. The model takes into account the nonstationarity and physical limits of stochastic wind power generation. The model is constructed based on one year of wind power measurements from the Nysted offshore wind farm in Denmark. The proposed limited-ARIMA (LARIMA) model introduces a limiter and characterizes the stochastic wind power generation by mean level, temporal correlation and driving noise. The model is validated against the measurement in terms of temporal correlation and probability distribution. The LARIMA model outperforms a first-order transition matrix based discrete Markov model in terms of temporal correlation, probability distribution and model parameter number. The proposed LARIMA model is further extended to include the monthly variation of the stochastic wind power...
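
    The limiter idea can be sketched with a hard-limited AR(1) process, a simplification of the LARIMA model (parameters are illustrative, and the integration and seasonal parts are omitted):

```python
import random

def larima_series(n, phi=0.9, mean=0.5, sigma=0.05, cap=1.0, seed=1):
    """Hard-limited AR(1): x' = mean + phi*(x - mean) + noise, then
    clipped to [0, cap], mimicking zero output and rated power."""
    rng = random.Random(seed)
    x, out = mean, []
    for _ in range(n):
        x = mean + phi * (x - mean) + rng.gauss(0.0, sigma)
        x = min(max(x, 0.0), cap)   # the limiter
        out.append(x)
    return out

series = larima_series(1000)
print(all(0.0 <= v <= 1.0 for v in series))   # physical limits respected
```

    The phi parameter carries the temporal correlation, the noise sigma the driving noise, and the clipping enforces the physical limits the abstract emphasizes.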

  15. Simple area-based measurement for multidetector computed tomography to predict left ventricular size

    International Nuclear Information System (INIS)

    Schlett, Christopher L.; Kwait, Dylan C.; Mahabadi, Amir A.; Hoffmann, Udo; Bamberg, Fabian; O'Donnell, Christopher J.; Fox, Caroline S.

    2010-01-01

    Measures of left ventricular (LV) mass and dimensions are independent predictors of morbidity and mortality. We determined whether an axial area-based method by computed tomography (CT) provides an accurate estimate of LV mass and volume. A total of 45 subjects (49% female, 56.0 ± 12 years) with a wide range of LV geometry underwent contrast-enhanced 64-slice CT. LV mass and volume were derived from 3D data. 2D images were analysed to determine LV area, the direct transverse cardiac diameter (dTCD) and the cardiothoracic ratio (CTR). Furthermore, feasibility was confirmed in 100 Framingham Offspring Cohort subjects. 2D measures of LV area, dTCD and CTR were 47.3 ± 8 cm², 14.7 ± 1.5 cm and 0.54 ± 0.05, respectively. 3D-derived LV volume (end-diastolic) and mass were 148.9 ± 45 cm³ and 124.2 ± 34 g, respectively. Excellent inter- and intra-observer agreement were shown for 2D LV area measurements (both intraclass correlation coefficients (ICC) = 0.99, p 0.27). Compared with the traditionally used CTR, LV size can be accurately predicted from a simple and highly reproducible axial LV area-based measurement. (orig.)

  16. A D-vine copula-based model for repeated measurements extending linear mixed models with homogeneous correlation structure.

    Science.gov (United States)

    Killiches, Matthias; Czado, Claudia

    2018-03-22

    We propose a model for unbalanced longitudinal data, where the univariate margins can be selected arbitrarily and the dependence structure is described with the help of a D-vine copula. We show that our approach is an extremely flexible extension of the widely used linear mixed model if the correlation is homogeneous over the considered individuals. As an alternative to joint maximum-likelihood estimation, a sequential estimation approach for the D-vine copula is provided and validated in a simulation study. The model can handle missing values without being forced to discard data. Since conditional distributions are known analytically, we can easily make predictions for future events. For model selection, we adjust the Bayesian information criterion to our situation. In an application to heart surgery data our model performs clearly better than competing linear mixed models. © 2018, The International Biometric Society.

  17. Response of Simple, Model Systems to Extreme Conditions

    Energy Technology Data Exchange (ETDEWEB)

    Ewing, Rodney C. [Univ. of Michigan, Ann Arbor, MI (United States); Lang, Maik [Univ. of Michigan, Ann Arbor, MI (United States)

    2015-07-30

    The focus of the research was on the application of high-pressure/high-temperature techniques, together with intense energetic ion beams, to the study of the behavior of simple oxide systems (e.g., SiO2, GeO2, CeO2, TiO2, HfO2, SnO2, ZnO and ZrO2) under extreme conditions. These simple stoichiometries provide unique model systems for the analysis of structural responses to pressure up to and above 1 Mbar, temperatures of up to several thousands of kelvin, and the extreme energy density generated by energetic heavy ions (tens of keV/atom). The investigations included systematic studies of radiation- and pressure-induced amorphization of high P-T polymorphs. By studying the response of simple stoichiometries that have multiple structural “outcomes”, we have established the basic knowledge required for the prediction of the response of more complex structures to extreme conditions. We especially focused on the amorphous state and characterized the different non-crystalline structure-types that result from the interplay of radiation and pressure. For such experiments, we made use of recent technological developments, such as the perforated diamond-anvil cell and in situ investigation using synchrotron x-ray sources. We have been particularly interested in using extreme pressures to alter the electronic structure of a solid prior to irradiation. We expected that the effects of modified band structure would be evident in the track structure and morphology, information which is much needed to describe theoretically the fundamental physics of track-formation. Finally, we investigated the behavior of different simple-oxide, composite nanomaterials (e.g., uncoated nanoparticles vs. core/shell systems) under coupled, extreme conditions. This provided insight into surface and boundary effects on phase stability under extreme conditions.

  18. Liquid-liquid critical point in a simple analytical model of water

    Science.gov (United States)

    Urbic, Tomaz

    2016-10-01

    A statistical model for a simple three-dimensional Mercedes-Benz model of water was used to study phase diagrams. This model describes, at a simple level, the thermal and volumetric properties of waterlike molecules. A molecule is represented as a soft sphere with four directions in which hydrogen bonds can be formed. Two neighboring waters can interact through a van der Waals interaction or an orientation-dependent hydrogen-bonding interaction. For pure water, we explored properties such as molar volume, density, heat capacity, thermal expansion coefficient, and isothermal compressibility and found that the volumetric and thermal properties follow the same trends with temperature as in real water and are in good general agreement with Monte Carlo simulations. The model also exhibits two critical points, one for the liquid-gas transition and one for the transition between low-density and high-density fluid. Coexistence curves and a Widom line for the maximum and minimum in the thermal expansion coefficient divide the phase space of the model into three parts: a gas region, a high-density liquid, and a low-density liquid.

  19. Correlations in state space can cause sub-optimal adaptation of optimal feedback control models.

    Science.gov (United States)

    Aprasoff, Jonathan; Donchin, Opher

    2012-04-01

    Control of our movements is apparently facilitated by an adaptive internal model in the cerebellum. It was long thought that this internal model implemented an adaptive inverse model and generated motor commands, but recently many have rejected that idea in favor of a forward model hypothesis. In theory, the forward model predicts the upcoming state during reaching movements so the motor cortex can generate appropriate motor commands. Recent computational models of this process rely on the optimal feedback control (OFC) framework of control theory. Although OFC is a powerful tool for describing motor control, it does not describe adaptation. Some assume that adaptation of the forward model alone could explain motor adaptation, but this is widely understood to be overly simplistic. However, an adaptive optimal controller is difficult to implement. A reasonable alternative is to allow forward model adaptation to 're-tune' the controller. Our simulations show that, as expected, forward model adaptation alone does not produce optimal trajectories during reaching movements perturbed by force fields. However, they also show that re-optimizing the controller from the forward model can be sub-optimal. This is because, in a system with state correlations or redundancies, accurate prediction requires different information than optimal control. We find that adding noise to the movements that matches noise found in human data is enough to overcome this problem. However, since the state space for control of real movements is far more complex than in our simple simulations, the effects of correlations on re-adaptation of the controller from the forward model cannot be overlooked.

  20. A simple shear limited, single size, time dependent flocculation model

    Science.gov (United States)

    Kuprenas, R.; Tran, D. A.; Strom, K.

    2017-12-01

    This research focuses on the modeling of flocculation of cohesive sediment due to turbulent shear, specifically investigating the dependency of flocculation on the concentration of cohesive sediment. Flocculation is important in larger sediment transport models, as cohesive particles can form aggregates that are orders of magnitude larger than their unflocculated state. As the settling velocity of each particle is determined by the sediment size, density, and shape, accounting for this aggregation is important in determining where the sediment is deposited. This study provides a new formulation for flocculation of cohesive sediment by modifying the Winterwerp (1998) flocculation model (W98) so that it limits floc size to the Kolmogorov micro length scale. The W98 model is a simple approach that calculates the average floc size as a function of time. Because of its simplicity, the W98 model is ideal for implementing in larger sediment transport models; however, it tends to overpredict the dependency of the floc size on concentration. It was found that modifying the coefficients within the original model did not allow the model to capture the dependency on concentration. Therefore, a new term was added within the breakup kernel of the W98 formulation. The result is a single-size, shear-limited, time-dependent flocculation model that effectively captures the dependency of the equilibrium floc size on both suspended sediment concentration and the time to equilibrium. The overall behavior of the new model is explored and shown to align well with other studies on flocculation. Winterwerp, J. C. (1998). A simple model for turbulence induced flocculation of cohesive sediment. Journal of Hydraulic Research, 36(3):309-326.
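
    The single-size, time-dependent character of such a model can be sketched as a relaxation of the mean floc size toward an equilibrium value, capped at the Kolmogorov micro length scale. This is only a toy illustration; the actual W98 growth and breakup kernels are not reproduced, and the names and rates below are hypothetical:

```python
def floc_size_evolution(d0, d_eq, d_kolmogorov, rate, dt, steps):
    """Toy single-size flocculation model: the mean floc size relaxes toward
    an equilibrium size (set by shear and concentration) and is capped at the
    Kolmogorov micro length scale. The W98 growth/breakup kernels are not
    reproduced; 'rate' is a hypothetical relaxation rate."""
    d = d0
    history = [d]
    for _ in range(steps):
        d += rate * (d_eq - d) * dt   # relaxation toward the equilibrium size
        d = min(d, d_kolmogorov)      # shear-limited cap on the floc size
        history.append(d)
    return history
```

    With an equilibrium size above the Kolmogorov scale, the floc size grows until the cap takes over, which is the shear-limiting behavior the modified model introduces.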

  1. Simple model for deriving sdg interacting boson model Hamiltonians: 150Nd example

    Science.gov (United States)

    Devi, Y. D.; Kota, V. K. B.

    1993-07-01

    A simple and yet useful model for deriving sdg interacting boson model (IBM) Hamiltonians is to assume that single-boson energies derive from identical particle (pp and nn) interactions and proton, neutron single-particle energies, and that the two-body matrix elements for bosons derive from pn interaction, with an IBM-2 to IBM-1 projection of the resulting p-n sdg IBM Hamiltonian. The applicability of this model in generating sdg IBM Hamiltonians is demonstrated, using a single-j-shell Otsuka-Arima-Iachello mapping of the quadrupole and hexadecupole operators in proton and neutron spaces separately and constructing a quadrupole-quadrupole plus hexadecupole-hexadecupole Hamiltonian in the analysis of the spectra, B(E2)'s, and E4 strength distribution in the example of 150Nd.

  2. Simple model for deriving sdg interacting boson model Hamiltonians: 150Nd example

    International Nuclear Information System (INIS)

    Devi, Y.D.; Kota, V.K.B.

    1993-01-01

    A simple and yet useful model for deriving sdg interacting boson model (IBM) Hamiltonians is to assume that single-boson energies derive from identical particle (pp and nn) interactions and proton, neutron single-particle energies, and that the two-body matrix elements for bosons derive from pn interaction, with an IBM-2 to IBM-1 projection of the resulting p-n sdg IBM Hamiltonian. The applicability of this model in generating sdg IBM Hamiltonians is demonstrated, using a single-j-shell Otsuka-Arima-Iachello mapping of the quadrupole and hexadecupole operators in proton and neutron spaces separately and constructing a quadrupole-quadrupole plus hexadecupole-hexadecupole Hamiltonian in the analysis of the spectra, B(E2)'s, and E4 strength distribution in the example of 150Nd.

  3. Microarray-based cancer prediction using soft computing approach.

    Science.gov (United States)

    Wang, Xiaosheng; Gotoh, Osamu

    2009-05-26

    One of the difficulties in using gene expression profiles to predict cancer is how to effectively select a few informative genes to construct accurate prediction models from thousands or tens of thousands of genes. We screen highly discriminative genes and gene pairs to create simple prediction models involving single genes or gene pairs on the basis of a soft computing approach and rough set theory. Accurate cancer prediction is obtained when we apply the simple prediction models to four cancer gene expression datasets: CNS tumor, colon tumor, lung cancer and DLBCL. Some genes closely correlated with the pathogenesis of specific or general cancers are identified. In contrast with other models, our models are simple, effective and robust. Moreover, our models are interpretable, as they are based on decision rules. Our results demonstrate that very simple models may perform well on molecular cancer prediction and that important gene markers of cancer can be detected if the gene selection approach is chosen reasonably.

  4. A Simple Model to Study Tau Pathology

    Directory of Open Access Journals (Sweden)

    Alexander L. Houck

    2016-01-01

    Tau proteins play a role in the stabilization of microtubules, but in pathological conditions (tauopathies), tau is modified by phosphorylation and can assemble into aberrant aggregates. These aggregates could be toxic to cells, and different cell models have been used to test for compounds that might prevent these tau modifications. Here, we have used a cell model involving the overexpression of human tau in human embryonic kidney 293 cells. In human embryonic kidney 293 cells expressing tau in a stable manner, we have been able to replicate the phosphorylation of intracellular tau. This intracellular tau increases its own level of phosphorylation and aggregates, likely due to the regulatory effect of some growth factors on specific tau kinases such as GSK3. In these conditions, a change in secreted tau was observed. Reversal of phosphorylation and aggregation of tau was achieved by the use of lithium, a GSK3 inhibitor. Thus, we propose this as a simple cell model to study tau pathology in nonneuronal cells, given their viability and ease of handling.

  5. A Frank mixture copula family for modeling higher-order correlations of neural spike counts

    International Nuclear Information System (INIS)

    Onken, Arno; Obermayer, Klaus

    2009-01-01

    In order to evaluate the importance of higher-order correlations in neural spike count codes, flexible statistical models of dependent multivariate spike counts are required. Copula families, parametric multivariate distributions that represent dependencies, can be applied to construct such models. We introduce the Frank mixture family as a new copula family that has separate parameters for all pairwise and higher-order correlations. In contrast to the Farlie-Gumbel-Morgenstern copula family that shares this property, the Frank mixture copula can model strong correlations. We apply spike count models based on the Frank mixture copula to data generated by a network of leaky integrate-and-fire neurons and compare the goodness of fit to distributions based on the Farlie-Gumbel-Morgenstern family. Finally, we evaluate the importance of using proper single neuron spike count distributions on the Shannon information. We find notable deviations in the entropy that increase with decreasing firing rates. Moreover, we find that the Frank mixture family increases the log likelihood of the fit significantly compared to the Farlie-Gumbel-Morgenstern family. This shows that the Frank mixture copula is a useful tool to assess the importance of higher-order correlations in spike count codes.
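
    The basic building block of such models is the bivariate Frank copula CDF, C_theta(u, v) = -(1/theta) ln[1 + (e^(-theta*u) - 1)(e^(-theta*v) - 1)/(e^(-theta) - 1)]; the Frank mixture family of this record combines such terms with separate parameters for pairwise and higher-order interactions, which is not reproduced here. A minimal sketch:

```python
import math

def frank_copula(u, v, theta):
    """Bivariate Frank copula CDF C_theta(u, v) for theta != 0 (theta = 0 is
    the independence limit u*v). The Frank mixture family of the record builds
    on such terms; only the basic copula is sketched here."""
    if theta == 0.0:
        return u * v  # independence limit
    num = (math.exp(-theta * u) - 1.0) * (math.exp(-theta * v) - 1.0)
    den = math.exp(-theta) - 1.0
    return -math.log(1.0 + num / den) / theta
```

    Unlike the Farlie-Gumbel-Morgenstern family mentioned in the record, the Frank copula can represent strong dependence as |theta| grows large.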

  6. Ignition delay time correlation of fuel blends based on Livengood-Wu description

    KAUST Repository

    Khaled, Fathi

    2017-08-17

    In this work, a universal methodology for ignition delay time (IDT) correlation of multicomponent fuel mixtures is reported. The method is applicable over wide ranges of temperatures, pressures, and equivalence ratios. n-Heptane, iso-octane, toluene, ethanol and their blends are investigated in this study because of their relevance to gasoline surrogate formulation. The proposed methodology combines benefits from the Livengood-Wu integral, the cool flame characteristics and the Arrhenius behavior of the high-temperature ignition delay time to suggest a simple and comprehensive formulation for correlating the ignition delay times of pure components and blends. The IDTs of fuel blends usually have complex dependences on temperature, pressure, equivalence ratio and composition of the blend. The Livengood-Wu integral is applied here to relate the NTC region and the cool flame phenomenon. The integral is further extended to obtain a relation between the IDTs of fuel blends and pure components. Ignition delay times calculated using the proposed methodology are in excellent agreement with those simulated using a detailed chemical kinetic model for n-heptane, iso-octane, toluene, ethanol and blends of these components. Finally, very good agreement is also observed for combustion phasing in homogeneous charge compression ignition (HCCI) predictions between simulations performed with detailed chemistry and calculations using the developed ignition delay correlation.
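
    The Livengood-Wu criterion underlying this methodology predicts ignition when the integral of dt/tau over the pressure-temperature history reaches unity, where tau is the instantaneous ignition delay time. A minimal sketch of that integration (the caller-supplied tau(t) stands in for the correlations developed in the paper):

```python
def livengood_wu_ignition_time(tau, t_max, dt):
    """Integrate the Livengood-Wu integral of dt'/tau(t') from 0 until it
    reaches 1; tau(t) is the instantaneous ignition delay time along the
    pressure-temperature history. Returns the predicted ignition time, or
    None if the integral stays below 1 up to t_max."""
    acc, t = 0.0, 0.0
    while t < t_max:
        acc += dt / tau(t)   # contribution of this time slice
        t += dt
        if acc >= 1.0:
            return t
    return None
```

    For a constant tau the predicted ignition time reduces to tau itself, as expected; for a time-varying history the integral accumulates the changing reactivity.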

  7. Correlation functions of two-matrix models

    International Nuclear Information System (INIS)

    Bonora, L.; Xiong, C.S.

    1993-11-01

    We show how to calculate correlation functions of two matrix models without any approximation technique (except for genus expansion). In particular we do not use any continuum limit technique. This allows us to find many solutions which are invisible to the latter technique. To reach our goal we make full use of the integrable hierarchies and their reductions which were shown in previous papers to naturally appear in multi-matrix models. The second ingredient we use, even though to a lesser extent, are the W-constraints. In fact an explicit solution of the relevant hierarchy, satisfying the W-constraints (string equation), underlies the explicit calculation of the correlation functions. The correlation functions we compute lend themselves to a possible interpretation in terms of topological field theories. (orig.)

  8. Effect of ventilation on concentrations of indoor radon- and thoron-progeny: Experimental verification of a simple model

    International Nuclear Information System (INIS)

    Sheets, R.W.; Thompson, C.C.

    1993-01-01

    Different models relating the dependence of radon (222Rn) and thoron (220Rn) progeny activities on room ventilation rates are presented in the literature. Some of these models predict that, as the rate of ventilation increases, activities of thoron progeny decrease more rapidly than those of radon progeny. Other models predict the opposite trend. In this study, alpha activities of the radon progeny 218Po, 214Pb, and 214Bi, together with the thoron progeny 212Pb, were measured over periods of several days in two rooms of a closed, heated house. Effective ventilation rates were calculated from measured 214Pb/214Bi ratios. A simple model in which progeny concentrations decrease by radioactive decay and by dilution with outside air has been used to calculate 212Pb/214Pb ratios as a function of ventilation rate. Calculated ratios are found to correlate significantly with experimentally determined ratios (R² ≈ 0.5-0.8 at p < 0.005), confirming that, for this house, thoron progeny activities decrease faster than radon progeny activities with increasing rates of ventilation
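
    The simple model described here can be sketched as a steady-state balance in which each progeny species is removed by radioactive decay and by dilution with outside air: dC/dt = S - (lambda + v)C = 0, so C = S/(lambda + v). A toy implementation (unit source strengths are hypothetical, and the ventilation dependence of the parent activities is neglected):

```python
import math

def steady_state_activity(source, decay_const, vent_rate):
    """C = S / (lambda + v): progeny removed by decay and by dilution."""
    return source / (decay_const + vent_rate)

# Decay constants (per hour) from the half-lives: Pb-212 ~ 10.64 h, Pb-214 ~ 26.8 min.
LAMBDA_PB212 = math.log(2.0) / 10.64
LAMBDA_PB214 = math.log(2.0) / (26.8 / 60.0)

def pb_ratio(vent_rate, s212=1.0, s214=1.0):
    """212Pb/214Pb activity ratio (hypothetical unit source strengths) as a
    function of the air-exchange rate v in 1/h."""
    return (steady_state_activity(s212, LAMBDA_PB212, vent_rate) /
            steady_state_activity(s214, LAMBDA_PB214, vent_rate))
```

    Because 212Pb is much longer-lived than 214Pb, dilution removes a larger fraction of it, so the 212Pb/214Pb ratio falls as the ventilation rate rises, matching the trend confirmed in the record.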

  9. Nonlinear dynamic modeling of a simple flexible rotor system subjected to time-variable base motions

    Science.gov (United States)

    Chen, Liqiang; Wang, Jianjun; Han, Qinkai; Chu, Fulei

    2017-09-01

    Rotor systems carried in transportation systems or subjected to seismic excitations are considered to have a moving base. To study the dynamic behavior of flexible rotor systems subjected to time-variable base motions, a general model is developed based on the finite element method and Lagrange's equation. Two groups of Euler angles are defined to describe the rotation of the rotor with respect to the base and that of the base with respect to the ground. It is found that the base rotations introduce nonlinearities into the model. To verify the proposed model, a novel test rig which can simulate base angular movement is designed. Dynamic experiments on a flexible rotor-bearing system with base angular motions are carried out. Based upon these, numerical simulations are conducted to further study the dynamic response of the flexible rotor under harmonic angular base motions. The effects of base angular amplitude, rotating speed and base frequency on response behaviors are discussed by means of FFT, waterfall and frequency response plots and orbits of the rotor. The FFT and waterfall plots of the disk horizontal and vertical vibrations are marked with multiples of the base frequency and sum and difference tones of the rotating frequency and the base frequency. Their amplitudes increase remarkably when they meet the whirling frequencies of the rotor system.

  10. Copula based prediction models: an application to an aortic regurgitation study

    Directory of Open Access Journals (Sweden)

    Shoukri Mohamed M

    2007-06-01

    Background: An important issue in prediction modeling of multivariate data is the measure of dependence structure. The use of Pearson's correlation as a dependence measure has several pitfalls, and hence regression prediction models based on this correlation may not be an appropriate methodology. As an alternative, a copula-based methodology for prediction modeling and an algorithm to simulate data are proposed. Methods: The method consists of introducing copulas as an alternative to the correlation coefficient commonly used as a measure of dependence. An algorithm based on the marginal distributions of random variables is applied to construct the Archimedean copulas. Monte Carlo simulations are carried out to replicate datasets, estimate prediction model parameters and validate them using Lin's concordance measure. Results: We have carried out a correlation-based regression analysis on data from 20 patients aged 17-82 years on pre-operative and post-operative ejection fractions after surgery and estimated the prediction model: Post-operative ejection fraction = -0.0658 + 0.8403 × (pre-operative ejection fraction); p = 0.0008; 95% confidence interval of the slope coefficient (0.3998, 1.2808). From the exploratory data analysis, it is noted that both the pre-operative and post-operative ejection fraction measurements have slight departures from symmetry and are skewed to the left. It is also noted that the measurements tend to be widely spread and have shorter tails compared to the normal distribution. Therefore predictions made from the correlation-based model corresponding to pre-operative ejection fraction measurements in the lower range may not be accurate. Further, it is found that the best approximated marginal distributions of pre-operative and post-operative ejection fractions (using q-q plots) are gamma distributions. The copula-based prediction model is estimated as: Post-operative ejection fraction = - 0.0933 + 0
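
    The correlation-based regression model quoted in this record is an ordinary least-squares fit of the form y = a + b*x. A minimal sketch of such a fit (the copula-based alternative is not reproduced here; the data below are illustrative, not the study's):

```python
def ols_fit(x, y):
    """Ordinary least-squares fit of y = a + b*x; returns (a, b).
    b = S_xy / S_xx and a = mean(y) - b * mean(x)."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxy = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
    sxx = sum((xi - mx) ** 2 for xi in x)
    b = sxy / sxx
    return my - b * mx, b
```

    Applied to paired pre-operative and post-operative measurements, this yields intercept and slope estimates of the kind reported above; as the record notes, such a fit can mislead when the margins are skewed, which motivates the copula approach.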

  11. Modelling the phonotactic structure of natural language words with simple recurrent networks

    NARCIS (Netherlands)

    Stoianov, [No Value; Nerbonne, J; Bouma, H; Coppen, PA; vanHalteren, H; Teunissen, L

    1998-01-01

    Simple Recurrent Networks (SRN) are Neural Network (connectionist) models able to process natural language. Phonotactics concerns the order of symbols in words. We continued an earlier unsuccessful trial to model the phonotactics of Dutch words with SRNs. In order to overcome the previously reported

  12. Solutions of simple dual bootstrap models satisfying Lee--Veneziano relation and the smallness of cut discontinuities

    International Nuclear Information System (INIS)

    Chiu, C.B.; Hossain, M.; Tow, D.M.

    1977-07-01

    To investigate the t-dependent solutions of simple dual bootstrap models, two general formulations are discussed, one without and one with cut cancellation at the planar level. The possible corresponding production mechanisms are discussed. In contrast to Bishari's formulation, both models recover the Lee-Veneziano relation, i.e., in the peak approximation the Pomeron intercept is unity. The solutions based on an exponential form for the reduced triple-Reggeon vertex for both models are discussed in detail. Also calculated are the cut discontinuities for both models and for Bishari's and it is shown that at both the planar and cylinder levels they are small compared with the corresponding pole residues. Precocious asymptotic planarity is also found in the solutions

  13. Model Predictive Control based on Finite Impulse Response Models

    DEFF Research Database (Denmark)

    Prasath, Guru; Jørgensen, John Bagterp

    2008-01-01

    We develop a regularized l2 finite impulse response (FIR) predictive controller with input and input-rate constraints. Feedback is based on a simple constant output disturbance filter. The performance of the predictive controller in the face of plant-model mismatch is investigated by simulations and related to the uncertainty of the impulse response coefficients. The simulations can be used to benchmark l2 MPC against FIR-based robust MPC as well as to estimate the maximum performance improvements achievable by robust MPC.
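
    The model form underlying an FIR predictive controller is the convolution of the input history with the impulse response coefficients, y[k] = sum_i h[i] * u[k-i]. A minimal sketch of that prediction step (the regularization, disturbance filter and constraint handling of the paper are omitted):

```python
def fir_predict(h, u):
    """Predict plant outputs from finite impulse response coefficients h and
    input sequence u: y[k] = sum_i h[i] * u[k-i], with u assumed zero before
    k = 0. This is only the FIR model step of the predictive controller."""
    y = []
    for k in range(len(u)):
        y.append(sum(h[i] * u[k - i] for i in range(min(k + 1, len(h)))))
    return y
```

    A predictive controller built on this model chooses future inputs u so that the predicted y tracks a reference, subject to input and input-rate constraints.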

  14. Simple adaptive sparse representation based classification schemes for EEG based brain-computer interface applications.

    Science.gov (United States)

    Shin, Younghak; Lee, Seungchan; Ahn, Minkyu; Cho, Hohyun; Jun, Sung Chan; Lee, Heung-No

    2015-11-01

    One of the main problems related to electroencephalogram (EEG) based brain-computer interface (BCI) systems is the non-stationarity of the underlying EEG signals. This results in the deterioration of the classification performance during experimental sessions. Therefore, adaptive classification techniques are required for EEG based BCI applications. In this paper, we propose simple adaptive sparse representation based classification (SRC) schemes. Supervised and unsupervised dictionary update techniques for new test data and a dictionary modification method by using the incoherence measure of the training data are investigated. The proposed methods are very simple and additional computation for the re-training of the classifier is not needed. The proposed adaptive SRC schemes are evaluated using two BCI experimental datasets. The proposed methods are assessed by comparing classification results with the conventional SRC and other adaptive classification methods. On the basis of the results, we find that the proposed adaptive schemes show relatively improved classification accuracy as compared to conventional methods without requiring additional computation. Copyright © 2015 Elsevier Ltd. All rights reserved.

  15. Simple systematization of vibrational excitation cross-section calculations for resonant electron-molecule scattering in the boomerang and impulse models.

    Science.gov (United States)

    Sarma, Manabendra; Adhikari, S; Mishra, Manoj K

    2007-01-28

    Vibrational excitation cross-sections (ν_i → ν_f) are calculated from wave packets ψ_ν_i(R,t) ≈ exp[-i H_A2-(R) t/ℏ] φ_ν_i(R), with time evolution under the influence of the resonance anionic Hamiltonian H_A2- (A2- = N2-/H2-) implemented using Lanczos and fast Fourier transforms. The target (A2) vibrational eigenfunctions φ_ν_i(R) and φ_ν_f(R) are calculated using the Fourier grid Hamiltonian method applied to potential energy (PE) curves of the neutral target. Application of this simple systematization to calculate vibrational structure in e-N2 and e-H2 scattering cross-sections provides mechanistic insights into the features underlying the presence/absence of structure in e-N2 and e-H2 scattering cross-sections. The results obtained with approximate PE curves are in reasonable agreement with experimental/calculated cross-section profiles, and cross-correlation functions provide a simple demarcation between the boomerang and impulse models.

  16. Neural Network-Based Coronary Heart Disease Risk Prediction Using Feature Correlation Analysis

    Directory of Open Access Journals (Sweden)

    Jae Kwon Kim

    2017-01-01

    Background. Of the machine learning techniques used in predicting coronary heart disease (CHD), neural networks (NN) are popularly used to improve performance accuracy. Objective. Even though NN-based systems provide meaningful results based on clinical experiments, medical experts are not satisfied with their predictive performance because an NN is trained in a "black-box" style. Method. We sought to devise an NN-based prediction of CHD risk using feature correlation analysis (NN-FCA) in two stages: first, a feature selection stage, in which features are ranked according to their importance in predicting CHD risk, and second, a feature correlation analysis stage, in which the correlations between feature relations and the output of each NN predictor are examined. Result. Of the 4146 individuals in the Korean dataset evaluated, 3031 had low CHD risk and 1115 had high CHD risk. The area under the receiver operating characteristic (ROC) curve of the proposed model (0.749 ± 0.010) was larger than that of the Framingham risk score (FRS) (0.393 ± 0.010). Conclusions. The proposed NN-FCA, which utilizes feature correlation analysis, was found to be better than the FRS in terms of CHD risk prediction. Furthermore, the proposed model resulted in a larger area under the ROC curve and more accurate predictions of CHD risk in the Korean population than the FRS.

  17. Concordance-based Kendall's Correlation for Computationally-Light vs. Computationally-Heavy Centrality Metrics: Lower Bound for Correlation

    Directory of Open Access Journals (Sweden)

    Natarajan Meghanathan

    2017-01-01

    We identify three different levels of correlation (pair-wise relative ordering, network-wide ranking and linear regression) that could be assessed between a computationally-light centrality metric and a computationally-heavy centrality metric for real-world networks. The Kendall's concordance-based correlation measure can be used to quantitatively assess how well the relative ordering of two vertices vi and vj with respect to a computationally-light centrality metric matches their relative ordering with respect to a computationally-heavy centrality metric. We hypothesize that the pair-wise relative ordering (concordance-based) assessment of the correlation between centrality metrics is the strictest of the three levels of correlation, and claim that the Kendall's concordance-based correlation coefficient will be lower than the correlation coefficients observed with the more relaxed levels of correlation (the linear regression-based Pearson's product-moment correlation coefficient and the network-wide ranking-based Spearman's correlation coefficient). We validate our hypothesis by evaluating the three correlation coefficients between two sets of centrality metrics: the computationally-light degree and local clustering coefficient complement-based degree centrality metrics, and the computationally-heavy eigenvector centrality, betweenness centrality and closeness centrality metrics, for a diverse collection of 50 real-world networks.
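
    Kendall's concordance-based correlation coefficient used in this record can be computed directly from concordant and discordant vertex pairs, tau = (n_c - n_d) / (n(n-1)/2). A minimal sketch (ties are simply skipped, a simplification of the tau-b handling often used in practice):

```python
def kendall_tau(a, b):
    """Kendall rank correlation of two equal-length score lists, from the
    counts of concordant (n_c) and discordant (n_d) pairs over all n(n-1)/2
    pairs; tied pairs contribute to neither count."""
    n = len(a)
    nc = nd = 0
    for i in range(n):
        for j in range(i + 1, n):
            s = (a[i] - a[j]) * (b[i] - b[j])  # > 0 concordant, < 0 discordant
            if s > 0:
                nc += 1
            elif s < 0:
                nd += 1
    return (nc - nd) / (n * (n - 1) / 2)
```

    Applying this to the scores of every vertex under a computationally-light and a computationally-heavy centrality metric gives the pair-wise relative-ordering level of correlation discussed above.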

  18. Testing Cross-Sectional Correlation in Large Panel Data Models with Serial Correlation

    Directory of Open Access Journals (Sweden)

    Badi H. Baltagi

    2016-11-01

    This paper considers the problem of testing cross-sectional correlation in large panel data models with serially-correlated errors. It finds that existing tests for cross-sectional correlation encounter size distortions when there is serial correlation in the errors. To control the size, this paper proposes a modification of Pesaran's Cross-sectional Dependence (CD) test to account for serial correlation of an unknown form in the error term. We derive the limiting distribution of this test as N, T → ∞. The test is distribution-free and allows for unknown forms of serial correlation in the errors. Monte Carlo simulations show that the test has good size and power for large panels when serial correlation in the errors is present.
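
    Pesaran's (unmodified) CD statistic discussed in this record is CD = sqrt(2T/(N(N-1))) * Σ_{i<j} ρ̂_ij, where ρ̂_ij is the sample correlation between the residuals of cross-sectional units i and j. A minimal sketch (the serial-correlation adjustment proposed in the paper is not included):

```python
import math

def pesaran_cd(residuals):
    """Pesaran's CD statistic for an N x T list of residual rows (one row per
    cross-sectional unit). Under the null of no cross-sectional dependence
    (and well-behaved errors) CD is asymptotically standard normal."""
    N = len(residuals)
    T = len(residuals[0])

    def corr(x, y):
        # Sample Pearson correlation of two length-T residual series.
        mx, my = sum(x) / T, sum(y) / T
        sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
        sxx = math.sqrt(sum((a - mx) ** 2 for a in x))
        syy = math.sqrt(sum((b - my) ** 2 for b in y))
        return sxy / (sxx * syy)

    s = sum(corr(residuals[i], residuals[j])
            for i in range(N) for j in range(i + 1, N))
    return math.sqrt(2.0 * T / (N * (N - 1))) * s
```

    The record's point is that when the errors are serially correlated, this unmodified statistic suffers size distortions, which the proposed modification corrects.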

  19. Einstein-Podolsky-Rosen correlations and Bell correlations in the simplest scenario

    Science.gov (United States)

    Quan, Quan; Zhu, Huangjun; Fan, Heng; Yang, Wen-Li

    2017-06-01

    Einstein-Podolsky-Rosen (EPR) steering is an intermediate type of quantum nonlocality which sits between entanglement and Bell nonlocality. A set of correlations is Bell nonlocal if it does not admit a local hidden variable (LHV) model, while it is EPR nonlocal if it does not admit a local hidden variable-local hidden state (LHV-LHS) model. It is interesting to know what states can generate EPR-nonlocal correlations in the simplest nontrivial scenario, that is, two projective measurements for each party sharing a two-qubit state. Here we show that a two-qubit state can generate EPR-nonlocal full correlations (excluding marginal statistics) in this scenario if and only if it can generate Bell-nonlocal correlations. If full statistics (including marginal statistics) is taken into account, surprisingly, the same scenario can manifest the simplest one-way steering and the strongest hierarchy between steering and Bell nonlocality. To illustrate these intriguing phenomena in simple setups, several concrete examples are discussed in detail, which facilitates experimental demonstration. In the course of study, we introduce the concept of restricted LHS models and thereby derive a necessary and sufficient semidefinite-programming criterion to determine the steerability of any bipartite state under given measurements. Analytical criteria are further derived in several scenarios of strong theoretical and experimental interest.

  20. Restoring method for missing data of spatial structural stress monitoring based on correlation

    Science.gov (United States)

    Zhang, Zeyu; Luo, Yaozhi

    2017-07-01

    Long-term monitoring of spatial structures is of great importance for a full understanding of their performance and safety. Missing segments in the monitoring data record affect data analysis and safety assessment of the structure. Based on long-term monitoring data from the steel structure of the Hangzhou Olympic Center Stadium, the correlation between the stress changes at different measuring points is studied, and an interpolation method for the missing stress data is proposed. To fit the correlation, stress data from correlated measuring points are selected over the three months of the season in which the data are missing. Daytime and nighttime data are fitted separately for interpolation. For simple linear regression, when a single point's correlation coefficient is 0.9 or more, the average interpolation error is about 5%. For multiple linear regression, interpolation accuracy does not increase significantly once more than six correlated points are used. In the construction stage, the stress baseline value of each construction step should be calculated before interpolating missing data; the average error is then within 10%. The interpolation error for continuous missing data is slightly larger than for discrete missing data. The data missing rate for this method should not exceed 30%. Finally, one measuring point's missing monitoring data are restored to verify the validity of the method.
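    The interpolation idea, regress a gauge with a gap on a strongly correlated neighbouring gauge and fill the gap from the fit, can be sketched with synthetic stress series (all names and numbers below are illustrative, not the stadium data):

```python
import numpy as np

rng = np.random.default_rng(1)
t = np.arange(300)
# two correlated stress gauges: a daily-cycle-like signal plus noise
neighbour = 20 * np.sin(2 * np.pi * t / 100) + rng.normal(0, 1, t.size)
target = 1.5 * neighbour + 3 + rng.normal(0, 1, t.size)

missing = slice(120, 150)            # a block of lost readings
known = np.ones(t.size, bool)
known[missing] = False

# least-squares fit target ~ a * neighbour + b on the surviving data
a, b = np.polyfit(neighbour[known], target[known], 1)
restored = a * neighbour[missing] + b

err = np.abs(restored - target[missing]).mean() / np.abs(target[missing]).mean()
print(f"mean relative interpolation error: {err:.1%}")
```

With a correlation this strong the relative error lands in the few-percent range, consistent with the roughly 5% reported above for single-point regression.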

  1. Trophic dynamics of a simple model ecosystem.

    Science.gov (United States)

    Bell, Graham; Fortier-Dubois, Étienne

    2017-09-13

    We have constructed a model of community dynamics that is simple enough to enumerate all possible food webs, yet complex enough to represent a wide range of ecological processes. We use the transition matrix to predict the outcome of succession and then investigate how the transition probabilities are governed by resource supply and immigration. Low-input regimes lead to simple communities whereas trophically complex communities develop when there is an adequate supply of both resources and immigrants. Our interpretation of trophic dynamics in complex communities hinges on a new principle of mutual replenishment, defined as the reciprocal alternation of state in a pair of communities linked by the invasion and extinction of a shared species. Such neutral couples are the outcome of succession under local dispersal and imply that food webs will often be made up of suites of trophically equivalent species. When immigrants arrive from an external pool of fixed composition a similar principle predicts a dynamic core of webs constituting a neutral interchange network, although communities may express an extensive range of other webs whose membership is only in part predictable. The food web is not in general predictable from whole-community properties such as productivity or stability, although it may profoundly influence these properties. © 2017 The Author(s).

  2. Grotoco@SLAM: Second Language Acquisition Modeling with Simple Features, Learners and Task-wise Models

    DEFF Research Database (Denmark)

    Klerke, Sigrid; Martínez Alonso, Héctor; Plank, Barbara

    2018-01-01

    We present our submission to the 2018 Duolingo Shared Task on Second Language Acquisition Modeling (SLAM). We focus on evaluating a range of features for the task, including user-derived measures, while examining how far we can get with a simple linear classifier. Our analysis reveals that errors...

  3. Are v1 simple cells optimized for visual occlusions? A comparative study.

    Directory of Open Access Journals (Sweden)

    Jörg Bornschein

    Full Text Available Simple cells in primary visual cortex were famously found to respond to low-level image components such as edges. Sparse coding and independent component analysis (ICA) emerged as the standard computational models for simple cell coding because they linked their receptive fields to the statistics of visual stimuli. However, a salient feature of image statistics, occlusions of image components, is not considered by these models. Here we ask if occlusions have an effect on the predicted shapes of simple cell receptive fields. We use a comparative approach to answer this question and investigate two models for simple cells: a standard linear model and an occlusive model. For both models we simultaneously estimate optimal receptive fields, sparsity and stimulus noise. The two models are identical except for their component superposition assumption. We find the image encoding and receptive fields predicted by the models to differ significantly. While both models predict many Gabor-like fields, the occlusive model predicts a much sparser encoding and high percentages of 'globular' receptive fields. This relatively new center-surround type of simple cell response has been observed since reverse correlation came into use in experimental studies. While high percentages of 'globular' fields can be obtained using specific choices of sparsity and overcompleteness in linear sparse coding, no or only low proportions are reported in the vast majority of studies on linear models (including all ICA models). Likewise, for the linear model investigated here with optimal sparsity, only low proportions of 'globular' fields are observed. In comparison, the occlusive model robustly infers high proportions and can match the experimentally observed high proportions of 'globular' fields well. Our computational study, therefore, suggests that 'globular' fields may be evidence for an optimal encoding of visual occlusions in primary visual cortex.

  4. Are v1 simple cells optimized for visual occlusions? A comparative study.

    Science.gov (United States)

    Bornschein, Jörg; Henniges, Marc; Lücke, Jörg

    2013-01-01

    Simple cells in primary visual cortex were famously found to respond to low-level image components such as edges. Sparse coding and independent component analysis (ICA) emerged as the standard computational models for simple cell coding because they linked their receptive fields to the statistics of visual stimuli. However, a salient feature of image statistics, occlusions of image components, is not considered by these models. Here we ask if occlusions have an effect on the predicted shapes of simple cell receptive fields. We use a comparative approach to answer this question and investigate two models for simple cells: a standard linear model and an occlusive model. For both models we simultaneously estimate optimal receptive fields, sparsity and stimulus noise. The two models are identical except for their component superposition assumption. We find the image encoding and receptive fields predicted by the models to differ significantly. While both models predict many Gabor-like fields, the occlusive model predicts a much sparser encoding and high percentages of 'globular' receptive fields. This relatively new center-surround type of simple cell response has been observed since reverse correlation came into use in experimental studies. While high percentages of 'globular' fields can be obtained using specific choices of sparsity and overcompleteness in linear sparse coding, no or only low proportions are reported in the vast majority of studies on linear models (including all ICA models). Likewise, for the linear model investigated here with optimal sparsity, only low proportions of 'globular' fields are observed. In comparison, the occlusive model robustly infers high proportions and can match the experimentally observed high proportions of 'globular' fields well. Our computational study, therefore, suggests that 'globular' fields may be evidence for an optimal encoding of visual occlusions in primary visual cortex.
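    As a minimal illustration of ICA, one of the standard simple-cell coding models discussed above, here is generic blind source separation on synthetic signals (this demonstrates the linear ICA baseline only, not the paper's occlusive model):

```python
import numpy as np
from sklearn.decomposition import FastICA

# two independent sources, linearly mixed
t = np.linspace(0, 8, 2000)
s1 = np.sin(2 * t)                       # smooth source
s2 = np.sign(np.sin(3 * t))              # square-wave source
S = np.c_[s1, s2]
A = np.array([[1.0, 0.5], [0.5, 2.0]])   # mixing matrix
X = S @ A.T                              # observed mixtures

ica = FastICA(n_components=2, random_state=0)
S_hat = ica.fit_transform(X)             # estimated independent components

# components come back in arbitrary order and sign, so compare by |corr|
C = np.abs(np.corrcoef(np.c_[S, S_hat], rowvar=False)[:2, 2:])
print(np.round(C, 2))
```

Each true source is recovered (up to sign and permutation) almost perfectly; on natural image patches the same machinery yields Gabor-like components.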

  5. Time-varying coefficient vector autoregressions model based on dynamic correlation with an application to crude oil and stock markets

    International Nuclear Information System (INIS)

    Lu, Fengbin; Qiao, Han; Wang, Shouyang; Lai, Kin Keung; Li, Yuze

    2017-01-01

    This paper proposes a new time-varying coefficient vector autoregressions (VAR) model, in which the coefficient is a linear function of dynamic lagged correlation. The proposed model allows for flexibility in choices of dynamic correlation models (e.g. dynamic conditional correlation generalized autoregressive conditional heteroskedasticity (GARCH) models, Markov-switching GARCH models and multivariate stochastic volatility models), which indicates that it can describe many types of time-varying causal effects. Time-varying causal relations between West Texas Intermediate (WTI) crude oil and the US Standard and Poor’s 500 (S&P 500) stock markets are examined by the proposed model. The empirical results show that their causal relations evolve with time and display complex characters. Both positive and negative causal effects of the WTI on the S&P 500 in the subperiods have been found and confirmed by the traditional VAR models. Similar results have been obtained in the causal effects of S&P 500 on WTI. In addition, the proposed model outperforms the traditional VAR model.

  6. Time-varying coefficient vector autoregressions model based on dynamic correlation with an application to crude oil and stock markets

    Energy Technology Data Exchange (ETDEWEB)

    Lu, Fengbin, E-mail: fblu@amss.ac.cn [Academy of Mathematics and Systems Science, Chinese Academy of Sciences, Beijing 100190 (China); Qiao, Han, E-mail: qiaohan@ucas.ac.cn [School of Economics and Management, University of Chinese Academy of Sciences, Beijing 100190 (China); Wang, Shouyang, E-mail: sywang@amss.ac.cn [School of Economics and Management, University of Chinese Academy of Sciences, Beijing 100190 (China); Lai, Kin Keung, E-mail: mskklai@cityu.edu.hk [Department of Management Sciences, City University of Hong Kong (Hong Kong); Li, Yuze, E-mail: richardyz.li@mail.utoronto.ca [Department of Industrial Engineering, University of Toronto (Canada)

    2017-01-15

    This paper proposes a new time-varying coefficient vector autoregressions (VAR) model, in which the coefficient is a linear function of dynamic lagged correlation. The proposed model allows for flexibility in choices of dynamic correlation models (e.g. dynamic conditional correlation generalized autoregressive conditional heteroskedasticity (GARCH) models, Markov-switching GARCH models and multivariate stochastic volatility models), which indicates that it can describe many types of time-varying causal effects. Time-varying causal relations between West Texas Intermediate (WTI) crude oil and the US Standard and Poor’s 500 (S&P 500) stock markets are examined by the proposed model. The empirical results show that their causal relations evolve with time and display complex characters. Both positive and negative causal effects of the WTI on the S&P 500 in the subperiods have been found and confirmed by the traditional VAR models. Similar results have been obtained in the causal effects of S&P 500 on WTI. In addition, the proposed model outperforms the traditional VAR model.
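    The core construction, a coefficient that is a linear function of a lagged dynamic correlation, can be sketched with simulated data, using a rolling-window correlation as a cheap stand-in for a DCC-GARCH estimate (all series and parameters are illustrative, not real WTI/S&P 500 data):

```python
import numpy as np

rng = np.random.default_rng(7)
T, w = 1000, 60
oil = rng.standard_normal(T)
stock = 0.3 * np.roll(oil, 1) + rng.standard_normal(T)   # lagged causal effect

# lagged rolling correlation between the two return series
corr = np.full(T, np.nan)
for s in range(w, T):
    win = slice(s - w, s)                        # strictly past observations
    corr[s] = np.corrcoef(oil[win], stock[win])[0, 1]

# time-varying coefficient: beta_t = b1 + b2 * corr_t, fitted by OLS on
# the interaction regressor corr_t * oil_{t-1}
y = stock[w:]
x1 = np.roll(oil, 1)[w:]
Xmat = np.c_[np.ones_like(y), x1, corr[w:] * x1]
b = np.linalg.lstsq(Xmat, y, rcond=None)[0]
print(np.round(b, 3))
```

The interaction term lets the estimated effect of lagged oil returns on stock returns move with the correlation state, which is the mechanism the model above exploits with proper GARCH-type correlation estimates.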

  7. A Simple Mathematical Model of the Anaerobic Digestion of Wasted Fruits and Vegetables in Mesophilic Conditions

    Directory of Open Access Journals (Sweden)

    Elena Chorukova

    2015-04-01

    Full Text Available Anaerobic digestion is an effective biotechnological process for the treatment of different agricultural, municipal and industrial wastes. The use of mathematical models is a powerful tool for investigation and optimisation of anaerobic digestion. In this paper a simple mathematical model of the anaerobic digestion of wasted fruits and vegetables was developed and verified experimentally and by computer simulations using Simulink. A three-step mass-balance model was considered, including the gas phase. The parameter identification was based on a set of 150 days of dynamical experiments in a laboratory bioreactor. A two-step identification procedure to estimate four model parameters is presented. The results of a 15-day experiment in a pilot-scale bioreactor were then used to validate the model.

  8. Oscillations in a simple climate–vegetation model

    Directory of Open Access Journals (Sweden)

    J. Rombouts

    2015-05-01

    Full Text Available We formulate and analyze a simple dynamical systems model for climate–vegetation interaction. The planet we consider consists of a large ocean and a land surface on which vegetation can grow. The temperature affects vegetation growth on land and the amount of sea ice on the ocean. Conversely, vegetation and sea ice change the albedo of the planet, which in turn changes its energy balance and hence the temperature evolution. Our highly idealized, conceptual model is governed by two nonlinear, coupled ordinary differential equations, one for global temperature, the other for vegetation cover. The model exhibits either bistability between a vegetated and a desert state or oscillatory behavior. The oscillations arise through a Hopf bifurcation off the vegetated state, when the death rate of vegetation is low enough. These oscillations are anharmonic and exhibit a sawtooth shape that is characteristic of relaxation oscillations, as well as suggestive of the sharp deglaciations of the Quaternary. Our model's behavior can be compared, on the one hand, with the bistability of even simpler, Daisyworld-style climate–vegetation models. On the other hand, it can be integrated into the hierarchy of models trying to simulate and explain oscillatory behavior in the climate system. Rigorous mathematical results are obtained that link the nature of the feedbacks with the nature and the stability of the solutions. The relevance of model results to climate variability on various timescales is discussed.

  9. Oscillations in a simple climate-vegetation model

    Science.gov (United States)

    Rombouts, J.; Ghil, M.

    2015-05-01

    We formulate and analyze a simple dynamical systems model for climate-vegetation interaction. The planet we consider consists of a large ocean and a land surface on which vegetation can grow. The temperature affects vegetation growth on land and the amount of sea ice on the ocean. Conversely, vegetation and sea ice change the albedo of the planet, which in turn changes its energy balance and hence the temperature evolution. Our highly idealized, conceptual model is governed by two nonlinear, coupled ordinary differential equations, one for global temperature, the other for vegetation cover. The model exhibits either bistability between a vegetated and a desert state or oscillatory behavior. The oscillations arise through a Hopf bifurcation off the vegetated state, when the death rate of vegetation is low enough. These oscillations are anharmonic and exhibit a sawtooth shape that is characteristic of relaxation oscillations, as well as suggestive of the sharp deglaciations of the Quaternary. Our model's behavior can be compared, on the one hand, with the bistability of even simpler, Daisyworld-style climate-vegetation models. On the other hand, it can be integrated into the hierarchy of models trying to simulate and explain oscillatory behavior in the climate system. Rigorous mathematical results are obtained that link the nature of the feedbacks with the nature and the stability of the solutions. The relevance of model results to climate variability on various timescales is discussed.
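    To illustrate how such a two-equation climate-vegetation model is set up and integrated, here is a sketch with invented right-hand sides and parameters (not the paper's actual albedo and growth laws; this particular parameter choice settles to a stable vegetated state rather than oscillating):

```python
import numpy as np
from scipy.integrate import solve_ivp

def rhs(t, y):
    T, V = y                          # temperature anomaly, vegetation cover
    albedo = 0.4 - 0.2 * V            # vegetation darkens the planet
    dT = 1.0 - 2.0 * albedo - 0.5 * T # toy energy balance with relaxation
    dV = V * (1 - V) * T - 0.1 * V    # logistic growth modulated by T
    return [dT, dV]

sol = solve_ivp(rhs, (0, 200), [0.1, 0.2], rtol=1e-8)
T_end, V_end = sol.y[:, -1]
print(f"steady state: T = {T_end:.3f}, V = {V_end:.3f}")
```

Setting the right-hand sides to zero gives the vegetated equilibrium analytically (here V ≈ 0.911, T ≈ 1.129), and the Jacobian at that point decides between the stable state shown here and the Hopf-type oscillations discussed in the abstract.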

  10. Closed hierarchy of correlations in Markovian open quantum systems

    International Nuclear Information System (INIS)

    Žunkovič, Bojan

    2014-01-01

    We study the Lindblad master equation in the space of operators and provide simple criteria for closeness of the hierarchy of equations for correlations. We separately consider the time evolution of closed and open systems and show that open systems satisfying the closeness conditions are not necessarily of Gaussian type. In addition, we show that dissipation can induce the closeness of the hierarchy of correlations in interacting quantum systems. As an example we study an interacting optomechanical model, the Fermi–Hubbard model, and the Rabi model, all coupled to a fine-tuned Markovian environment and obtain exact analytic expressions for the time evolution of two-point correlations. (paper)

  11. Modelling Multivariate Autoregressive Conditional Heteroskedasticity with the Double Smooth Transition Conditional Correlation GARCH Model

    DEFF Research Database (Denmark)

    Silvennoinen, Annastiina; Teräsvirta, Timo

    In this paper we propose a multivariate GARCH model with a time-varying conditional correlation structure. The new Double Smooth Transition Conditional Correlation GARCH model extends the Smooth Transition Conditional Correlation GARCH model of Silvennoinen and Ter¨asvirta (2005) by including...... another variable according to which the correlations change smoothly between states of constant correlations. A Lagrange multiplier test is derived to test the constancy of correlations against the DSTCC-GARCH model, and another one to test for another transition in the STCC-GARCH framework. In addition......, other specification tests, with the aim of aiding the model building procedure, are considered. Analytical expressions for the test statistics and the required derivatives are provided. The model is applied to a selection of world stock indices, and it is found that time is an important factor affecting...

  12. Simple biophysical model of tumor evasion from immune system control

    Science.gov (United States)

    D'Onofrio, Alberto; Ciancio, Armando

    2011-09-01

    The competitive nonlinear interplay between a tumor and the host's immune system is not only very complex but is also time-changing. A fundamental aspect of this issue is the ability of the tumor to slowly carry out processes that gradually allow it to become less harmed and less susceptible to recognition by the immune system effectors. Here we propose a simple epigenetic escape mechanism that adaptively depends on the interactions per time unit between cells of the two systems. From a biological point of view, our model is based on the concept that a tumor cell that has survived an encounter with a cytotoxic T-lymphocyte (CTL) has an information gain that it transmits to the other cells of the neoplasm. The consequence of this information increase is a decrease in both the probabilities of being killed and of being recognized by a CTL. We show that the mathematical model of this mechanism is formally equal to an evolutionary imitation game dynamics. Numerical simulations of transitory phases complement the theoretical analysis. Implications of the interplay between the above mechanisms and the delivery of immunotherapies are also illustrated.
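    The imitation game dynamics that the authors map their mechanism onto belong to the replicator-equation family. A generic sketch with an illustrative hawk-dove payoff matrix (not the paper's tumor-CTL payoffs):

```python
import numpy as np
from scipy.integrate import solve_ivp

# hawk-dove payoffs with an interior equilibrium at x = (0.5, 0.5)
A = np.array([[-1.0, 2.0],
              [ 0.0, 1.0]])

def replicator(t, x):
    f = A @ x                    # fitness of each strategy
    phi = x @ f                  # population-average fitness
    return x * (f - phi)         # replicator (imitation) dynamics

sol = solve_ivp(replicator, (0, 50), [0.9, 0.1], rtol=1e-9)
print(np.round(sol.y[:, -1], 3))
```

Strategy frequencies stay on the simplex and converge to the mixed equilibrium; in the tumor-immune setting the analogous state variables track the adaptive phenotype frequencies.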

  13. The fermion content of the Standard Model from a simple world-line theory

    Energy Technology Data Exchange (ETDEWEB)

    Mansfield, Paul, E-mail: P.R.W.Mansfield@durham.ac.uk

    2015-04-09

    We describe a simple model that automatically generates the sum over gauge group representations and chiralities of a single generation of fermions in the Standard Model, augmented by a sterile neutrino. The model is a modification of the world-line approach to chiral fermions.

  14. Practicality of Agent-Based Modeling of Civil Violence: an Assessment

    OpenAIRE

    Thron, Christopher; Jackson, Elizabeth

    2015-01-01

    Joshua Epstein (2002) proposed a simple agent-based model to describe the formation and evolution of spontaneous civil violence (such as riots or violent demonstrations). In this paper we study the practical applicability of Epstein's model.
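    Epstein's activation rule can be sketched in a few lines. The version below is a simplified, non-spatial reading of the 2002 model (global vision, no movement or jailing, illustrative parameter values): grievance G = H(1 − L), and an agent turns active when G − R·P exceeds a small threshold, with estimated arrest probability P driven by the cop-to-active ratio.

```python
import numpy as np

rng = np.random.default_rng(3)
n_agents, n_cops, k, threshold = 1000, 40, 2.3, 0.1
H = rng.random(n_agents)          # hardship, uniform on [0, 1]
R = rng.random(n_agents)          # risk aversion
L = 0.6                           # government legitimacy

active = np.zeros(n_agents, bool)
for step in range(50):
    # estimated arrest probability from the cop/active ratio
    P = 1 - np.exp(-k * n_cops / max(active.sum(), 1))
    G = H * (1 - L)               # grievance
    active = G - R * P > threshold
print(f"active agents with L={L}: {active.sum()}")
```

Iterating the rule lets rebellion feed on itself: each new activist dilutes the perceived arrest risk, recruiting further agents until a fixed point is reached; raising L shrinks that fixed point.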

  15. Real external predictivity of QSAR models: how to evaluate it? Comparison of different validation criteria and proposal of using the concordance correlation coefficient.

    Science.gov (United States)

    Chirico, Nicola; Gramatica, Paola

    2011-09-26

    The main utility of QSAR models is their ability to predict activities/properties for new chemicals, and this external prediction ability is evaluated by means of various validation criteria. As a measure for such evaluation the OECD guidelines have proposed the predictive squared correlation coefficient Q^2_F1 (Shi et al.). However, other validation criteria have been proposed by other authors: the Golbraikh-Tropsha method, r^2_m (Roy), Q^2_F2 (Schüürmann et al.), Q^2_F3 (Consonni et al.). In QSAR studies these measures are usually in accordance, though this is not always the case, so doubts can arise when contradictory results are obtained. It is likely that none of the aforementioned criteria is the best in every situation, so a comparative study using simulated data sets is proposed here, using threshold values suggested by the proponents or those widely used in QSAR modeling. In addition, a different and simple external validation measure, the concordance correlation coefficient (CCC), is proposed and compared with the other criteria. Huge data sets were used to study the general behavior of validation measures, and the concordance correlation coefficient was shown to be the most restrictive. On using simulated data sets of a more realistic size, it was found that CCC was broadly in agreement, about 96% of the time, with the other validation measures in accepting models as predictive, and in almost all the examples it was the most precautionary. The proposed concordance correlation coefficient also works well on real data sets, where it seems to be more stable, and helps in making decisions when the validation measures are in conflict. Since it is conceptually simple, and given its stability and restrictiveness, we propose the concordance correlation coefficient as a complementary, or alternative, more prudent measure of a QSAR model's external predictivity.
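    Lin's concordance correlation coefficient is straightforward to compute, which is part of its appeal as a validation measure. A minimal sketch:

```python
import numpy as np

def concordance_ccc(x, y):
    """Lin's concordance correlation coefficient between observed and
    predicted values: 2*cov(x, y) / (var(x) + var(y) + (mean(x) - mean(y))^2).
    Equals 1 only for perfect agreement, and penalizes both location and
    scale shifts, unlike Pearson's r."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    sxy = np.mean((x - x.mean()) * (y - y.mean()))
    return 2 * sxy / (x.var() + y.var() + (x.mean() - y.mean()) ** 2)

obs = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
print(concordance_ccc(obs, obs))          # perfect agreement -> 1.0
print(concordance_ccc(obs, obs + 1.0))    # same shape, shifted -> 0.8
```

The shifted example shows why CCC is the more restrictive measure: Pearson's r would still be exactly 1 there.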

  16. Linear models for airborne-laser-scanning-based operational forest inventory with small field sample size and highly correlated LiDAR data

    Science.gov (United States)

    Junttila, Virpi; Kauranne, Tuomo; Finley, Andrew O.; Bradford, John B.

    2015-01-01

    Modern operational forest inventory often uses remotely sensed data that cover the whole inventory area to produce spatially explicit estimates of forest properties through statistical models. The data obtained by airborne light detection and ranging (LiDAR) correlate well with many forest inventory variables, such as the tree height, the timber volume, and the biomass. To construct an accurate model over thousands of hectares, LiDAR data must be supplemented with several hundred field sample measurements of forest inventory variables. This can be costly and time consuming. Different LiDAR-data-based and spatial-data-based sampling designs can reduce the number of field sample plots needed. However, problems arising from the features of the LiDAR data, such as a large number of predictors compared with the sample size (overfitting) or a strong correlation among predictors (multicollinearity), may decrease the accuracy and precision of the estimates and predictions. To overcome these problems, a Bayesian linear model with the singular value decomposition of predictors, combined with regularization, is proposed. The model performance in predicting different forest inventory variables is verified in ten inventory areas from two continents, where the number of field sample plots is reduced using different sampling designs. The results show that, with an appropriate field plot selection strategy and the proposed linear model, the total relative error of the predicted forest inventory variables is only 5%–15% larger using 50 field sample plots than the error of a linear model estimated with several hundred field sample plots when we sum up the error due to both the model noise variance and the model’s lack of fit.
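    The combination of a singular value decomposition of the predictors with regularization can be sketched as plain ridge regression computed through the SVD (a simplification of the paper's Bayesian model; the data and parameters below are illustrative stand-ins for correlated LiDAR metrics):

```python
import numpy as np

def ridge_svd(X, y, lam):
    """Ridge solution via the SVD of the predictor matrix: with X = U S V^T,
    beta = V diag(s / (s^2 + lam)) U^T y. Numerically stable even when
    predictors are highly correlated (multicollinearity) or when the
    sample is small relative to the number of predictors."""
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    d = s / (s ** 2 + lam)
    return Vt.T @ (d * (U.T @ y))

# small-sample, highly correlated predictors: 40 metrics driven by 5 factors
rng = np.random.default_rng(5)
n, p = 50, 40
base = rng.standard_normal((n, 5))
X = base @ rng.standard_normal((5, p)) + 0.01 * rng.standard_normal((n, p))
beta_true = rng.standard_normal(p)
y = X @ beta_true + 0.1 * rng.standard_normal(n)

beta_ols = ridge_svd(X, y, lam=1e-12)    # nearly unregularized: unstable
beta_ridge = ridge_svd(X, y, lam=1.0)
print(np.linalg.norm(beta_ols), np.linalg.norm(beta_ridge))
```

The near-OLS solution blows up along the tiny singular directions created by multicollinearity, while the regularized solution stays bounded and still fits the data, which is the behavior the paper relies on when field sample plots are scarce.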

  17. Examination of hydrogen-bonding interactions between dissolved solutes and alkylbenzene solvents based on Abraham model correlations derived from measured enthalpies of solvation

    Energy Technology Data Exchange (ETDEWEB)

    Varfolomeev, Mikhail A.; Rakipov, Ilnaz T. [Chemical Institute, Kazan Federal University, Kremlevskaya 18, Kazan 420008 (Russian Federation); Acree, William E., E-mail: acree@unt.edu [Department of Chemistry, 1155 Union Circle # 305070, University of North Texas, Denton, TX 76203-5017 (United States); Brumfield, Michela [Department of Chemistry, 1155 Union Circle # 305070, University of North Texas, Denton, TX 76203-5017 (United States); Abraham, Michael H. [Department of Chemistry, University College London, 20 Gordon Street, London WC1H 0AJ (United Kingdom)

    2014-10-20

    Highlights: • Enthalpies of solution measured for 48 solutes dissolved in mesitylene. • Enthalpies of solution measured for 81 solutes dissolved in p-xylene. • Abraham model correlations derived for enthalpies of solvation of solutes in mesitylene. • Abraham model correlations derived for enthalpies of solvation of solutes in p-xylene. • Hydrogen-bonding enthalpies reported for interactions of aromatic hydrocarbons with hydrogen-bond acidic solutes. - Abstract: Enthalpies of solution at infinite dilution of 48 organic solutes in mesitylene and 81 organic solutes in p-xylene were measured using an isothermal solution calorimeter. Enthalpies of solvation for 92 organic vapors and gaseous solutes in mesitylene and for 130 gaseous compounds in p-xylene were determined from the experimental and literature data. Abraham model correlations were determined from the experimental enthalpy of solvation data. The derived correlations describe the experimental gas-to-mesitylene and gas-to-p-xylene solvation enthalpies to within average standard deviations of 1.87 kJ·mol⁻¹ and 2.08 kJ·mol⁻¹, respectively. Enthalpies of X–H⋯π (X = O, N, C) hydrogen bond formation of proton donor solutes (alcohols, amines, chlorinated hydrocarbons, etc.) with mesitylene and p-xylene were calculated based on the Abraham solvation equation. The obtained values are in good agreement with the results determined using conventional methods.

  18. Energy based prediction models for building acoustics

    DEFF Research Database (Denmark)

    Brunskog, Jonas

    2012-01-01

    In order to reach robust and simplified yet accurate prediction models, energy based principles are commonly used in many fields of acoustics, especially in building acoustics. This includes simple energy flow models, the framework of statistical energy analysis (SEA), as well as more elaborate principles such as wave intensity analysis (WIA). The European standards for building acoustic predictions, the EN 12354 series, are based on energy flow and SEA principles. In the present paper, different energy based prediction models are discussed and critically reviewed. Special attention is placed on underlying basic assumptions, such as diffuse fields, high modal overlap, resonant field being dominant, etc., and the consequences of these in terms of limitations in the theory and in the practical use of the models.
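    A minimal sketch of the energy-flow bookkeeping behind SEA: in steady state the injected power balances the power lost to damping and exchanged through couplings, giving a small linear system for the subsystem energies. The two-subsystem case below uses illustrative loss factors, not values from any standard:

```python
import numpy as np

omega = 2 * np.pi * 1000.0        # band centre frequency [rad/s]
eta1, eta2 = 0.01, 0.02           # damping loss factors
eta12, eta21 = 0.005, 0.003       # coupling loss factors (1->2, 2->1)
P_in = np.array([1.0, 0.0])       # 1 W injected into subsystem 1 only

# steady-state power balance:  P_in = omega * C * E
C = np.array([[eta1 + eta12, -eta21],
              [-eta12,        eta2 + eta21]])
E = np.linalg.solve(omega * C, P_in)   # subsystem energies [J]
print(np.round(E, 6))
```

Only the driven subsystem's energy exceeds that of its neighbour; the ratio is set entirely by the loss factors, which is why the EN 12354 predictions hinge on how well those factors (and the diffuse-field assumptions behind them) hold.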

  19. Multi-Domain Modeling Based on Modelica

    Directory of Open Access Journals (Sweden)

    Liu Jun

    2016-01-01

    Full Text Available With the application of simulation technology to large-scale, multi-domain problems, unified multi-domain modeling has become an effective way to solve them. This paper introduces several basic methods and advantages of multidisciplinary modeling, and focuses on simulation based on the Modelica language. Modelica/MWorks is a newly developed simulation platform featuring an object-oriented, non-causal language for modeling large, multi-domain systems, which makes models easier to grasp, develop and maintain. This article demonstrates a single-degree-of-freedom mechanical vibration system built in MWorks using Modelica's connection mechanism. This multi-domain modeling approach is simple and feasible, offers high reusability, stays closer to the physical system, and has many other advantages.

  20. Modeling and Controlling Flow Transient in Pipeline Systems: Applied for Reservoir and Pump Systems Combined with Simple Surge Tank

    Directory of Open Access Journals (Sweden)

    Itissam ABUIZIAH

    2014-03-01

    Full Text Available When transient conditions (water hammer) exist, the life expectancy of the system can be adversely impacted, resulting in pump and valve failures and catastrophic pipe rupture. Hence, transient control has become an essential requirement for ensuring the safe operation of water pipeline systems. To protect pipeline systems from transient effects, an accurate analysis and suitable protection devices should be used. This paper presents the problem of modeling and simulation of transient phenomena in hydraulic systems based on the method of characteristics. It also shows the influence of using protection devices to control the adverse effects of the excessive and low pressures occurring during transients. We applied this model to two main pipeline systems: a valve, and a pump combined with a simple surge tank, connected to a reservoir. The results obtained using this model indicate that it is an efficient tool for water hammer analysis. Moreover, using a simple surge tank reduces the unfavorable effects of transients by reducing pressure fluctuations.
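    For a sense of the magnitudes that make such protection devices necessary, the classic Joukowsky relation gives the pressure surge caused by an instantaneous velocity change, ΔP = ρ·a·ΔV (a single-formula order-of-magnitude check; the paper itself uses the full method of characteristics, and the values below are illustrative):

```python
rho = 1000.0      # water density [kg/m^3]
a = 1200.0        # pressure-wave speed in the pipe [m/s]
dV = 2.0          # instantaneous velocity change, e.g. rapid valve closure [m/s]

dP = rho * a * dV                             # surge amplitude [Pa]
print(f"Joukowsky surge: {dP / 1e5:.0f} bar")  # prints "Joukowsky surge: 24 bar"
```

A sudden 2 m/s velocity change thus produces a surge of roughly 24 bar on top of the operating pressure, which is why surge tanks and other devices that slow the velocity change are so effective.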