WorldWideScience

Sample records for network model consisting

  1. Consistent initial conditions for the Saint-Venant equations in river network modeling

    Directory of Open Access Journals (Sweden)

    C.-W. Yu

    2017-09-01

Full Text Available Initial conditions for flows and depths (cross-sectional areas) throughout a river network are required for any time-marching (unsteady) solution of the one-dimensional (1-D) hydrodynamic Saint-Venant equations. For a river network modeled with several Strahler orders of tributaries, comprehensive and consistent synoptic data are typically lacking and synthetic starting conditions are needed. Because of underlying nonlinearity, poorly defined or inconsistent initial conditions can lead to convergence problems and long spin-up times in an unsteady solver. Two new approaches are defined and demonstrated herein for computing flows and cross-sectional areas (or depths). These methods can produce an initial condition data set that is consistent with modeled landscape runoff and river geometry boundary conditions at the initial time. These new methods are (1) the pseudo time-marching method (PTM) that iterates toward a steady-state initial condition using an unsteady Saint-Venant solver and (2) the steady-solution method (SSM) that makes use of graph theory for initial flow rates and solution of a steady-state 1-D momentum equation for the channel cross-sectional areas. The PTM is shown to be adequate for short river reaches but is significantly slower and has occasional non-convergent behavior for large river networks. The SSM approach is shown to provide a rapid solution of consistent initial conditions for both small and large networks, albeit with the requirement that additional code must be written rather than applying an existing unsteady Saint-Venant solver.
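The SSM's graph-theory step for initial flow rates amounts to accumulating landscape runoff downstream through the network in topological order: each reach's initial flow is its own lateral inflow plus the flow of every reach draining into it. A minimal sketch (function and variable names are hypothetical, not from the paper):

```python
from collections import defaultdict, deque

def steady_initial_flows(downstream, lateral_inflow):
    """Accumulate runoff through a river network (a DAG) in topological
    order. `downstream` maps each reach id to the reach it drains into
    (None at the outlet); `lateral_inflow` gives each reach's local runoff."""
    indegree = defaultdict(int)
    for reach, dest in downstream.items():
        if dest is not None:
            indegree[dest] += 1
    flow = dict(lateral_inflow)
    ready = deque(r for r in downstream if indegree[r] == 0)
    while ready:
        reach = ready.popleft()
        dest = downstream[reach]
        if dest is not None:
            flow[dest] += flow[reach]
            indegree[dest] -= 1
            if indegree[dest] == 0:
                ready.append(dest)
    return flow

# Toy 3-reach network: two headwaters joining into one outlet reach.
net = {"A": "C", "B": "C", "C": None}
runoff = {"A": 2.0, "B": 3.0, "C": 1.0}
print(steady_initial_flows(net, runoff))  # C carries 2 + 3 + 1 = 6
```

The SSM would then solve the steady 1-D momentum equation reach by reach for the cross-sectional areas, which is omitted here.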

  2. Consistent robustness analysis (CRA) identifies biologically relevant properties of regulatory network models.

    Science.gov (United States)

    Saithong, Treenut; Painter, Kevin J; Millar, Andrew J

    2010-12-16

    A number of studies have previously demonstrated that "goodness of fit" is insufficient in reliably classifying the credibility of a biological model. Robustness and/or sensitivity analysis is commonly employed as a secondary method for evaluating the suitability of a particular model. The results of such analyses invariably depend on the particular parameter set tested, yet many parameter values for biological models are uncertain. Here, we propose a novel robustness analysis that aims to determine the "common robustness" of the model with multiple, biologically plausible parameter sets, rather than the local robustness for a particular parameter set. Our method is applied to two published models of the Arabidopsis circadian clock (the one-loop [1] and two-loop [2] models). The results reinforce current findings suggesting the greater reliability of the two-loop model and pinpoint the crucial role of TOC1 in the circadian network. Consistent Robustness Analysis can indicate both the relative plausibility of different models and also the critical components and processes controlling each model.

  3. Consistence of Network Filtering Rules

    Institute of Scientific and Technical Information of China (English)

    SHE Kun; WU Yuancheng; HUANG Juncai; ZHOU Mingtian

    2004-01-01

The inconsistency of firewall/VPN (Virtual Private Network) rules incurs a huge maintenance cost. With the growth of multinational companies, SOHO offices, and e-government, the number of firewalls/VPNs will increase rapidly, and rule tables on stand-alone devices or across the network will grow in geometric progression accordingly. Checking the consistency of rule tables manually is inadequate. A formal approach can define semantic consistency and lay a theoretical foundation for the intelligent management of rule tables. In this paper, a formalization of host and network rules for automatic rule validation, based on set theory, is proposed, and a rule-validation scheme is defined. The analysis results show the superior performance of the methods and demonstrate their potential for intelligent management based on rule tables.
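As a minimal illustration of set-based rule validation (a simplification, not the authors' actual formalism): a rule whose match set is contained in an earlier rule's match set can never fire, which is one of the inconsistencies such a scheme must flag. Rule layouts and numbers below are hypothetical:

```python
def covers(a, b):
    """True if rule a's match set contains rule b's (per-field interval containment)."""
    return all(a_lo <= b_lo and b_hi <= a_hi
               for (a_lo, a_hi), (b_lo, b_hi) in zip(a, b))

def find_shadowed(rules):
    """A rule is shadowed (never matched) if some earlier rule covers it.
    Each rule: (match_fields, action), match_fields a tuple of (lo, hi) ranges."""
    shadowed = []
    for j, (mj, _) in enumerate(rules):
        if any(covers(mi, mj) for mi, _ in rules[:j]):
            shadowed.append(j)
    return shadowed

rules = [
    (((0, 255), (80, 80)), "deny"),    # any source, port 80
    (((10, 20), (80, 80)), "allow"),   # subset of rule 0: shadowed, inconsistent
    (((0, 255), (22, 22)), "allow"),
]
print(find_shadowed(rules))  # [1]
```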

  4. Consistent model driven architecture

    Science.gov (United States)

    Niepostyn, Stanisław J.

    2015-09-01

The goal of MDA is to produce software systems from abstract models in a way where human interaction is restricted to a minimum. These abstract models are based on the UML language; however, the semantics of UML models are defined in a natural language. Consequently, verification of the consistency of these diagrams is needed in order to identify errors in requirements at an early stage of the development process. The verification of consistency is difficult due to the semi-formal nature of UML diagrams. We propose automatic verification of consistency of the series of UML diagrams originating from abstract models, implemented with our consistency rules. This Consistent Model Driven Architecture approach enables us to automatically generate complete workflow applications from consistent and complete models developed from abstract models (e.g. a Business Context Diagram). Therefore, our method can be used to check the practicability (feasibility) of software architecture models.

  5. Consistency of Network Traffic Repositories: An Overview

    NARCIS (Netherlands)

    Lastdrager, E.; Lastdrager, E.E.H.; Pras, Aiko

    2009-01-01

Traffic repositories with TCP/IP header information are very important for network analysis. Researchers often assume that such repositories reliably represent all traffic that has been flowing over the network; little thought is given to the consistency of these repositories. Still, for

  6. Consistency analysis of network traffic repositories

    NARCIS (Netherlands)

    Lastdrager, Elmer; Lastdrager, E.E.H.; Pras, Aiko

Traffic repositories with TCP/IP header information are very important for network analysis. Researchers often assume that such repositories reliably represent all traffic that has been flowing over the network; little thought is given to the consistency of these repositories. Still, for

  7. The elastic network model reveals a consistent picture on intrinsic functional dynamics of type II restriction endonucleases

    International Nuclear Information System (INIS)

    Uyar, A; Kurkcuoglu, O; Doruker, P; Nilsson, L

    2011-01-01

    The vibrational dynamics of various type II restriction endonucleases, in complex with cognate/non-cognate DNA and in the apo form, are investigated with the elastic network model in order to reveal common functional mechanisms in this enzyme family. Scissor-like and tong-like motions observed in the slowest modes of all enzymes and their complexes point to common DNA recognition and cleavage mechanisms. Normal mode analysis further points out that the scissor-like motion has an important role in differentiating between cognate and non-cognate sequences at the recognition site, thus implying its catalytic relevance. Flexible regions observed around the DNA-binding site of the enzyme usually concentrate on the highly conserved β-strands, especially after DNA binding. These β-strands may have a structurally stabilizing role in functional dynamics for target site recognition and cleavage. In addition, hot spot residues based on high-frequency modes reveal possible communication pathways between the two distant cleavage sites in the enzyme family. Some of these hot spots also exist on the shortest path between the catalytic sites and are highly conserved

  8. Consistent ranking of volatility models

    DEFF Research Database (Denmark)

    Hansen, Peter Reinhard; Lunde, Asger

    2006-01-01

    We show that the empirical ranking of volatility models can be inconsistent for the true ranking if the evaluation is based on a proxy for the population measure of volatility. For example, the substitution of a squared return for the conditional variance in the evaluation of ARCH-type models can...... variance in out-of-sample evaluations rather than the squared return. We derive the theoretical results in a general framework that is not specific to the comparison of volatility models. Similar problems can arise in comparisons of forecasting models whenever the predicted variable is a latent variable....

  9. Consistently Trained Artificial Neural Network for Automatic Ship Berthing Control

    Directory of Open Access Journals (Sweden)

    Y.A. Ahmed

    2015-09-01

Full Text Available In this paper, a consistently trained Artificial Neural Network controller for automatic ship berthing is discussed. A minimum-time course-changing manoeuvre is utilised to ensure such consistency, and a new concept named ‘virtual window’ is introduced. The consistent teaching data are then used to train two separate multi-layered feed-forward neural networks for command rudder and propeller revolution output. After proper training, several known and unknown conditions are tested to judge the effectiveness of the proposed controller using Monte Carlo simulations. After achieving acceptable success rates, the trained networks are implemented in the free-running experiment system to judge the networks’ real-time response for the Esso Osaka 3-m model ship. The networks’ behaviour during these experiments is also investigated for possible effects of initial conditions as well as wind disturbances. Moreover, since the final goal point of the proposed controller is set at some distance from the actual pier to ensure safety, a study on automatic tug assistance is also discussed for the final alignment of the ship with the actual pier.

  10. Quantifying sources of elemental carbon over the Guanzhong Basin of China: A consistent network of measurements and WRF-Chem modeling

    International Nuclear Information System (INIS)

    Li, Nan; He, Qingyang; Tie, Xuexi; Cao, Junji; Liu, Suixin; Wang, Qiyuan; Li, Guohui; Huang, Rujin; Zhang, Qiang

    2016-01-01

We conducted a year-long WRF-Chem (Weather Research and Forecasting Chemical) model simulation of elemental carbon (EC) aerosol and compared the modeling results to the surface EC measurements in the Guanzhong (GZ) Basin of China. The main goals of this study were to quantify the individual contributions of different EC sources to EC pollution, and to find the major cause of the EC pollution in this region. The EC measurements were simultaneously conducted at 10 urban, rural, and background sites over the GZ Basin from May 2013 to April 2014, and provided a good basis against which to evaluate the model simulation. The model evaluation showed that the calculated annual mean EC concentration was 5.1 μgC m⁻³, which was consistent with the observed value of 5.3 μgC m⁻³. Moreover, the model result also reproduced the magnitude of measured EC in all seasons (regression slope = 0.98–1.03), as well as the spatial and temporal variations (r = 0.55–0.78). We conducted several sensitivity studies to quantify the individual contributions of EC sources to EC pollution. The sensitivity simulations showed that the local and outside sources contributed about 60% and 40% to the annual mean EC concentration, respectively, implying that local sources were the major EC pollution contributors in the GZ Basin. Among the local sources, residential sources contributed the most, followed by industry and transportation sources. A further analysis suggested that a 50% reduction of industry or transportation emissions only caused a 6% decrease in the annual mean EC concentration, while a 50% reduction of residential emissions reduced the winter surface EC concentration by up to 25%. With respect to the serious air pollution problems (including EC pollution) in the GZ Basin, our findings can provide useful insight for local air pollution control strategies. - Highlights: • A yearlong WRF-Chem simulation is conducted to identify sources of the EC pollution. • A network of
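The bookkeeping behind such brute-force sensitivity runs is simple: assuming the response scales roughly linearly with the emission rate, a source's fractional contribution can be backed out of a perturbed simulation. A sketch with hypothetical numbers (not the paper's values):

```python
def source_contribution(c_base, c_perturbed, reduction=0.5):
    """Estimate a source's fractional contribution from a sensitivity run
    in which its emissions were cut by `reduction`, assuming the modeled
    concentration responds roughly linearly to the emission rate."""
    return (c_base - c_perturbed) / (c_base * reduction)

# Hypothetical numbers: a 50% residential-emission cut lowers winter EC
# from 8.0 to 6.0 ugC/m3, implying residential sources contribute ~50%.
print(source_contribution(8.0, 6.0))  # 0.5
```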

  11. Decentralized Consistent Network Updates in SDN with ez-Segway

    KAUST Repository

    Nguyen, Thanh Dang; Chiesa, Marco; Canini, Marco

    2017-01-01

    We present ez-Segway, a decentralized mechanism to consistently and quickly update the network state while preventing forwarding anomalies (loops and black-holes) and avoiding link congestion. In our design, the centralized SDN controller only pre-computes

  12. Context-specific metabolic networks are consistent with experiments.

    Directory of Open Access Journals (Sweden)

    Scott A Becker

    2008-05-01

    Full Text Available Reconstructions of cellular metabolism are publicly available for a variety of different microorganisms and some mammalian genomes. To date, these reconstructions are "genome-scale" and strive to include all reactions implied by the genome annotation, as well as those with direct experimental evidence. Clearly, many of the reactions in a genome-scale reconstruction will not be active under particular conditions or in a particular cell type. Methods to tailor these comprehensive genome-scale reconstructions into context-specific networks will aid predictive in silico modeling for a particular situation. We present a method called Gene Inactivity Moderated by Metabolism and Expression (GIMME to achieve this goal. The GIMME algorithm uses quantitative gene expression data and one or more presupposed metabolic objectives to produce the context-specific reconstruction that is most consistent with the available data. Furthermore, the algorithm provides a quantitative inconsistency score indicating how consistent a set of gene expression data is with a particular metabolic objective. We show that this algorithm produces results consistent with biological experiments and intuition for adaptive evolution of bacteria, rational design of metabolic engineering strains, and human skeletal muscle cells. This work represents progress towards producing constraint-based models of metabolism that are specific to the conditions where the expression profiling data is available.
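The core of the inconsistency score can be sketched as follows: reactions whose expression falls below a threshold are penalized in proportion to how far below they are and how much flux they must carry to satisfy the metabolic objective. This is a simplified stand-in for the published GIMME algorithm (which solves an optimization over the full network); all reaction names and numbers are hypothetical:

```python
def gimme_inconsistency(flux, expression, threshold):
    """GIMME-style inconsistency score: sum over below-threshold reactions
    of (threshold - expression) weighted by the flux each must carry."""
    return sum((threshold - expression[r]) * abs(v)
               for r, v in flux.items()
               if expression.get(r, threshold) < threshold)

# Hypothetical fluxes (required to meet a growth objective) and expression data.
flux = {"hex1": 10.0, "pfk": 10.0, "ldh": 4.0}
expr = {"hex1": 12.0, "pfk": 9.0, "ldh": 2.0}
print(gimme_inconsistency(flux, expr, threshold=8.0))  # only ldh: (8-2)*4 = 24
```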

  13. Modeling and Testing Legacy Data Consistency Requirements

    DEFF Research Database (Denmark)

    Nytun, J. P.; Jensen, Christian Søndergaard

    2003-01-01

    An increasing number of data sources are available on the Internet, many of which offer semantically overlapping data, but based on different schemas, or models. While it is often of interest to integrate such data sources, the lack of consistency among them makes this integration difficult....... This paper addresses the need for new techniques that enable the modeling and consistency checking for legacy data sources. Specifically, the paper contributes to the development of a framework that enables consistency testing of data coming from different types of data sources. The vehicle is UML and its...... accompanying XMI. The paper presents techniques for modeling consistency requirements using OCL and other UML modeling elements: it studies how models that describe the required consistencies among instances of legacy models can be designed in standard UML tools that support XMI. The paper also considers...

  14. Consistency of the MLE under mixture models

    OpenAIRE

    Chen, Jiahua

    2016-01-01

    The large-sample properties of likelihood-based statistical inference under mixture models have received much attention from statisticians. Although the consistency of the nonparametric MLE is regarded as a standard conclusion, many researchers ignore the precise conditions required on the mixture model. An incorrect claim of consistency can lead to false conclusions even if the mixture model under investigation seems well behaved. Under a finite normal mixture model, for instance, the consis...

  15. Consistent spectroscopy for an extended gauge model

    International Nuclear Information System (INIS)

    Oliveira Neto, G. de.

    1990-11-01

Consistent spectroscopy was obtained with a Lagrangian constructed from vector fields with an extended U(1) group symmetry. By consistent spectroscopy is understood the determination of the quantum physical properties described by the model in a manner independent of the possible parametrizations adopted in their description. (L.C.J.A.)

  16. CONSISTENCY UNDER SAMPLING OF EXPONENTIAL RANDOM GRAPH MODELS.

    Science.gov (United States)

    Shalizi, Cosma Rohilla; Rinaldo, Alessandro

    2013-04-01

The growing availability of network data and of scientific interest in distributed systems has led to the rapid development of statistical models of network structure. Typically, however, these are models for the entire network, while the data consist only of a sampled sub-network. Parameters for the whole network, which is what is of interest, are estimated by applying the model to the sub-network. This assumes that the model is consistent under sampling, or, in terms of the theory of stochastic processes, that it defines a projective family. Focusing on the popular class of exponential random graph models (ERGMs), we show that this apparently trivial condition is in fact violated by many popular and scientifically appealing models, and that satisfying it drastically limits ERGMs' expressive power. These results are actually special cases of more general results about exponential families of dependent random variables, which we also prove. Using such results, we offer easily checked conditions for the consistency of maximum likelihood estimation in ERGMs, and discuss some possible constructive responses.
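For contrast, the one classical case that is consistent under sampling is the pure edge-count ERGM, i.e. the Erdős–Rényi model: the subgraph induced by a sample of nodes follows the same model with the same edge probability, so a density estimate from a sub-network is a sound estimate of the whole-network parameter. A simulation sketch (not from the paper; models with dependence terms such as triangles lack this property):

```python
import random

def density(adj, nodes):
    """Edge density of the subgraph induced by `nodes`."""
    nodes = list(nodes)
    pairs = [(u, v) for i, u in enumerate(nodes) for v in nodes[i + 1:]]
    return sum(adj[u][v] for u, v in pairs) / len(pairs)

# Simulate the edge-only ERGM (Erdos-Renyi) on 400 nodes with p = 0.3.
random.seed(0)
n, p = 400, 0.3
adj = [[0] * n for _ in range(n)]
for u in range(n):
    for v in range(u + 1, n):
        adj[u][v] = adj[v][u] = int(random.random() < p)

whole = density(adj, range(n))
sample = density(adj, range(100))          # first 100 nodes as the sampled sub-network
print(round(whole, 2), round(sample, 2))   # both close to 0.3
```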

  17. Structural covariance networks across healthy young adults and their consistency.

    Science.gov (United States)

    Guo, Xiaojuan; Wang, Yan; Guo, Taomei; Chen, Kewei; Zhang, Jiacai; Li, Ke; Jin, Zhen; Yao, Li

    2015-08-01

    To investigate structural covariance networks (SCNs) as measured by regional gray matter volumes with structural magnetic resonance imaging (MRI) from healthy young adults, and to examine their consistency and stability. Two independent cohorts were included in this study: Group 1 (82 healthy subjects aged 18-28 years) and Group 2 (109 healthy subjects aged 20-28 years). Structural MRI data were acquired at 3.0T and 1.5T using a magnetization prepared rapid-acquisition gradient echo sequence for these two groups, respectively. We applied independent component analysis (ICA) to construct SCNs and further applied the spatial overlap ratio and correlation coefficient to evaluate the spatial consistency of the SCNs between these two datasets. Seven and six independent components were identified for Group 1 and Group 2, respectively. Moreover, six SCNs including the posterior default mode network, the visual and auditory networks consistently existed across the two datasets. The overlap ratios and correlation coefficients of the visual network reached the maximums of 72% and 0.71. This study demonstrates the existence of consistent SCNs corresponding to general functional networks. These structural covariance findings may provide insight into the underlying organizational principles of brain anatomy. © 2014 Wiley Periodicals, Inc.
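The two consistency metrics used here are easy to state: a spatial overlap ratio between the binarized component maps and a Pearson correlation between the raw maps. A sketch with hypothetical voxel weights (the paper's maps are 3-D images; flat lists suffice to illustrate):

```python
def overlap_ratio(mask_a, mask_b):
    """Spatial overlap of two binarized network maps: |A ∩ B| / |A ∪ B|."""
    inter = sum(a and b for a, b in zip(mask_a, mask_b))
    union = sum(a or b for a, b in zip(mask_a, mask_b))
    return inter / union

def correlation(x, y):
    """Pearson correlation between two (flattened) spatial maps."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (sx * sy)

# Hypothetical voxel weights of one SCN component from each cohort.
map1 = [0.9, 0.8, 0.1, 0.0, 0.7, 0.1]
map2 = [0.8, 0.4, 0.2, 0.6, 0.6, 0.0]
mask1 = [v > 0.5 for v in map1]
mask2 = [v > 0.5 for v in map2]
print(round(overlap_ratio(mask1, mask2), 2), round(correlation(map1, map2), 2))
```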

  18. Consistent Stochastic Modelling of Meteocean Design Parameters

    DEFF Research Database (Denmark)

    Sørensen, John Dalsgaard; Sterndorff, M. J.

    2000-01-01

    Consistent stochastic models of metocean design parameters and their directional dependencies are essential for reliability assessment of offshore structures. In this paper a stochastic model for the annual maximum values of the significant wave height, and the associated wind velocity, current...

  19. Consistent Estimation of Partition Markov Models

    Directory of Open Access Journals (Sweden)

    Jesús E. García

    2017-04-01

Full Text Available The Partition Markov Model characterizes the process by a partition L of the state space, where the elements in each part of L share the same transition probability to an arbitrary element in the alphabet. This model aims to answer the following questions: what is the minimal number of parameters needed to specify a Markov chain, and how can these parameters be estimated? In order to answer these questions, we build a consistent strategy for model selection which consists of the following: given a size-n realization of the process, find a model within the Partition Markov class with a minimal number of parts to represent the process law. From the strategy, we derive a measure that establishes a metric in the state space. In addition, we show that if the law of the process is Markovian, then, eventually, as n goes to infinity, L will be retrieved. We show an application to modeling internet navigation patterns.
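A simplified sketch of the idea (not the paper's estimator): states whose empirical transition rows are close under a total-variation metric can be grouped into one part, so each part shares a single set of transition parameters. The names and tolerance below are hypothetical:

```python
def empirical_rows(seq, alphabet):
    """Empirical transition distribution out of each state."""
    counts = {s: {t: 0 for t in alphabet} for s in alphabet}
    for s, t in zip(seq, seq[1:]):
        counts[s][t] += 1
    rows = {}
    for s, row in counts.items():
        total = sum(row.values()) or 1
        rows[s] = {t: c / total for t, c in row.items()}
    return rows

def partition_states(rows, tol=0.1):
    """Greedily group states whose transition rows are within `tol` in
    total-variation distance; each part shares one parameter set."""
    parts = []
    for s, row in rows.items():
        for part in parts:
            rep = rows[part[0]]
            tv = 0.5 * sum(abs(row[t] - rep[t]) for t in row)
            if tv <= tol:
                part.append(s)
                break
        else:
            parts.append([s])
    return parts

# 'a' and 'b' always move to 'c', so they end up in the same part.
print(partition_states(empirical_rows("acbcacbcacbc", "abc")))  # [['a', 'b'], ['c']]
```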

  20. Financial model calibration using consistency hints.

    Science.gov (United States)

    Abu-Mostafa, Y S

    2001-01-01

    We introduce a technique for forcing the calibration of a financial model to produce valid parameters. The technique is based on learning from hints. It converts simple curve fitting into genuine calibration, where broad conclusions can be inferred from parameter values. The technique augments the error function of curve fitting with consistency hint error functions based on the Kullback-Leibler distance. We introduce an efficient EM-type optimization algorithm tailored to this technique. We also introduce other consistency hints, and balance their weights using canonical errors. We calibrate the correlated multifactor Vasicek model of interest rates, and apply it successfully to Japanese Yen swaps market and US dollar yield market.
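The augmentation itself is compact: the curve-fitting error is extended with a Kullback-Leibler penalty measuring how far the model-implied distribution strays from a plausible target, so parameters that fit the curve but imply an implausible distribution are rejected. A toy one-parameter sketch (the model, distributions, and weight are hypothetical, not the paper's Vasicek setup):

```python
import math

def kl(p, q):
    """Kullback-Leibler distance between two discrete distributions."""
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q) if pi > 0)

def augmented_loss(theta, fit_error, model_dist, target_dist, weight=1.0):
    """Curve-fitting error plus a consistency-hint penalty."""
    return fit_error(theta) + weight * kl(target_dist, model_dist(theta))

fit = lambda th: (th - 2.0) ** 2                  # pure curve-fit optimum at 2.0
dist = lambda th: [th / (th + 1), 1 / (th + 1)]   # model-implied probabilities
target = [0.7, 0.3]                               # hint optimum at th = 7/3

best = min((augmented_loss(th / 100, fit, dist, target), th / 100)
           for th in range(100, 300))[1]
print(best)  # 2.01 — nudged above the pure-fit optimum 2.0 toward the hint optimum
```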

  1. Self-consistent asset pricing models

    Science.gov (United States)

    Malevergne, Y.; Sornette, D.

    2007-08-01

    We discuss the foundations of factor or regression models in the light of the self-consistency condition that the market portfolio (and more generally the risk factors) is (are) constituted of the assets whose returns it is (they are) supposed to explain. As already reported in several articles, self-consistency implies correlations between the return disturbances. As a consequence, the alphas and betas of the factor model are unobservable. Self-consistency leads to renormalized betas with zero effective alphas, which are observable with standard OLS regressions. When the conditions derived from internal consistency are not met, the model is necessarily incomplete, which means that some sources of risk cannot be replicated (or hedged) by a portfolio of stocks traded on the market, even for infinite economies. Analytical derivations and numerical simulations show that, for arbitrary choices of the proxy which are different from the true market portfolio, a modified linear regression holds with a non-zero value αi at the origin between an asset i's return and the proxy's return. Self-consistency also introduces “orthogonality” and “normality” conditions linking the betas, alphas (as well as the residuals) and the weights of the proxy portfolio. Two diagnostics based on these orthogonality and normality conditions are implemented on a basket of 323 assets which have been components of the S&P500 in the period from January 1990 to February 2005. These two diagnostics show interesting departures from dynamical self-consistency starting about 2 years before the end of the Internet bubble. Assuming that the CAPM holds with the self-consistency condition, the OLS method automatically obeys the resulting orthogonality and normality conditions and therefore provides a simple way to self-consistently assess the parameters of the model by using proxy portfolios made only of the assets which are used in the CAPM regressions. Finally, the factor decomposition with the

  2. Consistent Steering System using SCTP for Bluetooth Scatternet Sensor Network

    Science.gov (United States)

    Dhaya, R.; Sadasivam, V.; Kanthavel, R.

    2012-12-01

Wireless communication is the best way to convey information from source to destination with flexibility and mobility, and Bluetooth is a wireless technology suitable for short distances. A wireless sensor network (WSN), in turn, consists of spatially distributed autonomous sensors that cooperatively monitor physical or environmental conditions, such as temperature, sound, vibration, pressure, motion or pollutants. Using the Bluetooth piconet technique in sensor nodes places limits on network depth and placement. The introduction of the Scatternet solves these network restrictions, but with a lack of reliability in data transmission, and as the depth of the network increases, routing becomes more difficult. No authors have so far focused on the reliability factors of Scatternet sensor network routing. This paper illustrates the proposed system architecture and routing mechanism to increase reliability. Another objective is to use a reliable transport protocol that uses the multi-homing concept and supports multiple streams to prevent head-of-line blocking. The results show that the Scatternet sensor network has lower packet loss than the existing system, even in a congestive environment, making it suitable for surveillance applications.

  3. Consistent biokinetic models for the actinide elements

    International Nuclear Information System (INIS)

    Leggett, R.W.

    2001-01-01

    The biokinetic models for Th, Np, Pu, Am and Cm currently recommended by the International Commission on Radiological Protection (ICRP) were developed within a generic framework that depicts gradual burial of skeletal activity in bone volume, depicts recycling of activity released to blood and links excretion to retention and translocation of activity. For other actinide elements such as Ac, Pa, Bk, Cf and Es, the ICRP still uses simplistic retention models that assign all skeletal activity to bone surface and depicts one-directional flow of activity from blood to long-term depositories to excreta. This mixture of updated and older models in ICRP documents has led to inconsistencies in dose estimates and interpretation of bioassay for radionuclides with reasonably similar biokinetics. This paper proposes new biokinetic models for Ac, Pa, Bk, Cf and Es that are consistent with the updated models for Th, Np, Pu, Am and Cm. The proposed models are developed within the ICRP's generic model framework for bone-surface-seeking radionuclides, and an effort has been made to develop parameter values that are consistent with results of comparative biokinetic data on the different actinide elements. (author)

  4. Thermodynamically consistent model calibration in chemical kinetics

    Directory of Open Access Journals (Sweden)

    Goutsias John

    2011-05-01

    Full Text Available Abstract Background The dynamics of biochemical reaction systems are constrained by the fundamental laws of thermodynamics, which impose well-defined relationships among the reaction rate constants characterizing these systems. Constructing biochemical reaction systems from experimental observations often leads to parameter values that do not satisfy the necessary thermodynamic constraints. This can result in models that are not physically realizable and may lead to inaccurate, or even erroneous, descriptions of cellular function. Results We introduce a thermodynamically consistent model calibration (TCMC method that can be effectively used to provide thermodynamically feasible values for the parameters of an open biochemical reaction system. The proposed method formulates the model calibration problem as a constrained optimization problem that takes thermodynamic constraints (and, if desired, additional non-thermodynamic constraints into account. By calculating thermodynamically feasible values for the kinetic parameters of a well-known model of the EGF/ERK signaling cascade, we demonstrate the qualitative and quantitative significance of imposing thermodynamic constraints on these parameters and the effectiveness of our method for accomplishing this important task. MATLAB software, using the Systems Biology Toolbox 2.1, can be accessed from http://www.cis.jhu.edu/~goutsias/CSS lab/software.html. An SBML file containing the thermodynamically feasible EGF/ERK signaling cascade model can be found in the BioModels database. Conclusions TCMC is a simple and flexible method for obtaining physically plausible values for the kinetic parameters of open biochemical reaction systems. It can be effectively used to recalculate a thermodynamically consistent set of parameter values for existing thermodynamically infeasible biochemical reaction models of cellular function as well as to estimate thermodynamically feasible values for the parameters of new
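A concrete example of the thermodynamic constraints involved is the Wegscheider (detailed-balance) condition: around any closed reaction cycle, the product of forward-to-reverse rate-constant ratios must equal 1. A sketch of the feasibility check only (a simplification of TCMC, which also performs the constrained fit; reaction names and values are hypothetical):

```python
import math

def wegscheider_consistent(kf, kr, cycles, tol=1e-9):
    """Check detailed balance: for every cycle, the signed sum of
    log(kf/kr) over its reactions must vanish. kf, kr map reaction
    ids to forward/reverse rate constants; each cycle is a list of
    (reaction id, +1/-1) pairs giving traversal direction."""
    for cycle in cycles:
        log_sum = sum(math.log(kf[r] / kr[r]) * sign for r, sign in cycle)
        if abs(log_sum) > tol:
            return False
    return True

# Hypothetical 3-reaction cycle A <-> B <-> C <-> A.
kf = {"r1": 2.0, "r2": 3.0, "r3": 1.0}
kr = {"r1": 1.0, "r2": 1.0, "r3": 6.0}
cycle = [("r1", +1), ("r2", +1), ("r3", +1)]
print(wegscheider_consistent(kf, kr, [cycle]))  # True: 2 * 3 * (1/6) = 1
```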

  5. Toward a consistent model for glass dissolution

    International Nuclear Information System (INIS)

    Strachan, D.M.; McGrail, B.P.; Bourcier, W.L.

    1994-01-01

    Understanding the process of glass dissolution in aqueous media has advanced significantly over the last 10 years through the efforts of many scientists around the world. Mathematical models describing the glass dissolution process have also advanced from simple empirical functions to structured models based on fundamental principles of physics, chemistry, and thermodynamics. Although borosilicate glass has been selected as the waste form for disposal of high-level wastes in at least 5 countries, there is no international consensus on the fundamental methodology for modeling glass dissolution that could be used in assessing the long term performance of waste glasses in a geologic repository setting. Each repository program is developing their own model and supporting experimental data. In this paper, we critically evaluate a selected set of these structured models and show that a consistent methodology for modeling glass dissolution processes is available. We also propose a strategy for a future coordinated effort to obtain the model input parameters that are needed for long-term performance assessments of glass in a geologic repository. (author) 4 figs., tabs., 75 refs

  6. Decentralized Consistent Network Updates in SDN with ez-Segway

    KAUST Repository

    Nguyen, Thanh Dang

    2017-03-06

    We present ez-Segway, a decentralized mechanism to consistently and quickly update the network state while preventing forwarding anomalies (loops and black-holes) and avoiding link congestion. In our design, the centralized SDN controller only pre-computes information needed by the switches during the update execution. This information is distributed to the switches, which use partial knowledge and direct message passing to efficiently realize the update. This separation of concerns has the key benefit of improving update performance as the communication and computation bottlenecks at the controller are removed. Our evaluations via network emulations and large-scale simulations demonstrate the efficiency of ez-Segway, which compared to a centralized approach, improves network update times by up to 45% and 57% at the median and the 99th percentile, respectively. A deployment of a system prototype in a real OpenFlow switch and an implementation in P4 demonstrate the feasibility and low overhead of implementing simple network update functionality within switches.

  7. Self-consistent model of confinement

    International Nuclear Information System (INIS)

    Swift, A.R.

    1988-01-01

    A model of the large-spatial-distance, zero--three-momentum, limit of QCD is developed from the hypothesis that there is an infrared singularity. Single quarks and gluons do not propagate because they have infinite energy after renormalization. The Hamiltonian formulation of the path integral is used to quantize QCD with physical, nonpropagating fields. Perturbation theory in the infrared limit is simplified by the absence of self-energy insertions and by the suppression of large classes of diagrams due to vanishing propagators. Remaining terms in the perturbation series are resummed to produce a set of nonlinear, renormalizable integral equations which fix both the confining interaction and the physical propagators. Solutions demonstrate the self-consistency of the concepts of an infrared singularity and nonpropagating fields. The Wilson loop is calculated to provide a general proof of confinement. Bethe-Salpeter equations for quark-antiquark pairs and for two gluons have finite-energy solutions in the color-singlet channel. The choice of gauge is addressed in detail. Large classes of corrections to the model are discussed and shown to support self-consistency

  8. Developing consistent pronunciation models for phonemic variants

    CSIR Research Space (South Africa)

    Davel, M

    2006-09-01

    Full Text Available Pronunciation lexicons often contain pronunciation variants. This can create two problems: It can be difficult to define these variants in an internally consistent way and it can also be difficult to extract generalised grapheme-to-phoneme rule sets...

  9. Collaborative networks: Reference modeling

    NARCIS (Netherlands)

    Camarinha-Matos, L.M.; Afsarmanesh, H.

    2008-01-01

    Collaborative Networks: Reference Modeling works to establish a theoretical foundation for Collaborative Networks. Particular emphasis is put on modeling multiple facets of collaborative networks and establishing a comprehensive modeling framework that captures and structures diverse perspectives of

  10. Consistent Alignment of Word Embedding Models

    Science.gov (United States)

    2017-03-02

    propose a solution that aligns variations of the same model (or different models) in a joint low-dimensional latent space leveraging carefully...representations of linguistic entities, most often referred to as embeddings. This includes techniques that rely on matrix factorization (Levy & Goldberg ...higher, the variation is much higher as well. As we increase the size of the neighborhood, or improve the quality of our sample by only picking the most

  11. Consistent sensor, relay, and link selection in wireless sensor networks

    NARCIS (Netherlands)

    Arroyo Valles, M.D.R.; Simonetto, A.; Leus, G.J.T.

    2017-01-01

    In wireless sensor networks, where energy is scarce, it is inefficient to have all nodes active because they consume a non-negligible amount of battery. In this paper we consider the problem of jointly selecting sensors, relays and links in a wireless sensor network where the active sensors need

  12. Self-consistent modelling of ICRH

    International Nuclear Information System (INIS)

    Hellsten, T.; Hedin, J.; Johnson, T.; Laxaaback, M.; Tennfors, E.

    2001-01-01

    The performance of ICRH is often sensitive to the shape of the high energy part of the distribution functions of the resonating species. This requires self-consistent calculations of the distribution functions and the wave-field. In addition to the wave-particle interactions and Coulomb collisions the effects of the finite orbit width and the RF-induced spatial transport are found to be important. The inward drift dominates in general even for a symmetric toroidal wave spectrum in the centre of the plasma. An inward drift does not necessarily produce a more peaked heating profile. On the contrary, for low concentrations of hydrogen minority in deuterium plasmas it can even give rise to broader profiles. (author)

  13. String consistency for unified model building

    International Nuclear Information System (INIS)

    Chaudhuri, S.; Chung, S.W.; Hockney, G.; Lykken, J.

    1995-01-01

    We explore the use of real fermionization as a test case for understanding how specific features of phenomenological interest in the low-energy effective superpotential are realized in exact solutions to heterotic superstring theory. We present pedagogic examples of models which realize SO(10) as a level two current algebra on the world-sheet, and discuss in general how higher level current algebras can be realized in the tensor product of simple constituent conformal field theories. We describe formal developments necessary to compute couplings in models built using real fermionization. This allows us to isolate cases of spin structures where the standard prescription for real fermionization may break down. (orig.)

  14. REPFLO model evaluation, physical and numerical consistency

    International Nuclear Information System (INIS)

    Wilson, R.N.; Holland, D.H.

    1978-11-01

    This report contains a description of some suggested changes and an evaluation of the REPFLO computer code, which models ground-water flow and nuclear-waste migration in and about a nuclear-waste repository. The discussion contained in the main body of the report is supplemented by a flow chart, presented in the Appendix of this report. The suggested changes are of four kinds: (1) technical changes to make the code compatible with a wider variety of digital computer systems; (2) changes to fill gaps in the computer code, due to missing proprietary subroutines; (3) changes to (a) correct programming errors, (b) correct logical flaws, and (c) remove unnecessary complexity; and (4) changes in the computer code logical structure to make REPFLO a more viable model from the physical point of view

  15. Consistency test of the standard model

    International Nuclear Information System (INIS)

    Pawlowski, M.; Raczka, R.

    1997-01-01

    If the 'Higgs mass' is not the physical mass of a real particle but rather an effective ultraviolet cutoff, then a process-energy dependence of this cutoff must be admitted. Precision data from at least two energy-scale experimental points are necessary to test this hypothesis. The first set of precision data is provided by the Z-boson peak experiments. We argue that the second set can be given by 10-20 GeV e⁺e⁻ colliders. We pay attention to the special role of tau polarization experiments, which can be sensitive to the 'Higgs mass' for a sample of ∼10⁸ produced tau pairs. We argue that such a study may be regarded as a negative self-consistency test of the Standard Model and of most of its extensions.

  16. For a consistent policy in the struggle against proliferation networks

    International Nuclear Information System (INIS)

    Schlumberger, Guillaume; Gruselle, Bruno

    2007-01-01

    Proliferation networks operate like companies. They must be capable of coordinating a series of elementary logistics, financial and technical functions. Due to the increase in worldwide exchanges, the reinforcement of existing export control tools alone will not be sufficient to face the increase in proliferation flows. Despite widespread reporting in the media, interdiction operations also can only have limited effect on networks, due to their occasional nature, if they are undertaken independently of an approach targeting other functions. It also seems hardly realistic to wish to neutralize a proliferation network only by freezing part of its credits in the framework of a repressive approach. Setting up an overall policy provides a means of coordinating intelligence actions, repression tools and interdiction means both nationally and internationally, and therefore appears as the only viable solution in the struggle against proliferation networks. This is a complex task for it requires the organization of inter-ministerial (or interagency) responsibilities, and in particular it requires an equilibrium between long term and short term actions. Finally, it depends on the reinforcement of links between the administrations involved and private participants including service companies, financial institutions and enterprises. (authors)

  17. Rubber elasticity for percolation network consisting of Gaussian chains

    International Nuclear Information System (INIS)

    Nishi, Kengo; Noguchi, Hiroshi; Shibayama, Mitsuhiro; Sakai, Takamasa

    2015-01-01

    A theory describing the elastic modulus for percolation networks of Gaussian chains on general lattices such as square and cubic lattices is proposed, and its validity is examined with simulations and mechanical experiments on well-defined polymer networks. The theory was developed by generalizing the effective medium approximation (EMA) for Hookean spring networks to Gaussian chain networks. From EMA theory, we found that the ratio of the elastic modulus at p, G, to that at p = 1, G₀, must be equal to G/G₀ = (p − 2/f)/(1 − 2/f) if the positions of sites can be determined so as to meet the force balance, where p is the degree of the cross-linking reaction and f is the lattice functionality. However, the EMA prediction is not applicable near the percolation threshold because EMA is a mean-field theory. Thus, we combine real-space renormalization and EMA and propose a theory called real-space renormalized EMA, i.e., REMA. The elastic modulus predicted by REMA is in excellent agreement with the results of simulations and experiments on near-ideal diamond lattice gels.
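
The EMA ratio quoted above is simple to evaluate numerically. A minimal sketch (the function name and the example values are ours, not from the paper; the diamond lattice has functionality f = 4):

```python
def ema_modulus_ratio(p, f):
    """Effective-medium ratio G/G0 = (p - 2/f) / (1 - 2/f) for a
    Gaussian-chain network on a lattice with functionality f, at
    cross-linking degree p. The ratio vanishes at the EMA
    percolation threshold p = 2/f; clamp to 0.0 below it."""
    ratio = (p - 2.0 / f) / (1.0 - 2.0 / f)
    return max(ratio, 0.0)

# Diamond lattice (f = 4): EMA threshold at p = 0.5.
print(ema_modulus_ratio(1.0, 4))   # 1.0: fully reacted network
print(ema_modulus_ratio(0.75, 4))  # 0.5
print(ema_modulus_ratio(0.5, 4))   # 0.0: at the threshold
```

Note that, as the abstract stresses, this mean-field prediction degrades near the threshold, which is what the REMA combination with real-space renormalization corrects.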

  18. Rubber elasticity for percolation network consisting of Gaussian chains

    Energy Technology Data Exchange (ETDEWEB)

    Nishi, Kengo, E-mail: kengo.nishi@phys.uni-goettingen.de; Noguchi, Hiroshi; Shibayama, Mitsuhiro, E-mail: sibayama@issp.u-tokyo.ac.jp [Institute for Solid State Physics, The University of Tokyo, 5-1-5 Kashiwanoha, Kashiwa, Chiba 277-8581 (Japan)]; Sakai, Takamasa, E-mail: sakai@tetrapod.t.u-tokyo.ac.jp [Department of Bioengineering, School of Engineering, The University of Tokyo, 7-3-1 Hongo, Bunkyo-ku, Tokyo 113-8656 (Japan)]

    2015-11-14

    A theory describing the elastic modulus for percolation networks of Gaussian chains on general lattices such as square and cubic lattices is proposed, and its validity is examined with simulations and mechanical experiments on well-defined polymer networks. The theory was developed by generalizing the effective medium approximation (EMA) for Hookean spring networks to Gaussian chain networks. From EMA theory, we found that the ratio of the elastic modulus at p, G, to that at p = 1, G₀, must be equal to G/G₀ = (p − 2/f)/(1 − 2/f) if the positions of sites can be determined so as to meet the force balance, where p is the degree of the cross-linking reaction and f is the lattice functionality. However, the EMA prediction is not applicable near the percolation threshold because EMA is a mean-field theory. Thus, we combine real-space renormalization and EMA and propose a theory called real-space renormalized EMA, i.e., REMA. The elastic modulus predicted by REMA is in excellent agreement with the results of simulations and experiments on near-ideal diamond lattice gels.

  19. Modeling Network Interdiction Tasks

    Science.gov (United States)

    2015-09-17

    [Front matter: tables of computation times and accuracy measures for weighted, 100-node random networks, GAND approach testing in Python.] A common approach to modeling network interdiction is to formulate the problem in terms of a two-stage strategic game between two players.

  20. A network architecture supporting consistent rich behavior in collaborative interactive applications.

    Science.gov (United States)

    Marsh, James; Glencross, Mashhuda; Pettifer, Steve; Hubbold, Roger

    2006-01-01

    Network architectures for collaborative virtual reality have traditionally been dominated by client-server and peer-to-peer approaches, with peer-to-peer strategies typically being favored where minimizing latency is a priority, and client-server where consistency is key. With increasingly sophisticated behavior models and the demand for better support for haptics, we argue that neither approach provides sufficient support for these scenarios and, thus, a hybrid architecture is required. We discuss the relative performance of different distribution strategies in the face of real network conditions and illustrate the problems they face. Finally, we present an architecture that successfully meets many of these challenges and demonstrate its use in a distributed virtual prototyping application which supports simultaneous collaboration for assembly, maintenance, and training applications utilizing haptics.

  1. Pulsed neural networks consisting of single-flux-quantum spiking neurons

    International Nuclear Information System (INIS)

    Hirose, T.; Asai, T.; Amemiya, Y.

    2007-01-01

    An inhibitory pulsed neural network was developed for brain-like information processing, by using single-flux-quantum (SFQ) circuits. It consists of spiking neuron devices that are coupled to each other through all-to-all inhibitory connections. The network selects neural activity. The operation of the neural network was confirmed by computer simulation. SFQ neuron devices can imitate the operation of the inhibition phenomenon of neural networks

  2. Modelling computer networks

    International Nuclear Information System (INIS)

    Max, G

    2011-01-01

    Traffic in computer networks can be described as a complicated system. Such systems show non-linear features, and simulating their behaviour is difficult. Before deploying network equipment, users want to know the capability of their computer network; they do not want the servers to be overloaded during temporary traffic peaks when more requests arrive than the server is designed for. As a starting point for our study, a non-linear system model of network traffic is established to examine the behaviour of the planned network. The paper presents the setup of a non-linear simulation model that helps us to observe dataflow problems of the networks. This simple model captures the relationship between the competing traffic and the input and output dataflow. In this paper, we also focus on measuring the bottleneck of the network, defined as the difference between the link capacity and the competing traffic volume on the link that limits end-to-end throughput. We validate the model using measurements on a working network. The results show that the initial model estimates well the main behaviours and critical parameters of the network. Based on this study, we propose to develop a new algorithm, which experimentally determines and predicts the available parameters of the modelled network.
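
The bottleneck definition used in the abstract (link capacity minus competing traffic, minimized along the path) can be sketched directly; the function name and example figures are illustrative:

```python
def bottleneck(links):
    """Each link is a (capacity, competing_traffic) pair in the same
    units, e.g. Mbit/s. A link's available bandwidth is its capacity
    minus the competing traffic volume on it; the end-to-end
    throughput of a path is limited by its tightest link."""
    return min(cap - traffic for cap, traffic in links)

# Three-hop path: the heavily loaded middle link limits throughput.
path = [(100.0, 40.0), (1000.0, 950.0), (100.0, 10.0)]
print(bottleneck(path))  # 50.0
```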

  3. Modeling the citation network by network cosmology.

    Science.gov (United States)

    Xie, Zheng; Ouyang, Zhenzheng; Zhang, Pengyuan; Yi, Dongyun; Kong, Dexing

    2015-01-01

    Citation between papers can be treated as a causal relationship. In addition, some citation networks have a number of similarities to the causal networks in network cosmology, e.g., similar in- and out-degree distributions. Hence, it is possible to model the citation network using network cosmology. The causal network models built on homogeneous spacetimes have some restrictions when describing certain phenomena in citation networks, e.g., that hot papers receive more citations than other simultaneously published papers. We propose an inhomogeneous causal network model for the citation network, whose connection mechanism expresses some features of citation well. The node growth trend and degree distributions of the generated networks also fit those of some citation networks well.
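
The basic connection mechanism of causal-network models can be illustrated with a toy sketch (an illustration of the general idea on a homogeneous spacetime, not the authors' inhomogeneous model; all names are ours): papers are points (t, x) in a 1+1-dimensional spacetime, and a paper cites every earlier paper inside its past light cone.

```python
import random

def causal_citation_network(n, speed=1.0, seed=0):
    """Toy causal citation model: paper j cites earlier paper i
    whenever |x_j - x_i| < speed * (t_j - t_i), i.e. i lies in the
    past light cone of j. Returns (citing, cited) edge pairs."""
    random.seed(seed)
    papers = sorted((random.random(), random.random()) for _ in range(n))
    edges = [(j, i)
             for j in range(n) for i in range(j)
             if abs(papers[j][1] - papers[i][1])
                < speed * (papers[j][0] - papers[i][0])]
    return edges

edges = causal_citation_network(200)
print(len(edges))  # citations always point backwards in time: a DAG
```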

  4. Brain Network Modelling

    DEFF Research Database (Denmark)

    Andersen, Kasper Winther

    Three main topics are presented in this thesis. The first and largest topic concerns network modelling of functional Magnetic Resonance Imaging (fMRI) and Diffusion Weighted Imaging (DWI). In particular nonparametric Bayesian methods are used to model brain networks derived from resting state f...... for their ability to reproduce node clustering and predict unseen data. Comparing the models on whole brain networks, BCD and IRM showed better reproducibility and predictability than IDM, suggesting that resting state networks exhibit community structure. This also points to the importance of using models, which...... allow for complex interactions between all pairs of clusters. In addition, it is demonstrated how the IRM can be used for segmenting brain structures into functionally coherent clusters. A new nonparametric Bayesian network model is presented. The model builds upon the IRM and can be used to infer...

  5. Modeling Epidemic Network Failures

    DEFF Research Database (Denmark)

    Ruepp, Sarah Renée; Fagertun, Anna Manolova

    2013-01-01

    This paper presents the implementation of a failure propagation model for transport networks when multiple failures occur resulting in an epidemic. We model the Susceptible Infected Disabled (SID) epidemic model and validate it by comparing it to analytical solutions. Furthermore, we evaluate...... the SID model’s behavior and impact on the network performance, as well as the severity of the infection spreading. The simulations are carried out in OPNET Modeler. The model provides an important input to epidemic connection recovery mechanisms, and can due to its flexibility and versatility be used...... to evaluate multiple epidemic scenarios in various network types....
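
A minimal Susceptible-Infected-Disabled (SID) compartmental step can be sketched as follows; the mass-action form and the rate values are illustrative assumptions of ours, not parameters from the paper (which simulates the model on transport networks in OPNET Modeler):

```python
def sid_step(s, i, d, beta, gamma, dt):
    """One Euler step of a toy SID model: susceptible nodes become
    infected at mass-action rate beta*s*i, and infected nodes become
    disabled at rate gamma*i. Fractions s, i, d sum to the total
    population, which the update conserves."""
    new_inf = beta * s * i * dt   # S -> I
    new_dis = gamma * i * dt      # I -> D
    return s - new_inf, i + new_inf - new_dis, d + new_dis

s, i, d = 0.99, 0.01, 0.0
for _ in range(1000):
    s, i, d = sid_step(s, i, d, beta=0.5, gamma=0.1, dt=0.1)
print(round(s + i + d, 6))  # 1.0: the compartments stay normalized
```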

  6. High-performance speech recognition using consistency modeling

    Science.gov (United States)

    Digalakis, Vassilios; Murveit, Hy; Monaco, Peter; Neumeyer, Leo; Sankar, Ananth

    1994-12-01

    The goal of SRI's consistency modeling project is to improve the raw acoustic modeling component of SRI's DECIPHER speech recognition system and develop consistency modeling technology. Consistency modeling aims to reduce the number of improper independence assumptions used in traditional speech recognition algorithms so that the resulting speech recognition hypotheses are more self-consistent and, therefore, more accurate. At the initial stages of this effort, SRI focused on developing the appropriate base technologies for consistency modeling. We first developed the Progressive Search technology that allowed us to perform large-vocabulary continuous speech recognition (LVCSR) experiments. Since its conception and development at SRI, this technique has been adopted by most laboratories doing research on LVCSR, including other ARPA contracting sites. Another goal of the consistency modeling project is to attack difficult modeling problems, where there is a mismatch between the training and testing phases. Such mismatches may include outlier speakers, different microphones and additive noise. We were able to either develop new, or transfer and evaluate existing, technologies that adapted our baseline genonic HMM recognizer to such difficult conditions.

  7. Artificial neural network modelling

    CERN Document Server

    Samarasinghe, Sandhya

    2016-01-01

    This book covers theoretical aspects as well as recent innovative applications of Artificial Neural Networks (ANNs) in natural, environmental, biological, social, industrial and automated systems. It presents recent results of ANNs in modelling small, large and complex systems under three categories, namely, 1) Networks, Structure Optimisation, Robustness and Stochasticity, 2) Advances in Modelling Biological and Environmental Systems and 3) Advances in Modelling Social and Economic Systems. The book aims at serving undergraduates, postgraduates and researchers in ANN computational modelling.

  8. An evolving network model with community structure

    International Nuclear Information System (INIS)

    Li Chunguang; Maini, Philip K

    2005-01-01

    Many social and biological networks consist of communities: groups of nodes within which connections are dense, but between which connections are sparser. Recently, there has been considerable interest in designing algorithms for detecting community structures in real-world complex networks. In this paper, we propose an evolving network model which exhibits community structure. The network model is based on inner-community and inter-community preferential attachment mechanisms. The degree distributions of this network model are analysed with a mean-field method. Theoretical results and numerical simulations indicate that this network model has community structure and scale-free properties.

  9. Standard Model Vacuum Stability and Weyl Consistency Conditions

    DEFF Research Database (Denmark)

    Antipin, Oleg; Gillioz, Marc; Krog, Jens

    2013-01-01

    At high energy the standard model possesses conformal symmetry at the classical level. This is reflected at the quantum level by relations between the different beta functions of the model. These relations are known as the Weyl consistency conditions. We show that it is possible to satisfy them...... order by order in perturbation theory, provided that a suitable coupling constant counting scheme is used. As a direct phenomenological application, we study the stability of the standard model vacuum at high energies and compare with previous computations violating the Weyl consistency conditions....

  10. Consistency and Reconciliation Model In Regional Development Planning

    Directory of Open Access Journals (Sweden)

    Dina Suryawati

    2016-10-01

    Full Text Available The aim of this study was to identify the problems in, and determine a conceptual model of, regional development planning. Regional development planning is a systemic, complex and unstructured process. Therefore, this study used soft systems methodology to outline unstructured issues with a structured approach. The conceptual models constructed in this study are a model of consistency and a model of reconciliation. Regional development planning is a process that must be well integrated with central planning and inter-regional planning documents. Integration and consistency of regional planning documents are very important in order to achieve the development goals that have been set. On the other hand, the process of development planning in the region involves a technocratic system, that is, both top-down and bottom-up participation. Both must be balanced, and neither should overlap with or dominate the other. Keywords: regional, development, planning, consistency, reconciliation.

  11. Modeling a Consistent Behavior of PLC-Sensors

    Directory of Open Access Journals (Sweden)

    E. V. Kuzmin

    2014-01-01

    Full Text Available The article extends the cycle of papers dedicated to programming and verification of PLC-programs by LTL-specification. This approach provides the availability of correctness analysis of PLC-programs by the model checking method. The model checking method needs a finite model of the PLC program. For successful verification of required properties it is important to take into consideration that not all combinations of input signals from the sensors can occur while the PLC works with a control object. This fact requires more attention during the construction of the PLC-program model. In this paper we propose to describe the consistent behavior of sensors by three groups of LTL-formulas, which affect the program model, approximating it to the actual behavior of the PLC program. The idea of the LTL-requirements is shown by an example. A PLC program is a description of reactions to input signals from sensors, switches and buttons. In constructing a PLC-program model, this approach to modeling the consistent behavior of PLC sensors allows one to focus on modeling precisely these reactions without extending the program model by additional structures for realizing realistic sensor behavior. The consistent behavior of sensors is taken into account only at the stage of checking the conformity of the program model to the required properties, i.e., a property satisfaction proof for the constructed model occurs under the condition that the model contains only those executions of the program that comply with the consistent behavior of the sensors.
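
As an illustration of what such LTL-requirements can look like (this example is ours, not taken from the paper): for a drive with two end-position sensors, lower and upper, one can demand that the sensors are mutually exclusive and that a sensor value changes only while the drive is moving.

```latex
% The two end-position sensors are never active simultaneously:
\mathbf{G}\,\neg(\mathit{lower}\wedge\mathit{upper})
% A sensor value may change only while the drive is moving
% (X is the next-state operator):
\mathbf{G}\,\bigl(\neg(\mathit{lower}\leftrightarrow\mathbf{X}\,\mathit{lower})
    \rightarrow\mathbf{X}\,\mathit{moving}\bigr)
```

Formulas of this kind prune the model's executions to those with physically possible sensor readings before properties are checked.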

  12. Synchronization in node of complex networks consist of complex chaotic system

    Energy Technology Data Exchange (ETDEWEB)

    Wei, Qiang, E-mail: qiangweibeihua@163.com [Beihua University computer and technology College, BeiHua University, Jilin, 132021, Jilin (China); Digital Images Processing Institute of Beihua University, BeiHua University, Jilin, 132011, Jilin (China); Faculty of Electronic Information and Electrical Engineering, Dalian University of Technology, Dalian, 116024 (China); Xie, Cheng-jun [Beihua University computer and technology College, BeiHua University, Jilin, 132021, Jilin (China); Digital Images Processing Institute of Beihua University, BeiHua University, Jilin, 132011, Jilin (China); Liu, Hong-jun [School of Information Engineering, Weifang Vocational College, Weifang, 261041 (China); Li, Yan-hui [The Library, Weifang Vocational College, Weifang, 261041 (China)

    2014-07-15

    A new synchronization method is investigated for nodes of complex networks consisting of complex chaotic systems. When complex networks realize synchronization, different components of the complex state variable synchronize up to different scaling complex functions via a designed complex feedback controller. This paper changes the synchronization scaling function from the real field to the complex field for synchronization in nodes of complex networks with complex chaotic systems. Synchronization in complex networks with constant coupling delay and with time-varying coupling delay is investigated, respectively. Numerical simulations are provided to show the effectiveness of the proposed method.

  13. Diagnosing a Strong-Fault Model by Conflict and Consistency.

    Science.gov (United States)

    Zhang, Wenfeng; Zhao, Qi; Zhao, Hongbo; Zhou, Gan; Feng, Wenquan

    2018-03-29

    The diagnosis method for a weak-fault model, with only normal behaviors of each component, has evolved over decades. However, many systems now demand strong-fault models, whose fault modes have specific behaviors as well. It is difficult to diagnose a strong-fault model due to its non-monotonicity. Currently, diagnosis methods usually employ conflicts to isolate possible faults; the process can be expedited when some observed output is consistent with the model's prediction, where the consistency indicates probably normal components. This paper solves the problem of efficiently diagnosing a strong-fault model by proposing a novel Logic-based Truth Maintenance System (LTMS) with two search approaches based on conflict and consistency. First, the original strong-fault model is encoded with Boolean variables and converted into Conjunctive Normal Form (CNF). Then the proposed LTMS is employed to reason over the CNF and find multiple minimal conflicts and maximal consistencies when a fault exists. The search approaches efficiently offer the best candidates based on the reasoning result until the diagnosis results are obtained. The completeness, coverage, correctness and complexity of the proposals are analyzed theoretically to show their strengths and weaknesses. Finally, the proposed approaches are demonstrated by applying them to a real-world domain, the heat control unit of a spacecraft, where the proposed methods are significantly better than best-first and conflict-directed A* search methods.
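
The difference between weak- and strong-fault models, and consistency-based candidate ranking, can be shown on a toy two-component system. This brute-force sketch is a hypothetical stand-in for the paper's LTMS/CNF machinery (all names and the circuit are ours):

```python
from itertools import product

def inverter(mode, x):
    """Strong-fault component model: each fault mode has a defined
    behavior (stuck-at-0, stuck-at-1), unlike a weak-fault model
    that only specifies the 'ok' behavior."""
    return {"ok": 1 - x, "stuck0": 0, "stuck1": 1}[mode]

def diagnoses(inp, observed):
    """Consistency-based diagnosis of two inverters in series by
    exhaustive search: keep the mode assignments whose predicted
    output matches the observation, minimal-fault candidates first."""
    modes = ("ok", "stuck0", "stuck1")
    consistent = [(m1, m2) for m1, m2 in product(modes, modes)
                  if inverter(m2, inverter(m1, inp)) == observed]
    return sorted(consistent, key=lambda ms: sum(m != "ok" for m in ms))

# Input 0 through two inverters should yield 0; observing 1 means a fault.
for cand in diagnoses(0, 1):
    print(cand)  # single-fault candidates are listed first
```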

  14. Consistent Partial Least Squares Path Modeling via Regularization.

    Science.gov (United States)

    Jung, Sunho; Park, JaeHong

    2018-01-01

    Partial least squares (PLS) path modeling is a component-based structural equation modeling that has been adopted in social and psychological research due to its data-analytic capability and flexibility. A recent methodological advance is consistent PLS (PLSc), designed to produce consistent estimates of path coefficients in structural models involving common factors. In practice, however, PLSc may frequently encounter multicollinearity in part because it takes a strategy of estimating path coefficients based on consistent correlations among independent latent variables. PLSc has yet no remedy for this multicollinearity problem, which can cause loss of statistical power and accuracy in parameter estimation. Thus, a ridge type of regularization is incorporated into PLSc, creating a new technique called regularized PLSc. A comprehensive simulation study is conducted to evaluate the performance of regularized PLSc as compared to its non-regularized counterpart in terms of power and accuracy. The results show that our regularized PLSc is recommended for use when serious multicollinearity is present.
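
The ridge-type remedy can be illustrated with plain ridge regression (this is not the regularized PLSc estimator itself, only the regularization idea it borrows; the data and parameter values are ours):

```python
import numpy as np

def ridge(X, y, lam):
    """Ridge-regularized least squares, solving
    (X'X + lam*I) b = X'y. With lam = 0 this is ordinary least
    squares, which is unstable under severe multicollinearity."""
    p = X.shape[1]
    return np.linalg.solve(X.T @ X + lam * np.eye(p), X.T @ y)

rng = np.random.default_rng(0)
x1 = rng.normal(size=200)
x2 = x1 + 1e-6 * rng.normal(size=200)      # nearly collinear predictors
X = np.column_stack([x1, x2])
y = x1 + x2 + 0.1 * rng.normal(size=200)   # true coefficients (1, 1)

b_ols = ridge(X, y, 0.0)    # huge, offsetting coefficient estimates
b_ridge = ridge(X, y, 1.0)  # shrunk toward the stable solution near (1, 1)
print(np.abs(b_ols).max(), np.abs(b_ridge).max())
```

The penalty barely affects the well-identified direction (x1 + x2) but suppresses the nearly unidentified difference direction, which is where the accuracy loss under multicollinearity comes from.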

  15. Consistent partnership formation: application to a sexually transmitted disease model.

    Science.gov (United States)

    Artzrouni, Marc; Deuchert, Eva

    2012-02-01

    We apply a consistent sexual partnership formation model which hinges on the assumption that one gender's choices drive the process (male- or female-dominant model). The other gender's behavior is imputed. The model is fitted to UK sexual behavior data and applied to a simple incidence model of HSV-2. With a male-dominant model (which assumes accurate male reports on numbers of partners), the modeled incidences of HSV-2 are 77% higher for men and 50% higher for women than with a female-dominant model (which assumes accurate female reports). Although highly stylized, our simple incidence model sheds light on the inconsistent results one can obtain with misreported data on sexual activity and age preferences. Copyright © 2011 Elsevier Inc. All rights reserved.
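
The consistency constraint behind such dominant-gender models is that the total number of heterosexual partnerships reported by each gender must balance: N_m × c_m = N_f × c_f. A male-dominant model takes the male reports as accurate and imputes the female mean rate (the function name and the numbers below are illustrative, not the UK survey data):

```python
def impute_partner_rate(n_reporting, rate_reporting, n_other):
    """Impute the other gender's mean partnership rate from the
    balance constraint N_m * c_m = N_f * c_f: total partnerships
    counted from either side must agree."""
    return n_reporting * rate_reporting / n_other

# 1000 men reporting 3 new partners/year, 1100 women in the population:
print(impute_partner_rate(1000, 3.0, 1100))  # ≈ 2.73 partners/year
```

When the two genders' raw reports violate this balance (as survey data typically do), the male- and female-dominant variants yield different imputed rates, and hence the divergent incidence estimates quoted above.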

  16. Diagnosing a Strong-Fault Model by Conflict and Consistency

    Directory of Open Access Journals (Sweden)

    Wenfeng Zhang

    2018-03-01

    Full Text Available The diagnosis method for a weak-fault model, with only normal behaviors of each component, has evolved over decades. However, many systems now demand strong-fault models, whose fault modes have specific behaviors as well. It is difficult to diagnose a strong-fault model due to its non-monotonicity. Currently, diagnosis methods usually employ conflicts to isolate possible faults; the process can be expedited when some observed output is consistent with the model's prediction, where the consistency indicates probably normal components. This paper solves the problem of efficiently diagnosing a strong-fault model by proposing a novel Logic-based Truth Maintenance System (LTMS) with two search approaches based on conflict and consistency. First, the original strong-fault model is encoded with Boolean variables and converted into Conjunctive Normal Form (CNF). Then the proposed LTMS is employed to reason over the CNF and find multiple minimal conflicts and maximal consistencies when a fault exists. The search approaches efficiently offer the best candidates based on the reasoning result until the diagnosis results are obtained. The completeness, coverage, correctness and complexity of the proposals are analyzed theoretically to show their strengths and weaknesses. Finally, the proposed approaches are demonstrated by applying them to a real-world domain—the heat control unit of a spacecraft—where the proposed methods are significantly better than best-first and conflict-directed A* search methods.

  17. Consistent Conformal Extensions of the Standard Model arXiv

    CERN Document Server

    Loebbert, Florian; Plefka, Jan

    The question of whether classically conformal modifications of the standard model are consistent with experimental observations has recently been subject to renewed interest. The method of Gildener and Weinberg provides a natural framework for the study of the effective potential of the resulting multi-scalar standard model extensions. This approach relies on the assumption of the ordinary loop hierarchy $\lambda_\text{s} \sim g^2_\text{g}$ of scalar and gauge couplings. On the other hand, Andreassen, Frost and Schwartz recently argued that in the (single-scalar) standard model, gauge invariant results require the consistent scaling $\lambda_\text{s} \sim g^4_\text{g}$. In the present paper we contrast these two hierarchy assumptions and illustrate the differences in the phenomenological predictions of minimal conformal extensions of the standard model.

  18. Final Report Fermionic Symmetries and Self consistent Shell Model

    International Nuclear Information System (INIS)

    Zamick, Larry

    2008-01-01

    In this final report in the field of theoretical nuclear physics we note important accomplishments. We were confronted with 'anomalous' magnetic moments by the experimentalists and were able to explain them. We found unexpected partial dynamical symmetries, completely unknown before, and were able to explain them to a large extent. The importance of a self-consistent shell model was emphasized.

  19. Consistent three-equation model for thin films

    Science.gov (United States)

    Richard, Gael; Gisclon, Marguerite; Ruyer-Quil, Christian; Vila, Jean-Paul

    2017-11-01

    Numerical simulations of thin films of Newtonian fluids down an inclined plane use reduced models for computational cost reasons. These models are usually derived by averaging the physical equations of fluid mechanics over the fluid depth with an asymptotic method in the long-wave limit. Two-equation models are based on the mass conservation equation and either on the momentum balance equation or on the work-energy theorem. We show that there is no two-equation model that is both consistent and theoretically coherent, and that a third variable and a three-equation model are required to resolve all theoretical contradictions. The linear and nonlinear properties of two- and three-equation models are tested on various practical problems. We present a new consistent three-equation model with a simple mathematical structure which allows an easy and reliable numerical resolution. The numerical calculations agree fairly well with experimental measurements or with direct numerical resolutions for neutral stability curves, speeds of kinematic and solitary waves, and depth profiles of wavy films. The model can also predict the flow reversal at the first capillary trough ahead of the main wave hump.

  20. Self-consistent mean-field models for nuclear structure

    International Nuclear Information System (INIS)

    Bender, Michael; Heenen, Paul-Henri; Reinhard, Paul-Gerhard

    2003-01-01

    The authors review the present status of self-consistent mean-field (SCMF) models for describing nuclear structure and low-energy dynamics. These models are presented as effective energy-density functionals. The three most widely used variants of SCMF's based on a Skyrme energy functional, a Gogny force, and a relativistic mean-field Lagrangian are considered side by side. The crucial role of the treatment of pairing correlations is pointed out in each case. The authors discuss other related nuclear structure models and present several extensions beyond the mean-field model which are currently used. Phenomenological adjustment of the model parameters is discussed in detail. The performance quality of the SCMF model is demonstrated for a broad range of typical applications

  1. Detection and quantification of flow consistency in business process models

    DEFF Research Database (Denmark)

    Burattin, Andrea; Bernstein, Vered; Neurauter, Manuel

    2017-01-01

    Business process models abstract complex business processes by representing them as graphical models. Their layout, as determined by the modeler, may have an effect when these models are used. However, this effect is currently not fully understood. In order to systematically study this effect, a basic set of measurable key visual features is proposed, depicting the layout properties that are meaningful to the human user. The aim of this research is thus twofold: first, to empirically identify key visual features of business process models which are perceived as meaningful to the user and second, to show how such features can be quantified into computational metrics, which are applicable to business process models. We focus on one particular feature, consistency of flow direction, and show the challenges that arise when transforming it into a precise metric. We propose three different metrics.

  2. A consistent transported PDF model for treating differential molecular diffusion

    Science.gov (United States)

    Wang, Haifeng; Zhang, Pei

    2016-11-01

    Differential molecular diffusion is a fundamentally significant phenomenon in all multi-component turbulent reacting or non-reacting flows caused by the different rates of molecular diffusion of energy and species concentrations. In the transported probability density function (PDF) method, the differential molecular diffusion can be treated by using a mean drift model developed by McDermott and Pope. This model correctly accounts for the differential molecular diffusion in the scalar mean transport and yields a correct DNS limit of the scalar variance production. The model, however, misses the molecular diffusion term in the scalar variance transport equation, which yields an inconsistent prediction of the scalar variance in the transported PDF method. In this work, a new model is introduced to remedy this problem that can yield a consistent scalar variance prediction. The model formulation along with its numerical implementation is discussed, and the model validation is conducted in a turbulent mixing layer problem.

  3. Consistency checks in beam emission modeling for neutral beam injectors

    International Nuclear Information System (INIS)

    Punyapu, Bharathi; Vattipalle, Prahlad; Sharma, Sanjeev Kumar; Baruah, Ujjwal Kumar; Crowley, Brendan

    2015-01-01

    In positive neutral beam systems, beam parameters such as ion species fractions, power fractions and beam divergence are routinely measured using the Doppler-shifted beam emission spectrum. The accuracy with which these parameters are estimated depends on the accuracy of the atomic modeling involved in the estimation. In this work, an effective procedure to check the consistency of the beam emission modeling in neutral beam injectors is proposed. As a first consistency check, at constant beam voltage and current, the intensity of the beam emission spectrum is measured while varying the pressure in the neutralizer, and the scaling with pressure of the measured un-shifted (target) and Doppler-shifted (projectile) intensities is studied. If the un-shifted component scales with pressure, its intensity is used as a second consistency check on the beam emission modeling. As a further check, the modeled beam fractions and the emission cross sections of projectile and target are used to predict the intensity of the un-shifted component, which is then compared with the measured target intensity. The agreement between the predicted and measured target intensities indicates the degree of discrepancy in the beam emission modeling. To test this methodology, a systematic analysis of Doppler shift spectroscopy data obtained on the JET neutral beam test stand was carried out

  4. Simplified models for dark matter face their consistent completions

    Energy Technology Data Exchange (ETDEWEB)

    Gonçalves, Dorival; Machado, Pedro A. N.; No, Jose Miguel

    2017-03-01

    Simplified dark matter models have been recently advocated as a powerful tool to exploit the complementarity between dark matter direct detection, indirect detection and LHC experimental probes. Focusing on pseudoscalar mediators between the dark and visible sectors, we show that the simplified dark matter model phenomenology departs significantly from that of consistent ${SU(2)_{\\mathrm{L}} \\times U(1)_{\\mathrm{Y}}}$ gauge invariant completions. We discuss the key physics simplified models fail to capture, and its impact on LHC searches. Notably, we show that resonant mono-Z searches provide competitive sensitivities to standard mono-jet analyses at $13$ TeV LHC.

  5. Connectome-scale group-wise consistent resting-state network analysis in autism spectrum disorder

    Directory of Open Access Journals (Sweden)

    Yu Zhao

    2016-01-01

    Full Text Available Understanding the organizational architecture of human brain function and its alteration patterns in diseased brains, such as those of Autism Spectrum Disorder (ASD) patients, is of great interest. In-vivo functional magnetic resonance imaging (fMRI) offers a unique window to investigate the mechanisms of brain function and to identify functional network components of the human brain. Previously, we have shown that multiple concurrent functional networks can be derived from fMRI signals using whole-brain sparse representation. Yet it is still an open question how to derive group-wise consistent networks featured in ASD patients and controls. Here we propose an effective volumetric network descriptor, named the connectivity map, to compactly describe the spatial patterns of brain network maps, and we implement a fast framework in the Apache Spark environment that can effectively identify group-wise consistent networks in big fMRI datasets. Our experimental results identified 144 group-wise common intrinsic connectivity networks (ICNs) shared between ASD patients and healthy control subjects, where some ICNs are substantially different between the two groups. Moreover, further analysis of the functional connectivity and spatial overlap between these 144 common ICNs reveals connectomic signatures characterizing ASD patients and controls. In particular, the computing time of our Spark-enabled functional connectomics framework is significantly reduced from 240 hours (C++ code, single core) to 20 hours, exhibiting great potential to handle fMRI big data in the future.

  6. Consistency of the tachyon warm inflationary universe models

    International Nuclear Information System (INIS)

    Zhang, Xiao-Min; Zhu, Jian-Yang

    2014-01-01

    This study concerns the consistency of tachyon warm inflationary models. A linear stability analysis is performed to find the slow-roll conditions, characterized by the potential slow-roll (PSR) parameters, for the existence of a tachyon warm inflationary attractor in the system. The PSR parameters in the tachyon warm inflationary models are redefined. Two cases, an exponential potential and an inverse power-law potential, are studied for the dissipative coefficients Γ = Γ_0 and Γ = Γ(φ), respectively. A crucial condition is obtained for a tachyon warm inflationary model, characterized by the Hubble slow-roll (HSR) parameter ε_H, and the condition is extendable to some other inflationary models as well. A proper number of e-folds is obtained in both cases of tachyon warm inflation, in contrast to existing works. It is also found that a constant dissipative coefficient (Γ = Γ_0) is usually not a suitable assumption for a warm inflationary model

  7. Modeling self-consistent multi-class dynamic traffic flow

    Science.gov (United States)

    Cho, Hsun-Jung; Lo, Shih-Ching

    2002-09-01

    In this study, we present a systematic self-consistent multiclass multilane traffic model derived from the vehicular Boltzmann equation and the traffic dispersion model. The multilane domain is considered as a two-dimensional space and the interaction among vehicles in the domain is described by a dispersion model. The reason we consider a multilane domain as a two-dimensional space is that the driving behavior of road users may not be restricted by lanes, especially for motorcyclists. The dispersion model, which is a nonlinear Poisson equation, is derived from car-following theory and the equilibrium assumption. Under the concept that all kinds of users share a finite road section, the density is distributed over the road by the dispersion model. In addition, the dynamic evolution of the traffic flow is determined by the systematic gas-kinetic model derived from the Boltzmann equation. Multiplying the Boltzmann equation by the zeroth-, first- and second-order moment functions, integrating both sides of the equation and using the chain rule, we can derive the continuity, motion and variance equations, respectively. However, the second-order moment function, which is the square of the individual velocity and was employed by previous researchers, does not have a physical meaning in traffic flow. Although the second-order expansion results in the velocity variance equation, additional terms may be generated. The velocity variance equation we propose is derived by multiplying the Boltzmann equation by the individual velocity variance. It modifies the previous model and presents a new gas-kinetic traffic flow model. By coupling the gas-kinetic model and the dispersion model, a self-consistent system is presented.
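
    The moment procedure described above can be written schematically (a generic kinetic-to-macroscopic reduction shown for orientation, not the authors' exact equations; here f(x, v, t) is the phase-space density, V the mean velocity and Q the interaction term):

    ```latex
    \frac{\partial f}{\partial t} + v\,\frac{\partial f}{\partial x} = Q[f],
    \qquad
    \frac{\partial}{\partial t}\int \psi(v)\, f\,\mathrm{d}v
    + \frac{\partial}{\partial x}\int v\,\psi(v)\, f\,\mathrm{d}v
    = \int \psi(v)\, Q[f]\,\mathrm{d}v .
    ```

    Choosing ψ = 1 yields the continuity equation, ψ = v the motion (momentum) equation, and ψ = (v − V)² the velocity variance equation advocated here in place of the conventional second moment ψ = v².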

  8. Detection and quantification of flow consistency in business process models.

    Science.gov (United States)

    Burattin, Andrea; Bernstein, Vered; Neurauter, Manuel; Soffer, Pnina; Weber, Barbara

    2018-01-01

    Business process models abstract complex business processes by representing them as graphical models. Their layout, as determined by the modeler, may have an effect when these models are used. However, this effect is currently not fully understood. In order to systematically study this effect, a basic set of measurable key visual features is proposed, depicting the layout properties that are meaningful to the human user. The aim of this research is thus twofold: first, to empirically identify key visual features of business process models which are perceived as meaningful to the user and second, to show how such features can be quantified into computational metrics, which are applicable to business process models. We focus on one particular feature, consistency of flow direction, and show the challenges that arise when transforming it into a precise metric. We propose three different metrics addressing these challenges, each following a different view of flow consistency. We then report the results of an empirical evaluation, which indicates which metric is more effective in predicting the human perception of this feature. Moreover, two other automatic evaluations describing the performance and the computational capabilities of our metrics are reported as well.
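
    One way a flow-direction metric could be operationalized is sketched below (a hypothetical direction-agreement measure over layout coordinates; it is not one of the paper's three metrics):

    ```python
    # Fraction of edges agreeing with the dominant horizontal flow direction,
    # computed from the modeler's layout coordinates. A value of 1.0 means all
    # edges flow the same way; values near 0.5 indicate inconsistent flow.
    def flow_consistency(edges, pos):
        # edges: list of (src, dst) pairs; pos: node -> (x, y) layout coordinates
        rightward = sum(1 for s, d in edges if pos[d][0] > pos[s][0])
        return max(rightward, len(edges) - rightward) / len(edges)

    pos = {"a": (0, 0), "b": (1, 0), "c": (2, 0), "d": (1, 1)}
    edges = [("a", "b"), ("b", "c"), ("c", "d")]  # last edge flows leftward
    print(flow_consistency(edges, pos))  # → 0.666... (2 of 3 edges agree)
    ```

    Real metrics would also have to handle vertical layouts, gateways and loops, which is exactly the kind of challenge the abstract alludes to.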

  9. Consistency Across Standards or Standards in a New Business Model

    Science.gov (United States)

    Russo, Dane M.

    2010-01-01

    Presentation topics include: standards in a changing business model; the new National Space Policy is driving change; a new paradigm for human spaceflight; consistency across standards; the purpose of standards; the danger of over-prescriptive standards; the need for a balance between prescriptive and general standards; enabling versus inhibiting; characteristics of success-oriented standards; and conclusions. Additional slides cover NASA Procedural Requirements 8705.2B, which identifies human-rating standards and requirements; draft health and medical standards for human rating; what has been done; government oversight models; examples of consistency from anthropometry; examples of inconsistency from air quality; and appendices of governmental and non-governmental human factors standards.

  10. Self-consistent approach for neutral community models with speciation

    Science.gov (United States)

    Haegeman, Bart; Etienne, Rampal S.

    2010-03-01

    Hubbell's neutral model provides a rich theoretical framework to study ecological communities. By incorporating both ecological and evolutionary time scales, it allows us to investigate how communities are shaped by speciation processes. The speciation model in the basic neutral model is particularly simple, describing speciation as a point-mutation event at the birth of a single individual. The stationary species abundance distribution of the basic model, which can be solved exactly, fits empirical distributions of species abundances surprisingly well. More realistic speciation models have been proposed, such as the random-fission model, in which new species appear by splitting up existing species. However, no analytical solution is available for these models, impeding quantitative comparison with data. Here, we present a self-consistent approximation method for neutral community models with various speciation modes, including random fission. We derive explicit formulas for the stationary species abundance distribution, which agree very well with simulations. We expect that our approximation method will be useful for studying other speciation processes in neutral community models as well.

  11. Consistent Partial Least Squares Path Modeling via Regularization

    Directory of Open Access Journals (Sweden)

    Sunho Jung

    2018-02-01

    Full Text Available Partial least squares (PLS) path modeling is a component-based structural equation modeling approach that has been adopted in social and psychological research for its data-analytic capability and flexibility. A recent methodological advance is consistent PLS (PLSc), designed to produce consistent estimates of path coefficients in structural models involving common factors. In practice, however, PLSc may frequently encounter multicollinearity, in part because it estimates path coefficients from consistent correlations among independent latent variables. PLSc as yet has no remedy for this multicollinearity problem, which can cause a loss of statistical power and accuracy in parameter estimation. Thus, a ridge type of regularization is incorporated into PLSc, creating a new technique called regularized PLSc. A comprehensive simulation study is conducted to evaluate the performance of regularized PLSc compared to its non-regularized counterpart in terms of power and accuracy. The results show that regularized PLSc is recommended for use when serious multicollinearity is present.
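
    The ridge idea behind regularized PLSc can be illustrated in isolation (a minimal sketch with synthetic data, not the PLSc algorithm itself): adding λI to the cross-product matrix tames the coefficient explosion caused by nearly collinear predictors.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    n = 200
    x1 = rng.normal(size=n)
    x2 = x1 + 1e-3 * rng.normal(size=n)       # nearly collinear with x1
    X = np.column_stack([x1, x2])
    y = x1 + x2 + 0.1 * rng.normal(size=n)    # true coefficients are (1, 1)

    def ridge(X, y, lam):
        """Ridge estimate: solve (X'X + lam*I) b = X'y."""
        p = X.shape[1]
        return np.linalg.solve(X.T @ X + lam * np.eye(p), X.T @ y)

    b_ols = ridge(X, y, 0.0)    # ill-conditioned: coefficient estimates inflate
    b_reg = ridge(X, y, 1.0)    # regularized: both estimates land close to 1
    ```

    Regularization shrinks the ill-determined direction (x1 − x2) toward zero while leaving the well-determined direction (x1 + x2) nearly untouched, which is the mechanism the paper exploits inside PLSc.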

  12. A Consistent Pricing Model for Index Options and Volatility Derivatives

    DEFF Research Database (Denmark)

    Kokholm, Thomas

    Volatility derivatives and options on the underlying index are priced consistently, while the model allows for jumps in volatility and returns. An affine specification using Lévy processes as building blocks leads to analytically tractable pricing formulas for volatility derivatives, such as VIX options, as well as efficient numerical methods for pricing of European options on the underlying asset. The model has the convenient feature of decoupling the vanilla skews from spot/volatility correlations and allowing for different conditional correlations in large and small spot/volatility moves. We show that our model can simultaneously fit prices of European options on S&P 500 across …

  13. A Consistent Pricing Model for Index Options and Volatility Derivatives

    DEFF Research Database (Denmark)

    Cont, Rama; Kokholm, Thomas

    2013-01-01

    Volatility derivatives and options on the underlying index are priced consistently, while the model allows for jumps in volatility and returns. An affine specification using Lévy processes as building blocks leads to analytically tractable pricing formulas for volatility derivatives, such as VIX options, as well as efficient numerical methods for pricing of European options on the underlying asset. The model has the convenient feature of decoupling the vanilla skews from spot/volatility correlations and allowing for different conditional correlations in large and small spot/volatility moves. We show that our model can simultaneously fit prices of European options on S&P 500 across …

  14. A Consistent Pricing Model for Index Options and Volatility Derivatives

    DEFF Research Database (Denmark)

    Cont, Rama; Kokholm, Thomas

    We propose and study a flexible modeling framework for the joint dynamics of an index and a set of forward variance swap rates written on this index, allowing options on forward variance swaps and options on the underlying index to be priced consistently. Our model reproduces various empirically observed properties of variance swap dynamics and allows for jumps in volatility and returns. An affine specification using Lévy processes as building blocks leads to analytically tractable pricing formulas for options on variance swaps as well as efficient numerical methods for pricing of European options on the underlying asset. The model has the convenient feature of decoupling the vanilla skews from spot/volatility correlations and allowing for different conditional correlations in large and small spot/volatility moves. We show that our model can simultaneously fit prices of European options …

  15. Development of a Consistent and Reproducible Porcine Scald Burn Model

    Science.gov (United States)

    Kempf, Margit; Kimble, Roy; Cuttle, Leila

    2016-01-01

    There are very few porcine burn models that replicate scald injuries similar to those encountered by children. We have developed a robust porcine burn model capable of creating reproducible scald burns over a wide range of burn conditions. The study was conducted with juvenile Large White pigs, creating replicates of burn combinations: 50°C for 1, 2, 5 and 10 minutes, and 60°C, 70°C, 80°C and 90°C for 5 seconds. Visual wound examination, biopsies and Laser Doppler Imaging were performed at 1 and 24 hours and at 3 and 7 days post-burn. A consistent water temperature was maintained within the scald device for long durations (49.8 ± 0.1°C when set at 50°C). The macroscopic and histologic appearance was consistent between replicates of burn conditions. For 50°C water, 10-minute burns showed significantly deeper tissue injury than all shorter durations at 24 hours post-burn (p ≤ 0.0001), with damage seen to increase until day 3 post-burn. For 5-second burns, by day 7 post-burn the 80°C and 90°C scalds had damage detected significantly deeper in the tissue than the 70°C scalds (p ≤ 0.001). A reliable and safe model of porcine scald burn injury has been successfully developed. The novel apparatus with continually refreshed water improves the consistency of scald creation for long exposure times. This model allows the pathophysiology of scald burn wound creation and progression to be examined. PMID:27612153

  16. A Dynamic Linear Hashing Method for Redundancy Management in Train Ethernet Consist Network

    Directory of Open Access Journals (Sweden)

    Xiaobo Nie

    2016-01-01

    Full Text Available Massive transportation systems like trains are considered critical systems because they use the communication network to control essential subsystems on board. A critical system requires zero recovery time when a failure occurs in the communication network. The newly published IEC62439-3 defines the high-availability seamless redundancy (HSR) protocol, which fulfills this requirement and ensures no frame loss in the presence of an error. This paper adopts this protocol for the train Ethernet consist network. The challenge is the management of circulating frames, which must cope with real-time processing requirements, fast switching times, high throughput, and deterministic behavior. The main contributions of this paper are an in-depth analysis of the network parameters imposed by applying the protocol to the train control and monitoring system (TCMS), and a fast method for discarding redundant circulating frames based on dynamic linear hashing.
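
    The core of any redundant-frame discarding scheme can be sketched independently of the paper's specific hash design (a minimal illustration; the class name, window size, and dict-based table are assumptions standing in for the dynamic linear hash table):

    ```python
    # HSR-style duplicate discarding: each frame is sent over both ring ports,
    # so the first copy of a (source, sequence-number) pair is forwarded and
    # the redundant copy is dropped. A dict plays the role of the hash table.

    WINDOW = 4096  # per-source sequence window; older entries are expired

    class DuplicateFilter:
        def __init__(self):
            self.seen = {}  # (src, seq) -> True for recently accepted frames

        def accept(self, src, seq):
            key = (src, seq)
            if key in self.seen:
                return False                    # redundant copy: discard
            self.seen[key] = True
            # expire the entry that has just fallen out of the window
            self.seen.pop((src, seq - WINDOW), None)
            return True

    f = DuplicateFilter()
    assert f.accept("nodeA", 1) is True   # first copy forwarded
    assert f.accept("nodeA", 1) is False  # duplicate from the other port dropped
    assert f.accept("nodeA", 2) is True
    ```

    The engineering questions the paper addresses (table growth, lookup latency, deterministic worst-case behavior) are about replacing this naive table with a dynamically resizing linear hash.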

  17. Thermodynamically consistent model of brittle oil shales under overpressure

    Science.gov (United States)

    Izvekov, Oleg

    2016-04-01

    The concept of dual porosity is a common basis for simulating oil shale production. In this concept the fractured porous medium is treated as a superposition of two permeable continua with mass exchange. As a rule, the concept does not account for such well-known phenomena as slip along natural fractures, overpressure in the low-permeability matrix, and so on. Overpressure can lead to the development of secondary fractures in the low-permeability matrix during drilling and during the pressure reduction that accompanies production. In this work a new thermodynamically consistent model that generalizes the dual-porosity model is proposed. Its particular features are as follows: the set of natural fractures is treated as a permeable continuum; damage mechanics is applied to simulate the development of secondary fractures in the low-permeability matrix; and slip along natural fractures is simulated within plasticity theory with the Drucker-Prager criterion.

  18. Large scale Bayesian nuclear data evaluation with consistent model defects

    International Nuclear Information System (INIS)

    Schnabel, G

    2015-01-01

    Monte Carlo sampling schemes of available evaluation methods. The second improvement concerns Bayesian evaluation methods based on a certain simplification of the nuclear model. These methods were restricted to the consistent evaluation of tens of thousands of observables. In this thesis, a new evaluation scheme has been developed, which is mathematically equivalent to existing methods, but allows the consistent evaluation of dozens of millions of observables. The new scheme is suited for the implementation as a database application. The realization of such an application with public access can help to accelerate the production of reliable nuclear data sets. Furthermore, in combination with the novel treatment of model deficiencies, problems of the model and the experimental data can be tracked down without user interaction. This feature can foster the development of nuclear models with high predictive power. (author) [de

  19. Analytic Intermodel Consistent Modeling of Volumetric Human Lung Dynamics.

    Science.gov (United States)

    Ilegbusi, Olusegun; Seyfi, Behnaz; Neylon, John; Santhanam, Anand P

    2015-10-01

    The human lung undergoes breathing-induced deformation in the form of inhalation and exhalation. Modeling the dynamics is numerically complicated by the lack of information on lung elastic behavior and on the fluid-structure interactions between air and tissue. A mathematical method is developed to integrate deformation results from deformable image registration (DIR) and physics-based modeling approaches in order to represent consistent volumetric lung dynamics. The computational fluid dynamics (CFD) simulation assumes the lung is a poro-elastic medium with a spatially distributed elastic property. Simulation is performed on a 3D lung geometry reconstructed from a four-dimensional computed tomography (4DCT) dataset of a human subject. The heterogeneous Young's modulus (YM) is estimated from a linear elastic deformation model with the same lung geometry and the 4D lung DIR. The deformation obtained from the CFD is then coupled with the displacement obtained from the 4D lung DIR by means of the Tikhonov regularization (TR) algorithm. The numerical results include 4DCT registration, CFD, and optimal displacement data, which collectively provide a consistent estimate of the volumetric lung dynamics. The fusion method is validated by comparing the optimal displacement with the results obtained from the 4DCT registration.
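
    When the two displacement estimates are weighted pointwise, a Tikhonov-regularized fusion of this kind admits a simple closed form (a toy 1-D sketch; the field values and the scalar weight λ are illustrative assumptions, not the paper's data or its full algorithm):

    ```python
    import numpy as np

    # Hypothetical displacement samples from the two sources
    u_dir = np.array([1.0, 2.0, 3.0])   # from deformable image registration (4DCT DIR)
    u_cfd = np.array([0.8, 2.4, 2.9])   # from the physics-based CFD simulation

    def tikhonov_fuse(u_dir, u_cfd, lam):
        # argmin_u ||u - u_dir||^2 + lam * ||u - u_cfd||^2
        # has the closed-form pointwise solution below
        return (u_dir + lam * u_cfd) / (1.0 + lam)

    u_opt = tikhonov_fuse(u_dir, u_cfd, lam=1.0)
    print(u_opt)  # → [0.9  2.2  2.95] (the two fields averaged when lam == 1)
    ```

    Larger λ pulls the fused field toward the physics-based estimate; λ → 0 recovers the registration result, which is how the trade-off between data fidelity and physical plausibility is tuned.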

  20. A model for cytoplasmic rheology consistent with magnetic twisting cytometry.

    Science.gov (United States)

    Butler, J P; Kelly, S M

    1998-01-01

    Magnetic twisting cytometry is gaining wide applicability as a tool for the investigation of the rheological properties of cells and the mechanical properties of receptor-cytoskeletal interactions. Current technology involves the application and release of magnetically induced torques on small magnetic particles bound to or inside cells, with measurements of the resulting angular rotation of the particles. The properties of purely elastic or purely viscous materials can be determined by the angular strain and strain rate, respectively. However, the cytoskeleton and its linkage to cell surface receptors display elastic, viscous, and even plastic deformation, and the simultaneous characterization of these properties using only elastic or viscous models is internally inconsistent. Data interpretation is complicated by the fact that in current technology, the applied torques are not constant in time, but decrease as the particles rotate. This paper describes an internally consistent model consisting of a parallel viscoelastic element in series with a parallel viscoelastic element, and one approach to quantitative parameter evaluation. The unified model reproduces all essential features seen in data obtained from a wide variety of cell populations, and contains the pure elastic, viscoelastic, and viscous cases as subsets.

  1. A self-consistent model of an isothermal tokamak

    Science.gov (United States)

    McNamara, Steven; Lilley, Matthew

    2014-10-01

    Continued progress in liquid lithium coating technologies has made the development of a beam-driven tokamak with minimal edge recycling a feasible possibility. Such devices are characterised by improved confinement due to their inherent stability and the suppression of thermal conduction. Particle and energy confinement become intrinsically linked, and the plasma thermal energy content is governed by the injected beam. A self-consistent model of a purely beam-fuelled isothermal tokamak is presented, including calculations of the density profile, bulk species temperature ratios and the fusion output. Stability considerations constrain the operating parameters; regions of stable operation are identified and their suitability for potential reactor applications discussed.

  2. Mean-field theory and self-consistent dynamo modeling

    International Nuclear Information System (INIS)

    Yoshizawa, Akira; Yokoi, Nobumitsu

    2001-12-01

    Mean-field dynamo theory is discussed with emphasis on the statistical formulation of turbulence effects on the magnetohydrodynamic equations and the construction of a self-consistent dynamo model. The dynamo mechanism is sought in the combination of the turbulent residual-helicity and cross-helicity effects. On the basis of this mechanism, discussions are made of the generation of planetary magnetic fields such as the geomagnetic field and sunspots, and of the occurrence of flow driven by magnetic fields in planetary and fusion phenomena. (author)

  3. A self-consistent spin-diffusion model for micromagnetics

    KAUST Repository

    Abert, Claas; Ruggeri, Michele; Bruckner, Florian; Vogler, Christoph; Manchon, Aurelien; Praetorius, Dirk; Suess, Dieter

    2016-01-01

    We propose a three-dimensional micromagnetic model that dynamically solves the Landau-Lifshitz-Gilbert equation coupled to the full spin-diffusion equation. In contrast to previous methods, we solve for the magnetization dynamics and the electric potential in a self-consistent fashion. This treatment allows for an accurate description of magnetization dependent resistance changes. Moreover, the presented algorithm describes both spin accumulation due to smooth magnetization transitions and due to material interfaces as in multilayer structures. The model and its finite-element implementation are validated by current driven motion of a magnetic vortex structure. In a second experiment, the resistivity of a magnetic multilayer structure in dependence of the tilting angle of the magnetization in the different layers is investigated. Both examples show good agreement with reference simulations and experiments respectively.

  4. Self-consistent modeling of amorphous silicon devices

    International Nuclear Information System (INIS)

    Hack, M.

    1987-01-01

    The authors developed a computer model to describe the steady-state behaviour of a range of amorphous silicon devices. It is based on the complete set of transport equations and takes into account the important role played by the continuous distribution of localized states in the mobility gap of amorphous silicon. Using one set of parameters they have been able to self-consistently simulate the current-voltage characteristics of p-i-n (or n-i-p) solar cells under illumination, the dark behaviour of field-effect transistors, p-i-n diodes and n-i-n diodes in both the ohmic and space charge limited regimes. This model also describes the steady-state photoconductivity of amorphous silicon, in particular, its dependence on temperature, doping and illumination intensity

  5. A self-consistent spin-diffusion model for micromagnetics

    KAUST Repository

    Abert, Claas

    2016-12-17

We propose a three-dimensional micromagnetic model that dynamically solves the Landau-Lifshitz-Gilbert equation coupled to the full spin-diffusion equation. In contrast to previous methods, we solve for the magnetization dynamics and the electric potential in a self-consistent fashion. This treatment allows for an accurate description of magnetization-dependent resistance changes. Moreover, the presented algorithm describes both spin accumulation due to smooth magnetization transitions and due to material interfaces as in multilayer structures. The model and its finite-element implementation are validated by the current-driven motion of a magnetic vortex structure. In a second experiment, the resistivity of a magnetic multilayer structure is investigated as a function of the tilting angle of the magnetization in the different layers. Both examples show good agreement with reference simulations and experiments, respectively.
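The two records above both hinge on solving two coupled equations "in a self-consistent fashion." As an illustrative sketch only (not the authors' finite-element code), the underlying idea is a damped fixed-point iteration; the scalar equation x = cos(x) below is a stand-in for the coupled magnetization/potential update.

```python
import math

def self_consistent(update, x0, damping=0.5, tol=1e-10, max_iter=1000):
    """Damped fixed-point iteration: x <- (1 - d) * x + d * update(x)."""
    x = x0
    for _ in range(max_iter):
        x_new = (1 - damping) * x + damping * update(x)
        if abs(x_new - x) < tol:
            return x_new
        x = x_new
    raise RuntimeError("fixed-point iteration did not converge")

# Toy stand-in for the coupled solve: find x with x = cos(x).
x_star = self_consistent(math.cos, x0=1.0)
print(x_star)  # converges to the Dottie number, approximately 0.739085
```

In the actual model, `update` would be one sweep of the spin-diffusion solve given the current magnetization, and damping guards against the oscillations that plague naive alternation.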

  6. Self-consistent modeling of electron cyclotron resonance ion sources

    International Nuclear Information System (INIS)

    Girard, A.; Hitz, D.; Melin, G.; Serebrennikov, K.; Lecot, C.

    2004-01-01

In order to predict the performance of electron cyclotron resonance ion sources (ECRIS), it is necessary to model the different parts of these sources accurately: (i) the magnetic configuration; (ii) the plasma characteristics; (iii) the extraction system. The magnetic configuration is easily calculated via commercial codes; different codes also simulate the ion extraction, either in two dimensions or even in three dimensions (to take into account the shape of the plasma at the extraction, which is influenced by the hexapole). However, the characteristics of the plasma are not always well understood. This article describes the self-consistent modeling of ECRIS: we have developed a code which takes into account the most important construction parameters: the size of the plasma (length, diameter), the mirror ratio and axial magnetic profile, and whether a biased probe is installed or not. These input parameters feed a self-consistent code, which calculates the characteristics of the plasma: electron density and energy, charge state distribution, and plasma potential. The code is briefly described, and some of its most interesting results are presented. Comparisons are made between the calculations and the results obtained experimentally.

  7. Self-consistent modeling of electron cyclotron resonance ion sources

    Science.gov (United States)

    Girard, A.; Hitz, D.; Melin, G.; Serebrennikov, K.; Lécot, C.

    2004-05-01

In order to predict the performance of electron cyclotron resonance ion sources (ECRIS), it is necessary to model the different parts of these sources accurately: (i) the magnetic configuration; (ii) the plasma characteristics; (iii) the extraction system. The magnetic configuration is easily calculated via commercial codes; different codes also simulate the ion extraction, either in two dimensions or even in three dimensions (to take into account the shape of the plasma at the extraction, which is influenced by the hexapole). However, the characteristics of the plasma are not always well understood. This article describes the self-consistent modeling of ECRIS: we have developed a code which takes into account the most important construction parameters: the size of the plasma (length, diameter), the mirror ratio and axial magnetic profile, and whether a biased probe is installed or not. These input parameters feed a self-consistent code, which calculates the characteristics of the plasma: electron density and energy, charge state distribution, and plasma potential. The code is briefly described, and some of its most interesting results are presented. Comparisons are made between the calculations and the results obtained experimentally.

  8. Texture synthesis using convolutional neural networks with long-range consistency and spectral constraints

    NARCIS (Netherlands)

    Schreiber, Shaun; Geldenhuys, Jaco; Villiers, De Hendrik

    2017-01-01

Procedural texture generation enables the creation of richer and more detailed virtual environments without the help of an artist. However, finding a flexible generative model of real-world textures remains an open problem. We present a novel Convolutional Neural Network based texture model

  9. Attractive target wave patterns in complex networks consisting of excitable nodes

    International Nuclear Information System (INIS)

    Zhang Li-Sheng; Mi Yuan-Yuan; Liao Xu-Hong; Qian Yu; Hu Gang

    2014-01-01

This review describes investigations of oscillatory complex networks consisting of excitable nodes, focusing on target wave patterns, or target wave attractors. A method of dominant phase-advanced driving (DPAD) is introduced to reveal the dynamic structures in the networks supporting oscillations, such as the oscillation sources and the main excitation propagation paths from the sources to the whole networks. The target center nodes and their drivers are regarded as the key nodes, which can completely determine the corresponding target wave patterns. Therefore, the center (say node A) and its driver (say node B) of a target wave can be used as a label, (A,B), of the given target pattern. The label provides a clue to conveniently retrieve, suppress, and control the target waves. Statistical investigations, both theoretical from the label analysis and numerical from direct simulations of network dynamics, show that there exist huge numbers of target wave attractors in excitable complex networks if the system size is large, and all these attractors can be labeled and easily controlled based on the information given by the labels. The possible applications of these physical ideas and mathematical methods concerning the multiplicity and labelability of attractors to memory problems of neural networks are briefly discussed. (topical review - statistical physics and complex systems)
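The labeling idea above can be made concrete with a toy count (the network below is hypothetical, not from the review): if every neighbour B of a center node A can serve as its driver, each (A,B) pair is one candidate target-wave label, so the number of labels grows with the total degree of the network.

```python
def possible_labels(adj):
    """Enumerate (center A, driver B) pairs, B a neighbour of A, as candidate labels."""
    return [(a, b) for a, neigh in adj.items() for b in neigh]

# Small ring network as a toy stand-in for an excitable complex network.
adj = {0: {1, 4}, 1: {0, 2}, 2: {1, 3}, 3: {2, 4}, 4: {3, 0}}
labels = possible_labels(adj)
print(len(labels))  # 10 labels: twice the number of undirected links
```

This is only an upper bound on the attractor count; the review's point is that such labels let individual target patterns be retrieved and suppressed directly.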

  10. A self-consistent upward leader propagation model

    International Nuclear Information System (INIS)

    Becerra, Marley; Cooray, Vernon

    2006-01-01

Knowledge of the initiation and propagation of an upward-moving connecting leader in the presence of a downward-moving lightning stepped leader is essential for determining the lateral attraction distance of a lightning flash by any grounded structure. Even though different models that simulate this phenomenon are available in the literature, they do not take into account the latest developments in the physics of leader discharges. The leader model proposed here simulates the advancement of positive upward leaders by appealing to the presently understood physics of that process. The model properly simulates the continuous upward progression of positive connecting leaders from their inception to the final connection with the downward stepped leader (final jump). Thus, the main physical properties of upward leaders, namely the charge per unit length, the injected current, the channel gradient and the leader velocity, are self-consistently obtained. The obtained results are compared with an altitude-triggered lightning experiment, and there is good agreement between the model predictions and both the measured leader current and the experimentally inferred spatial and temporal location of the final jump. It is also found that the usual assumption of constant charge per unit length, based on laboratory experiments, is not valid for lightning upward connecting leaders.

  11. Evaluation of EOR Processes Using Network Models

    DEFF Research Database (Denmark)

    Winter, Anatol; Larsen, Jens Kjell; Krogsbøll, Anette

    1998-01-01

The report consists of the following parts: 1) Studies of wetting properties of model fluids and fluid mixtures aimed at an optimal selection of candidates for micromodel experiments. 2) Experimental studies of multiphase transport properties using physical models of porous networks (micromodels), including estimation of their "petrophysical" properties (e.g. absolute permeability). 3) Mathematical modelling and computer studies of multiphase transport through pore space using mathematical network models. 4) Investigation of the link between pore-scale and macroscopic recovery mechanisms.

  12. Statistical Models for Social Networks

    NARCIS (Netherlands)

    Snijders, Tom A. B.; Cook, KS; Massey, DS

    2011-01-01

    Statistical models for social networks as dependent variables must represent the typical network dependencies between tie variables such as reciprocity, homophily, transitivity, etc. This review first treats models for single (cross-sectionally observed) networks and then for network dynamics. For

  13. Classical and Quantum Consistency of the DGP Model

    CERN Document Server

    Nicolis, A; Nicolis, Alberto; Rattazzi, Riccardo

    2004-01-01

We study the Dvali-Gabadadze-Porrati model by the method of the boundary effective action. The truncation of this action to the bending mode π consistently describes physics in a wide range of regimes, both at the classical and at the quantum level. The Vainshtein effect, which restores agreement with precise tests of general relativity, follows straightforwardly. We give a simple and general proof of stability, i.e. absence of ghosts in the fluctuations, valid for most of the relevant cases, like for instance the spherical source in asymptotically flat space. However, we confirm that around certain interesting self-accelerating cosmological solutions there is a ghost. We consider the issue of quantum corrections. Around flat space π becomes strongly coupled below a macroscopic length of 1000 km, thus impairing the predictivity of the model. Indeed the tower of higher dimensional operators which is expected by a generic UV completion of the model limits predictivity at even larger length scales. We outline ...

  14. Self-Consistent Dynamical Model of the Broad Line Region

    Energy Technology Data Exchange (ETDEWEB)

    Czerny, Bozena [Center for Theoretical Physics, Polish Academy of Sciences, Warsaw (Poland); Li, Yan-Rong [Key Laboratory for Particle Astrophysics, Institute of High Energy Physics, Chinese Academy of Sciences, Beijing (China); Sredzinska, Justyna; Hryniewicz, Krzysztof [Copernicus Astronomical Center, Polish Academy of Sciences, Warsaw (Poland); Panda, Swayam [Center for Theoretical Physics, Polish Academy of Sciences, Warsaw (Poland); Copernicus Astronomical Center, Polish Academy of Sciences, Warsaw (Poland); Wildy, Conor [Center for Theoretical Physics, Polish Academy of Sciences, Warsaw (Poland); Karas, Vladimir, E-mail: bcz@cft.edu.pl [Astronomical Institute, Czech Academy of Sciences, Prague (Czech Republic)

    2017-06-22

    We develop a self-consistent description of the Broad Line Region based on the concept of a failed wind powered by radiation pressure acting on a dusty accretion disk atmosphere in Keplerian motion. The material raised high above the disk is illuminated, dust evaporates, and the matter falls back toward the disk. This material is the source of emission lines. The model predicts the inner and outer radius of the region, the cloud dynamics under the dust radiation pressure and, subsequently, the gravitational field of the central black hole, which results in asymmetry between the rise and fall. Knowledge of the dynamics allows us to predict the shapes of the emission lines as functions of the basic parameters of an active nucleus: black hole mass, accretion rate, black hole spin (or accretion efficiency) and the viewing angle with respect to the symmetry axis. Here we show preliminary results based on analytical approximations to the cloud motion.

  15. Self-Consistent Dynamical Model of the Broad Line Region

    Directory of Open Access Journals (Sweden)

    Bozena Czerny

    2017-06-01

We develop a self-consistent description of the Broad Line Region based on the concept of a failed wind powered by radiation pressure acting on a dusty accretion disk atmosphere in Keplerian motion. The material raised high above the disk is illuminated, dust evaporates, and the matter falls back toward the disk. This material is the source of emission lines. The model predicts the inner and outer radius of the region, the cloud dynamics under the dust radiation pressure and, subsequently, the gravitational field of the central black hole, which results in asymmetry between the rise and fall. Knowledge of the dynamics allows us to predict the shapes of the emission lines as functions of the basic parameters of an active nucleus: black hole mass, accretion rate, black hole spin (or accretion efficiency) and the viewing angle with respect to the symmetry axis. Here we show preliminary results based on analytical approximations to the cloud motion.

  16. Consistent constraints on the Standard Model Effective Field Theory

    International Nuclear Information System (INIS)

    Berthier, Laure; Trott, Michael

    2016-01-01

    We develop the global constraint picture in the (linear) effective field theory generalisation of the Standard Model, incorporating data from detectors that operated at PEP, PETRA, TRISTAN, SpS, Tevatron, SLAC, LEPI and LEP II, as well as low energy precision data. We fit one hundred and three observables. We develop a theory error metric for this effective field theory, which is required when constraints on parameters at leading order in the power counting are to be pushed to the percent level, or beyond, unless the cut off scale is assumed to be large, Λ≳ 3 TeV. We more consistently incorporate theoretical errors in this work, avoiding this assumption, and as a direct consequence bounds on some leading parameters are relaxed. We show how an S,T analysis is modified by the theory errors we include as an illustrative example.
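The "theory error metric" above can be illustrated with a minimal chi-square sketch using made-up numbers (this is not the paper's actual metric): adding a theory uncertainty in quadrature to the experimental one deflates each observable's pull, which is why bounds on leading parameters relax.

```python
def chi2(obs, pred, exp_err, th_err):
    """Chi-square with theory errors combined in quadrature with experimental ones."""
    total = 0.0
    for o, p, se, st in zip(obs, pred, exp_err, th_err):
        total += (o - p) ** 2 / (se ** 2 + st ** 2)
    return total

# Hypothetical observables: two measurements 2-sigma away from the prediction.
obs, pred = [1.02, 0.98], [1.00, 1.00]
no_theory = chi2(obs, pred, [0.01, 0.01], [0.0, 0.0])
with_theory = chi2(obs, pred, [0.01, 0.01], [0.02, 0.02])
print(no_theory, with_theory)  # roughly 8.0 versus 1.6
```

With the theory error included, the same data exert far less tension, mirroring the paper's observation that neglecting theory errors overstates percent-level constraints.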

  17. Thermodynamically consistent mesoscopic model of the ferro/paramagnetic transition

    Czech Academy of Sciences Publication Activity Database

    Benešová, Barbora; Kružík, Martin; Roubíček, Tomáš

    2013-01-01

Vol. 64, No. 1 (2013), pp. 1-28. ISSN 0044-2275. R&D Projects: GA AV ČR IAA100750802; GA ČR GA106/09/1573; GA ČR GAP201/10/0357. Other grants: GA ČR(CZ) GA106/08/1397; GA MŠk(CZ) LC06052. Programs: GA; LC. Institutional support: RVO:67985556. Keywords: ferro-para-magnetism; evolution; thermodynamics. Subject RIV: BA - General Mathematics. Impact factor: 1.214, year: 2013. http://library.utia.cas.cz/separaty/2012/MTR/kruzik-thermodynamically consistent mesoscopic model of the ferro-paramagnetic transition.pdf

  18. Creation of Consistent Burn Wounds: A Rat Model

    Directory of Open Access Journals (Sweden)

    Elijah Zhengyang Cai

    2014-07-01

Background: Burn infliction techniques are poorly described in rat models. An accurate study can only be achieved with wounds that are uniform in size and depth. We describe a simple, reproducible method for creating consistent burn wounds in rats. Methods: Ten male Sprague-Dawley rats were anesthetized and their dorsum shaved. A 100 g cylindrical stainless-steel rod (1 cm diameter) was heated to 100℃ in boiling water. Temperature was monitored using a thermocouple. We performed two consecutive toe-pinch tests on different limbs to assess the depth of sedation. Burn infliction was limited to the loin. The skin was pulled upwards, away from the underlying viscera, creating a flat surface. The rod rested on its own weight for 5, 10, and 20 seconds at three different sites on each rat. Wounds were evaluated for size, morphology and depth. Results: Average wound size was 0.9957 cm2 (standard deviation [SD] 0.1845; n=30). Wounds created with a duration of 5 seconds were pale, with an indistinct margin of erythema. Wounds of 10 and 20 seconds were well-defined, uniformly brown with a rim of erythema. Average depths of tissue damage were 1.30 mm (SD 0.424), 2.35 mm (SD 0.071), and 2.60 mm (SD 0.283) for durations of 5, 10, and 20 seconds, respectively. Burn duration of 5 seconds resulted in partial-thickness damage. Burn durations of 10 seconds and 20 seconds resulted in full-thickness damage, involving subjacent skeletal muscle. Conclusions: This is a simple, reproducible method for creating burn wounds consistent in size and depth in a rat burn model.

  19. Self-Consistent Scheme for Spike-Train Power Spectra in Heterogeneous Sparse Networks.

    Science.gov (United States)

    Pena, Rodrigo F O; Vellmer, Sebastian; Bernardi, Davide; Roque, Antonio C; Lindner, Benjamin

    2018-01-01

    Recurrent networks of spiking neurons can be in an asynchronous state characterized by low or absent cross-correlations and spike statistics which resemble those of cortical neurons. Although spatial correlations are negligible in this state, neurons can show pronounced temporal correlations in their spike trains that can be quantified by the autocorrelation function or the spike-train power spectrum. Depending on cellular and network parameters, correlations display diverse patterns (ranging from simple refractory-period effects and stochastic oscillations to slow fluctuations) and it is generally not well-understood how these dependencies come about. Previous work has explored how the single-cell correlations in a homogeneous network (excitatory and inhibitory integrate-and-fire neurons with nearly balanced mean recurrent input) can be determined numerically from an iterative single-neuron simulation. Such a scheme is based on the fact that every neuron is driven by the network noise (i.e., the input currents from all its presynaptic partners) but also contributes to the network noise, leading to a self-consistency condition for the input and output spectra. Here we first extend this scheme to homogeneous networks with strong recurrent inhibition and a synaptic filter, in which instabilities of the previous scheme are avoided by an averaging procedure. We then extend the scheme to heterogeneous networks in which (i) different neural subpopulations (e.g., excitatory and inhibitory neurons) have different cellular or connectivity parameters; (ii) the number and strength of the input connections are random (Erdős-Rényi topology) and thus different among neurons. In all heterogeneous cases, neurons are lumped in different classes each of which is represented by a single neuron in the iterative scheme; in addition, we make a Gaussian approximation of the input current to the neuron. These approximations seem to be justified over a broad range of parameters as

  20. Self-Consistent Scheme for Spike-Train Power Spectra in Heterogeneous Sparse Networks

    Directory of Open Access Journals (Sweden)

    Rodrigo F. O. Pena

    2018-03-01

Recurrent networks of spiking neurons can be in an asynchronous state characterized by low or absent cross-correlations and spike statistics which resemble those of cortical neurons. Although spatial correlations are negligible in this state, neurons can show pronounced temporal correlations in their spike trains that can be quantified by the autocorrelation function or the spike-train power spectrum. Depending on cellular and network parameters, correlations display diverse patterns (ranging from simple refractory-period effects and stochastic oscillations to slow fluctuations) and it is generally not well-understood how these dependencies come about. Previous work has explored how the single-cell correlations in a homogeneous network (excitatory and inhibitory integrate-and-fire neurons with nearly balanced mean recurrent input) can be determined numerically from an iterative single-neuron simulation. Such a scheme is based on the fact that every neuron is driven by the network noise (i.e., the input currents from all its presynaptic partners) but also contributes to the network noise, leading to a self-consistency condition for the input and output spectra. Here we first extend this scheme to homogeneous networks with strong recurrent inhibition and a synaptic filter, in which instabilities of the previous scheme are avoided by an averaging procedure. We then extend the scheme to heterogeneous networks in which (i) different neural subpopulations (e.g., excitatory and inhibitory neurons) have different cellular or connectivity parameters; (ii) the number and strength of the input connections are random (Erdős-Rényi topology) and thus different among neurons. In all heterogeneous cases, neurons are lumped in different classes each of which is represented by a single neuron in the iterative scheme; in addition, we make a Gaussian approximation of the input current to the neuron. These approximations seem to be justified over a broad range of
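The self-consistency condition described in these two records — the output spectrum is fed back as input noise until input and output agree — can be caricatured with a linear toy model (an assumption for illustration, far simpler than spiking neurons):

```python
def iterate_spectrum(s0, gain, n_iter=200):
    """Feed the output spectrum back as input noise until it is self-consistent.

    Toy linear 'network': S_out(f) = S0(f) + gain**2 * S_in(f).
    The fixed point is S0(f) / (1 - gain**2) for |gain| < 1.
    """
    s = [0.0] * len(s0)
    for _ in range(n_iter):
        s = [a + gain * gain * b for a, b in zip(s0, s)]
    return s

s0 = [1.0, 2.0, 4.0]          # intrinsic spectrum at three frequencies
s_star = iterate_spectrum(s0, gain=0.5)
print(s_star)                  # each entry approaches S0 / 0.75
```

The papers' scheme plays the same game with a full integrate-and-fire simulation in place of the linear map, plus averaging to tame instabilities.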

  1. Using complex networks to quantify consistency in the use of words

    International Nuclear Information System (INIS)

    Amancio, D R; Oliveira Jr, O N; Costa, L da F

    2012-01-01

    In this paper we have quantified the consistency of word usage in written texts represented by complex networks, where words were taken as nodes, by measuring the degree of preservation of the node neighborhood. Words were considered highly consistent if the authors used them with the same neighborhood. When ranked according to the consistency of use, the words obeyed a log-normal distribution, in contrast to Zipf's law that applies to the frequency of use. Consistency correlated positively with the familiarity and frequency of use, and negatively with ambiguity and age of acquisition. An inspection of some highly consistent words confirmed that they are used in very limited semantic contexts. A comparison of consistency indices for eight authors indicated that these indices may be employed for author recognition. Indeed, as expected, authors of novels could be distinguished from those who wrote scientific texts. Our analysis demonstrated the suitability of the consistency indices, which can now be applied in other tasks, such as emotion recognition
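A minimal sketch of the neighbourhood-preservation idea above (the adjacency window and Jaccard overlap are simplifying assumptions, not necessarily the authors' exact construction): a word is consistent when its network neighbourhood is the same across texts.

```python
from collections import defaultdict

def neighbourhoods(words):
    """Word-adjacency network: each word's neighbours are the words next to it."""
    neigh = defaultdict(set)
    for a, b in zip(words, words[1:]):
        neigh[a].add(b)
        neigh[b].add(a)
    return neigh

def consistency(word, text_a, text_b):
    """Jaccard overlap of the word's neighbourhoods in two texts."""
    na, nb = neighbourhoods(text_a)[word], neighbourhoods(text_b)[word]
    return len(na & nb) / len(na | nb) if na | nb else 0.0

t1 = "the cat sat on the mat".split()
t2 = "the cat lay on the mat".split()
print(consistency("cat", t1, t2))  # 1/3: 'the' is shared, 'sat' vs 'lay' differ
```

Ranking words by such a score, computed within one author's corpus, is the kind of index the paper uses for author recognition.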

  2. Consistent model reduction of polymer chains in solution in dissipative particle dynamics: Model description

    KAUST Repository

    Moreno Chaparro, Nicolas; Nunes, Suzana Pereira; Calo, Victor M.

    2015-01-01

    considerations we explicitly account for the correlation between beads in fine-grained DPD models and consistently represent the effect of these correlations in a reduced model, in a practical and simple fashion via power laws and the consistent scaling

  3. Modeling, Optimization & Control of Hydraulic Networks

    DEFF Research Database (Denmark)

    Tahavori, Maryamsadat

    2014-01-01

Water supply systems consist of a number of pumping stations, which deliver water to the customers via pipeline networks and elevated reservoirs. A huge amount of drinking water is lost before it reaches end-users due to leakage in pipe networks. A cost-effective solution to reduce leakage in a water network is pressure management. By reducing the pressure in the water network, the leakage can be reduced significantly. It also reduces the amount of energy consumption in water networks. The primary purpose of this work is to develop control algorithms for pressure control in water supply systems. The nonlinear network model is derived based on circuit theory. A suitable projection is used to reduce the state vector and to express the model in standard state-space form. Then, the controllability of nonlinear nonaffine hydraulic networks is studied. The Lie algebra-based controllability matrix is used …
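The pressure-leakage link motivating this thesis can be sketched with the standard power-law leakage model; the coefficient and exponent below are hypothetical placeholders, not values from the work.

```python
def leakage(pressure, c=1.0, n1=1.1):
    """Pressure-driven leakage q = C * p**N1 (N1 typically ~0.5-1.5 by pipe material)."""
    return c * pressure ** n1

# Effect of lowering network pressure from 40 m to 30 m of head (hypothetical units).
before, after = leakage(40.0), leakage(30.0)
saving = 1 - after / before
print(saving)  # a 25% pressure cut saves more than 25% of the leakage when N1 > 1
```

This superlinear dependence is why active pressure control, rather than pipe replacement alone, is the cost-effective lever the abstract describes.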

  4. Neural network consistent empirical physical formula construction for neutron–gamma discrimination in gamma ray tracking

    International Nuclear Information System (INIS)

    Yildiz, Nihat; Akkoyun, Serkan

    2013-01-01

Highlights: ► Detector responses in neutron–gamma discrimination were estimated by neural networks. ► Novel, consistent neural network empirical physical formulas (EPFs) were constructed for detector responses. ► The EPFs are of explicit mathematical functional form. ► The EPFs can be used to derive various physical functions relevant to neutron–gamma discrimination in gamma ray tracking. -- Abstract: Gamma ray tracking is an efficient detection technique for studying exotic nuclei which lie far from the beta stability line. To achieve very powerful and extraordinary resolution ability, new detectors based on gamma ray tracking are currently being developed. Toward this goal, the neutron–gamma discrimination in these detectors is also an important task. In this paper, by suitable layered feedforward neural networks (LFNNs), we have constructed novel and consistent empirical physical formulas (EPFs) for some highly nonlinear detector counts measured in neutron–gamma discrimination. The detector counts data used in the discrimination were borrowed from our previous paper. The counts used here had originally been measured versus the following parameters: energy deposited in the first interaction points, difference in the incoming direction of initial gamma rays, and finally figure-of-merit values of the clusters determined by tracking. The LFNN–EPFs are of explicit mathematical functional form. Therefore, by suitable operations of mathematical analysis, these LFNN–EPFs can be used to derive further physical functions which might be potentially relevant to the neutron–gamma discrimination performance of gamma ray tracking.
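What an "empirical physical formula of explicit mathematical functional form" means can be sketched as a one-hidden-layer feedforward network written out as a closed-form expression; the weights below are hypothetical, not the values fitted to the paper's detector counts.

```python
import math

def lfnn_epf(x, w1, b1, w2, b2):
    """One-hidden-layer feedforward net as an explicit formula:
    y = sum_j w2[j] * tanh(w1[j] * x + b1[j]) + b2
    Once trained, this closed form can be differentiated or integrated analytically."""
    hidden = [math.tanh(w * x + b) for w, b in zip(w1, b1)]
    return sum(v * h for v, h in zip(w2, hidden)) + b2

# Hypothetical trained weights mapping a detector parameter x to a count estimate.
y = lfnn_epf(0.5, w1=[1.0, -2.0], b1=[0.0, 0.5], w2=[0.3, 0.7], b2=0.1)
print(y)
```

Because the formula is explicit, derived quantities (slopes, integrals over a parameter range) follow by ordinary calculus — the point the highlights emphasize.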

  5. Tool wear modeling using abductive networks

    Science.gov (United States)

    Masory, Oren

    1992-09-01

A tool wear model based on Abductive Networks, which consist of a network of `polynomial' nodes, is described. The model relates the cutting parameters, components of the cutting force, and machining time to flank wear. Thus real-time measurements of the cutting force can be used to monitor the machining process. The model is obtained by a training process in which the connectivity between the network's nodes and the polynomial coefficients of each node are determined by optimizing a performance criterion. Actual wear measurements of coated and uncoated carbide inserts were used for training and evaluating the established model.
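A sketch of the `polynomial' node that abductive (GMDH-style) networks compose into layers; the inputs and coefficients below are hypothetical stand-ins, not the trained values from the study.

```python
def poly_node(x1, x2, coeff):
    """GMDH-style 'polynomial' node: a quadratic in two inputs."""
    a0, a1, a2, a3, a4, a5 = coeff
    return a0 + a1 * x1 + a2 * x2 + a3 * x1 * x2 + a4 * x1 ** 2 + a5 * x2 ** 2

# Two stacked nodes: force components combine, then combine with machining time,
# to give a flank-wear estimate (all coefficients hypothetical).
force_feature = poly_node(1.0, 2.0, (0, 1, 1, 0, 0, 0))          # -> 3.0
wear = poly_node(force_feature, 3.0, (0.1, 0.2, 0.3, 0, 0, 0))   # -> about 1.6
print(wear)
```

Training, as the abstract notes, consists of choosing both which nodes feed which and the six coefficients of each node against a performance criterion.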

  6. Coevolutionary modeling in network formation

    KAUST Repository

    Al-Shyoukh, Ibrahim

    2014-12-03

    Network coevolution, the process of network topology evolution in feedback with dynamical processes over the network nodes, is a common feature of many engineered and natural networks. In such settings, the change in network topology occurs at a comparable time scale to nodal dynamics. Coevolutionary modeling offers the possibility to better understand how and why network structures emerge. For example, social networks can exhibit a variety of structures, ranging from almost uniform to scale-free degree distributions. While current models of network formation can reproduce these structures, coevolutionary modeling can offer a better understanding of the underlying dynamics. This paper presents an overview of recent work on coevolutionary models of network formation, with an emphasis on the following three settings: (i) dynamic flow of benefits and costs, (ii) transient link establishment costs, and (iii) latent preferential attachment.

  7. Coevolutionary modeling in network formation

    KAUST Repository

    Al-Shyoukh, Ibrahim; Chasparis, Georgios; Shamma, Jeff S.

    2014-01-01

    Network coevolution, the process of network topology evolution in feedback with dynamical processes over the network nodes, is a common feature of many engineered and natural networks. In such settings, the change in network topology occurs at a comparable time scale to nodal dynamics. Coevolutionary modeling offers the possibility to better understand how and why network structures emerge. For example, social networks can exhibit a variety of structures, ranging from almost uniform to scale-free degree distributions. While current models of network formation can reproduce these structures, coevolutionary modeling can offer a better understanding of the underlying dynamics. This paper presents an overview of recent work on coevolutionary models of network formation, with an emphasis on the following three settings: (i) dynamic flow of benefits and costs, (ii) transient link establishment costs, and (iii) latent preferential attachment.

  8. Modeling online social signed networks

    Science.gov (United States)

    Li, Le; Gu, Ke; Zeng, An; Fan, Ying; Di, Zengru

    2018-04-01

People's online rating behavior can be modeled directly by user-object bipartite networks. However, few works have been devoted to revealing the hidden relations between users, especially from the perspective of signed networks. We analyze the signed monopartite networks projected from the signed user-object bipartite networks, finding that the networks are highly clustered with obvious community structure. Interestingly, the positive clustering coefficient is remarkably higher than the negative clustering coefficient. Then, a Signed Growing Network model (SGN) based on local preferential attachment is proposed to generate a user's signed network that has community structure and a high positive clustering coefficient. Other structural properties of the modeled networks are also found to be similar to those of the empirical networks.
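The positive/negative clustering contrast can be made concrete with a toy signed-triangle count (a standard structural-balance style measure, used here only as an illustration; the edge list is hypothetical):

```python
from itertools import combinations

def balanced_fraction(edges):
    """Fraction of closed triangles whose edge-sign product is positive."""
    sign = {frozenset((u, v)): s for u, v, s in edges}
    nodes = sorted({n for u, v, _ in edges for n in (u, v)})
    balanced = total = 0
    for a, b, c in combinations(nodes, 3):
        tri = [frozenset(p) for p in ((a, b), (b, c), (a, c))]
        if all(t in sign for t in tri):
            total += 1
            if sign[tri[0]] * sign[tri[1]] * sign[tri[2]] > 0:
                balanced += 1
    return balanced / total if total else 0.0

# An all-positive triangle plus a second triangle closed by a negative edge.
edges = [(0, 1, +1), (1, 2, +1), (0, 2, +1), (2, 3, -1), (1, 3, +1)]
print(balanced_fraction(edges))  # 0.5: triangle 0-1-2 is balanced, 1-2-3 is not
```

A high positive clustering coefficient, as reported for the empirical rating networks, corresponds to triangles being predominantly of the balanced kind.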

  9. Neural network tagging in a toy model

    International Nuclear Information System (INIS)

    Milek, Marko; Patel, Popat

    1999-01-01

The purpose of this study is a comparison of the Artificial Neural Network approach to HEP analysis against traditional methods. The toy model used in this analysis consists of two types of particles defined by four generic properties. A number of 'events' were created according to the model using standard Monte Carlo techniques. Several fully connected, feed-forward, multilayered Artificial Neural Networks were trained to tag the model events. The performance of each network was compared to the standard analysis mechanisms and significant improvement was observed.

  10. A neighbourhood evolving network model

    International Nuclear Information System (INIS)

    Cao, Y.J.; Wang, G.Z.; Jiang, Q.Y.; Han, Z.X.

    2006-01-01

Many social, technological, biological and economic systems are best described by evolving network models. In this short Letter, we propose and study a new evolving network model. The model is based on the new concept of neighbourhood connectivity, which exists in many physical complex networks. The statistical properties and dynamics of the proposed model are analytically studied and compared with those of the Barabasi-Albert scale-free model. Numerical simulations indicate that this network model yields a transition between power-law and exponential scaling, while the Barabasi-Albert scale-free model is only one of its special (limiting) cases. In particular, this model can be used to enhance the evolving mechanism of complex networks in the real world, such as the development of some social networks
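A toy growth rule in the spirit of neighbourhood connectivity (the rule and parameters are illustrative assumptions, not the Letter's exact model): linking each newcomer to a neighbour of a randomly chosen node implicitly favours high-degree nodes, one route from local rules to heavy-tailed degrees.

```python
import random

def grow_network(n, seed=42):
    """Neighbourhood-style growth: a new node links to a random neighbour of a
    randomly chosen anchor node; neighbours of many nodes are hit often."""
    random.seed(seed)
    adj = {0: {1}, 1: {0}}
    for new in range(2, n):
        anchor = random.choice(list(adj))
        target = random.choice(list(adj[anchor]))  # a neighbour of the anchor
        adj.setdefault(new, set()).add(target)
        adj[target].add(new)
    return adj

net = grow_network(200)
degrees = sorted((len(v) for v in net.values()), reverse=True)
print(degrees[:3])  # a few hubs emerge from purely local attachment
```

Tuning how often the newcomer attaches to the anchor itself versus the anchor's neighbourhood is the kind of knob that interpolates between exponential and power-law degree scaling.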

  11. Self-consistent Modeling of Elastic Anisotropy in Shale

    Science.gov (United States)

    Kanitpanyacharoen, W.; Wenk, H.; Matthies, S.; Vasin, R.

    2012-12-01

Elastic anisotropy in clay-rich sedimentary rocks has increasingly received attention because of its significance for prospecting of petroleum deposits, as well as for seals in the context of nuclear waste and CO2 sequestration. The orientation of component minerals and pores/fractures is a critical factor that influences elastic anisotropy. In this study, we investigate lattice and shape preferred orientation (LPO and SPO) of three shales from the North Sea in the UK, the Qusaiba Formation in Saudi Arabia, and the Officer Basin in Australia (referred to as N1, Qu3, and L1905, respectively) to calculate elastic properties and compare them with experimental results. Synchrotron hard X-ray diffraction and microtomography experiments were performed to quantify LPO, weight proportions, and three-dimensional SPO of constituent minerals and pores. Our preliminary results show that the degree of LPO and the total amount of clays are highest in Qu3 (3.3-6.5 m.r.d. and 74 vol%), moderately high in N1 (2.4-5.6 m.r.d. and 70 vol%), and lowest in L1905 (2.3-2.5 m.r.d. and 42 vol%). In addition, porosity in Qu3 is as low as 2%, while it is up to 6% in L1905 and 8% in N1. Based on this information and the single-crystal elastic properties of the mineral components, we apply a self-consistent averaging method to calculate macroscopic elastic properties and corresponding seismic velocities for the different shales. The elastic model is then compared with acoustic velocities measured on the same samples. The P-wave velocities measured for Qu3 (4.1-5.3 km/s, 26.3% anisotropy) are faster than those obtained for L1905 (3.9-4.7 km/s, 18.6% anisotropy) and N1 (3.6-4.3 km/s, 17.7% anisotropy). By making adjustments for pore structure (aspect ratio) and the single-crystal elastic properties of clay minerals, a good agreement between our calculation and the ultrasonic measurements is obtained.
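What averaging over mineral components does can be illustrated with the Voigt-Reuss-Hill scheme, a simpler cousin of the self-consistent average used in the study; the two-phase moduli and fractions below are hypothetical.

```python
def voigt_reuss_hill(moduli, fractions):
    """Bounds-based average of a modulus over phases:
    Voigt (uniform strain, upper bound), Reuss (uniform stress, lower bound),
    and Hill, the arithmetic mean of the two."""
    voigt = sum(f * m for f, m in zip(fractions, moduli))
    reuss = 1.0 / sum(f / m for f, m in zip(fractions, moduli))
    return voigt, reuss, 0.5 * (voigt + reuss)

# Hypothetical two-phase shale: clay (25 GPa) and quartz (45 GPa) shear moduli.
v, r, h = voigt_reuss_hill([25.0, 45.0], [0.7, 0.3])
print(v, r, h)  # Hill estimate sits between the Voigt and Reuss bounds
```

A self-consistent scheme refines this by embedding each oriented grain (and pore, with its aspect ratio) in the effective medium itself, which is what lets the study reproduce the measured anisotropy.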

  12. Self-consistent modelling of resonant tunnelling structures

    DEFF Research Database (Denmark)

    Fiig, T.; Jauho, A.P.

    1992-01-01

    We report a comprehensive study of the effects of self-consistency on the I-V-characteristics of resonant tunnelling structures. The calculational method is based on a simultaneous solution of the effective-mass Schrödinger equation and the Poisson equation, and the current is evaluated...... applied voltages and carrier densities at the emitter-barrier interface. We include the two-dimensional accumulation layer charge and the quantum well charge in our self-consistent scheme. We discuss the evaluation of the current contribution originating from the two-dimensional accumulation layer charges......, and our qualitative estimates seem consistent with recent experimental studies. The intrinsic bistability of resonant tunnelling diodes is analyzed within several different approximation schemes....

  13. Self-consistent approach for neutral community models with speciation

    NARCIS (Netherlands)

    Haegeman, Bart; Etienne, Rampal S.

    Hubbell's neutral model provides a rich theoretical framework to study ecological communities. By incorporating both ecological and evolutionary time scales, it allows us to investigate how communities are shaped by speciation processes. The speciation model in the basic neutral model is

  14. Exotic nuclei in self-consistent mean-field models

    International Nuclear Information System (INIS)

    Bender, M.; Rutz, K.; Buervenich, T.; Reinhard, P.-G.; Maruhn, J. A.; Greiner, W.

    1999-01-01

    We discuss two widely used nuclear mean-field models, the relativistic mean-field model and the (nonrelativistic) Skyrme-Hartree-Fock model, and their capability to describe exotic nuclei with emphasis on neutron-rich tin isotopes and superheavy nuclei. (c) 1999 American Institute of Physics

  15. Is the island universe model consistent with observations?

    OpenAIRE

    Piao, Yun-Song

    2005-01-01

We study the island universe model, in which the universe is initially in a cosmological constant sea, and local quantum fluctuations violating the null energy condition then create islands of matter, some of which might correspond to our observable universe. We examine the possibility that the island universe model can be regarded as an alternative scenario for the origin of the observable universe.

  16. The pairwise phase consistency in cortical network and its relationship with neuronal activation

    Directory of Open Access Journals (Sweden)

    Wang Daming

    2017-01-01

Gamma-band neuronal oscillation and synchronization in the range of 30-90 Hz are a ubiquitous phenomenon across numerous brain areas and various species, and are correlated with many cognitive functions. The phase of the oscillation, as one aspect of the CTC (communication through coherence) hypothesis, underlies various functions in feature coding, memory processing and behaviour. The PPC (pairwise phase consistency), an improved coherence measure, statistically quantifies the strength of phase synchronization. In order to evaluate the PPC and its relationships with the input stimulus, neuronal activation and firing rate, a simplified spiking neuronal network is constructed to simulate orientation columns in primary visual cortex. If the input orientation stimulus is preferred by a certain orientation column, neurons within that column obtain a higher firing rate and stronger neuronal activation, which consequently engender higher PPC values, with higher PPC corresponding to higher firing rate. In addition, we investigate the PPC in a time-resolved analysis using a sliding window.
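The PPC described in this abstract has a simple closed form: it is the average cosine of all pairwise phase differences, which can be computed from the resultant vector in linear time. A minimal sketch (the function name and toy phase values are illustrative, not from the paper):

```python
import cmath

def ppc(phases):
    # Pairwise phase consistency: the mean cosine of all pairwise
    # phase differences. Uses the resultant-vector identity
    # |sum e^{i*theta}|^2 = N + 2 * sum_{j<k} cos(theta_j - theta_k),
    # so it runs in O(N) instead of O(N^2).
    n = len(phases)
    s = sum(cmath.exp(1j * p) for p in phases)
    return (abs(s) ** 2 - n) / (n * (n - 1))

# Identical phases give perfect consistency; two opposite phases give -1.
print(ppc([0.3] * 10))
print(ppc([0.0, 3.141592653589793]))
```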

  17. Thermodynamically consistent description of criticality in models of correlated electrons

    Czech Academy of Sciences Publication Activity Database

    Janiš, Václav; Kauch, Anna; Pokorný, Vladislav

    2017-01-01

    Roč. 95, č. 4 (2017), s. 1-14, č. článku 045108. ISSN 2469-9950 R&D Projects: GA ČR GA15-14259S Institutional support: RVO:68378271 Keywords : conserving approximations * Anderson model * Hubbard model * parquet equations Subject RIV: BM - Solid Matter Physics ; Magnetism OBOR OECD: Condensed matter physics (including formerly solid state physics, supercond.) Impact factor: 3.836, year: 2016

  18. Consistent Evolution of Software Artifacts and Non-Functional Models

    Science.gov (United States)

    2014-11-14

Subject terms: Model-Driven Engineering (MDE), Software Performance Engineering (SPE), Change Propagation, Performance Antipatterns. Contact: Vittorio Cortellessa, Università degli Studi dell'Aquila, Via Vetoio, 67100 L'Aquila, Italy. Email: vittorio.cortellessa@univaq.it, web: http://www.di.univaq.it/cortelle/

  19. Phase models of galaxies consisting of disk and halo

    International Nuclear Information System (INIS)

    Osipkov, L.P.; Kutuzov, S.A.

    1987-01-01

A method for finding the phase density of a two-component model of mass distribution is developed. The equipotential surfaces and the potential law are given. The equipotentials are lenslike surfaces with a sharp edge in the equatorial plane, which ensures the existence of a thin disk embedded in the halo. The equidensity surfaces of the halo coincide with the equipotentials. Phase models for the halo and the disk are constructed separately, on the basis of the spatial and surface mass densities, by solving the corresponding integral equations. In particular, models for a halo of finite dimensions can be constructed. Only the part of the phase density that is even with respect to the velocities is found; for the halo it depends on the energy integral as its single argument.

  20. Phase models of galaxies consisting of a disk and halo

    International Nuclear Information System (INIS)

    Osipkov, L.P.; Kutuzov, S.A.

    1988-01-01

    A method is developed for finding the phase density of a two-component model of a distribution of masses. The equipotential surfaces and potential law are given. The equipotentials are lenslike surfaces with a sharp edge in the equatorial plane, this ensuring the existence of a vanishingly thin embedded disk. The equidensity surfaces of the halo coincide with the equipotentials. Phase models are constructed separately for the halo and for the disk on the basis of the spatial and surface mass densities by the solution of the corresponding integral equations. In particular, models with a halo having finite dimensions can be constructed. For both components, the part of the phase density even with respect to the velocities is found. For the halo, it depends only on the energy integral. Two examples, for which exact solutions are found, are considered

  1. A thermodynamically consistent model of shape-memory alloys

    Czech Academy of Sciences Publication Activity Database

    Benešová, Barbora

    2011-01-01

Roč. 11, č. 1 (2011), s. 355-356 ISSN 1617-7061 R&D Projects: GA ČR GAP201/10/0357 Institutional research plan: CEZ:AV0Z20760514 Keywords : shape memory alloys * model based on relaxation * thermomechanical coupling Subject RIV: BA - General Mathematics http://onlinelibrary.wiley.com/doi/10.1002/pamm.201110169/abstract

  2. Flood damage: a model for consistent, complete and multipurpose scenarios

    Science.gov (United States)

    Menoni, Scira; Molinari, Daniela; Ballio, Francesco; Minucci, Guido; Mejri, Ouejdane; Atun, Funda; Berni, Nicola; Pandolfo, Claudia

    2016-12-01

Effective flood risk mitigation requires the impacts of flood events to be much better and more reliably known than is currently the case. Available post-flood damage assessments usually supply only a partial vision of the consequences of the floods, as they typically respond to the specific needs of a particular stakeholder. Consequently, they generally focus (i) on particular items at risk, (ii) on a certain time window after the occurrence of the flood, (iii) on a specific scale of analysis or (iv) on the analysis of damage only, without an investigation of damage mechanisms and root causes. This paper responds to the necessity of a more integrated interpretation of flood events as the basis for addressing the variety of needs arising after a disaster. In particular, a model is supplied to develop multipurpose complete event scenarios. The model organizes available information after the event according to five logical axes. In this way, post-flood damage assessments can be developed that (i) are multisectoral, (ii) consider physical as well as functional and systemic damage, (iii) address the spatial scales that are relevant for the event at stake depending on the type of damage that has to be analyzed, i.e., direct, functional and systemic, (iv) consider the temporal evolution of damage and finally (v) allow damage mechanisms and root causes to be understood. All the above features are key for the multi-usability of the resulting flood scenarios. The model allows, on the one hand, the rationalization of efforts currently implemented in ex post damage assessments, also with the objective of better programming of the financial resources that will be needed for these types of events in the future. On the other hand, integrated interpretations of flood events are fundamental to adapting and optimizing flood mitigation strategies on the basis of thorough forensic investigation of each event, as corroborated by the implementation of the model in a case study.

  3. Developing Personal Network Business Models

    DEFF Research Database (Denmark)

    Saugstrup, Dan; Henten, Anders

    2006-01-01

    The aim of the paper is to examine the issue of business modeling in relation to personal networks, PNs. The paper builds on research performed on business models in the EU 1ST MAGNET1 project (My personal Adaptive Global NET). The paper presents the Personal Network concept and briefly reports...

  4. Mathematical Modelling Plant Signalling Networks

    KAUST Repository

    Muraro, D.; Byrne, H.M.; King, J.R.; Bennett, M.J.

    2013-01-01

    methods for modelling gene and signalling networks and their application in plants. We then describe specific models of hormonal perception and cross-talk in plants. This mathematical analysis of sub-cellular molecular mechanisms paves the way for more

  5. Complex Networks in Psychological Models

    Science.gov (United States)

    Wedemann, R. S.; Carvalho, L. S. A. V. D.; Donangelo, R.

    We develop schematic, self-organizing, neural-network models to describe mechanisms associated with mental processes, by a neurocomputational substrate. These models are examples of real world complex networks with interesting general topological structures. Considering dopaminergic signal-to-noise neuronal modulation in the central nervous system, we propose neural network models to explain development of cortical map structure and dynamics of memory access, and unify different mental processes into a single neurocomputational substrate. Based on our neural network models, neurotic behavior may be understood as an associative memory process in the brain, and the linguistic, symbolic associative process involved in psychoanalytic working-through can be mapped onto a corresponding process of reconfiguration of the neural network. The models are illustrated through computer simulations, where we varied dopaminergic modulation and observed the self-organizing emergent patterns at the resulting semantic map, interpreting them as different manifestations of mental functioning, from psychotic through to normal and neurotic behavior, and creativity.

  6. Full self-consistency versus quasiparticle self-consistency in diagrammatic approaches: exactly solvable two-site Hubbard model.

    Science.gov (United States)

    Kutepov, A L

    2015-08-12

Self-consistent solutions of Hedin's equations (HE) for the two-site Hubbard model (HM) have been studied. They have been found for three-point vertices of increasing complexity (Γ = 1 (GW approximation), Γ1 from first-order perturbation theory, and the exact vertex Γ(E)). Comparison is made between the case when an additional quasiparticle (QP) approximation for Green's functions is applied during the self-consistent iterative solution of HE and the case when it is not. The results obtained with the exact vertex bear directly on the present open question: which approximation is more advantageous for future implementations, GW + DMFT or QPGW + DMFT? It is shown that in a regime of strong correlations only the originally proposed GW + DMFT scheme is able to provide reliable results. Vertex corrections based on perturbation theory (PT) systematically improve the GW results when full self-consistency is applied. The application of QP self-consistency combined with PT vertex corrections shows problems similar to those of the exact vertex combined with QP self-consistency. Ward identity violation is analyzed for all approximations studied in this work, and its relation to the overall accuracy of the schemes is discussed.

  7. Spectrally-consistent regularization modeling of turbulent natural convection flows

    International Nuclear Information System (INIS)

    Trias, F Xavier; Gorobets, Andrey; Oliva, Assensi; Verstappen, Roel

    2012-01-01

The incompressible Navier-Stokes equations constitute an excellent mathematical model of turbulence. Unfortunately, attempts at performing direct simulations are limited to relatively low Reynolds numbers because of the almost countless small scales produced by the non-linear convective term. Alternatively, a dynamically less complex formulation is proposed here, namely regularizations of the Navier-Stokes equations that preserve the symmetry and conservation properties exactly. To do so, both the convective and diffusive terms are altered in the same vein. In this way, the convective production of small scales is effectively restrained, whereas the modified diffusive term introduces a hyperviscosity effect and consequently enhances the destruction of small scales. In practice, the only additional ingredient is a self-adjoint linear filter whose local filter length is determined from the requirement that vortex stretching must stop at the smallest grid scale. In the present work, the performance of these recent improvements is assessed through application to turbulent natural convection flows by means of comparison with DNS reference data.

  8. On the internal consistency of holographic dark energy models

    International Nuclear Information System (INIS)

    Horvat, R

    2008-01-01

    Holographic dark energy (HDE) models, underpinned by an effective quantum field theory (QFT) with a manifest UV/IR connection, have become convincing candidates for providing an explanation of the dark energy in the universe. On the other hand, the maximum number of quantum states that a conventional QFT for a box of size L is capable of describing relates to those boxes which are on the brink of experiencing a sudden collapse to a black hole. Another restriction on the underlying QFT is that the UV cut-off, which cannot be chosen independently of the IR cut-off and therefore becomes a function of time in a cosmological setting, should stay the largest energy scale even in the standard cosmological epochs preceding a dark energy dominated one. We show that, irrespective of whether one deals with the saturated form of HDE or takes a certain degree of non-saturation in the past, the above restrictions cannot be met in a radiation dominated universe, an epoch in the history of the universe which is expected to be perfectly describable within conventional QFT

  9. Using network screening methods to determine locations with specific safety issues: A design consistency case study.

    Science.gov (United States)

    Butsick, Andrew J; Wood, Jonathan S; Jovanis, Paul P

    2017-09-01

The Highway Safety Manual provides multiple methods that can be used to identify sites with promise (SWiPs) for safety improvement. However, most of these methods cannot be used to identify sites with specific problems. Furthermore, given that infrastructure funding is often earmarked for specific problems/programs, a method for identifying SWiPs related to those programs would be very useful. This research establishes a method for identifying SWiPs with specific issues. This is accomplished using two safety performance functions (SPFs). The method is applied to identifying SWiPs with geometric design consistency issues. Mixed-effects negative binomial regression was used to develop two SPFs using 5 years of crash data and over 8754 km of two-lane rural roadway. The first SPF contained typical roadway elements, while the second contained additional geometric design consistency parameters. After empirical Bayes adjustments, sites with promise (SWiPs) were identified. The disparity between SWiPs identified by the two SPFs was evident; 40 unique sites were identified by each model out of the top 220 segments. By comparing sites across the two models, candidate road segments can be identified where a lack of design consistency may be contributing to an increase in expected crashes. Practitioners can use this method to more effectively identify roadway segments suffering from reduced safety performance due to geometric design inconsistency, with detailed engineering studies of identified sites required to confirm the initial assessment.
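The empirical Bayes adjustment referenced above blends each site's SPF prediction with its observed crash count; a common HSM-style weighting uses the negative binomial overdispersion parameter. A minimal sketch under assumed parameter values (site names, counts and the overdispersion value are invented for illustration):

```python
def empirical_bayes(predicted, observed, overdispersion):
    # HSM-style empirical Bayes estimate: a weighted blend of the SPF
    # prediction and the observed crash count, where the weight on the
    # prediction shrinks as the NB overdispersion grows.
    w = 1.0 / (1.0 + overdispersion * predicted)
    return w * predicted + (1.0 - w) * observed

# Rank sites by excess expected crashes (EB estimate minus prediction);
# a large positive excess flags a site with promise.
sites = [("A", 4.0, 9), ("B", 6.0, 5), ("C", 2.0, 7)]
ranked = sorted(sites, key=lambda s: empirical_bayes(s[1], s[2], 0.8) - s[1],
                reverse=True)
print([name for name, _, _ in ranked])
```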

  10. Modeling of contact tracing in social networks

    Science.gov (United States)

    Tsimring, Lev S.; Huerta, Ramón

    2003-07-01

    Spreading of certain infections in complex networks is effectively suppressed by using intelligent strategies for epidemic control. One such standard epidemiological strategy consists in tracing contacts of infected individuals. In this paper, we use a recently introduced generalization of the standard susceptible-infectious-removed stochastic model for epidemics in sparse random networks which incorporates an additional (traced) state. We describe a deterministic mean-field description which yields quantitative agreement with stochastic simulations on random graphs. We also discuss the role of contact tracing in epidemics control in small-world and scale-free networks. Effectiveness of contact tracing grows as the rewiring probability is reduced.
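As a rough illustration of how a traced compartment changes epidemic outcomes, the following toy mean-field system adds a traced state to a standard SIR model; it is a deliberate simplification for illustration, not the paper's exact generalization, and all rate values are assumptions:

```python
def final_susceptible(beta=0.5, gamma=0.1, kappa=0.0, dt=0.01, steps=20000):
    # Toy mean-field SIR model extended with a traced compartment t:
    # infectious individuals are found by contact tracing at rate kappa
    # and stop transmitting, then recover at rate gamma like everyone
    # else. Forward-Euler integration; the fractions sum to 1 throughout.
    s, i, t, r = 0.99, 0.01, 0.0, 0.0
    for _ in range(steps):
        new_inf = beta * s * i                # mass-action infections
        ds = -new_inf
        di = new_inf - (gamma + kappa) * i
        dtr = kappa * i - gamma * t
        dr = gamma * (i + t)
        s, i, t, r = s + dt * ds, i + dt * di, t + dt * dtr, r + dt * dr
    return s

# Tracing (kappa > 0) should leave more of the population uninfected.
print(final_susceptible(kappa=0.3) > final_susceptible(kappa=0.0))
```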

  11. A model of coauthorship networks

    Science.gov (United States)

    Zhou, Guochang; Li, Jianping; Xie, Zonglin

    2017-10-01

A natural way of representing the coauthorship of authors is to use a generalization of graphs known as hypergraphs. A random geometric hypergraph model is proposed here to model coauthorship networks; it is generated by placing nodes randomly and uniformly on a region of Euclidean space and connecting nodes that satisfy particular geometric conditions. Two kinds of geometric conditions are designed to model the collaboration patterns of academic authorities and of basic researchers, respectively. The conditions give geometric expressions of two causes of coauthorship: the authority and the similarity of authors. By simulation and calculus, we show that the forepart of the degree distribution of the network generated by the model is mixture Poissonian and the tail is power-law, similar to those of some real coauthorship networks. Further, we show more similarities between the generated network and real coauthorship networks: the distribution of cardinalities of hyperedges, high clustering coefficient, assortativity, and the small-world property.
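The generation procedure described above can be sketched as follows, with one simple geometric condition (a fixed-radius ball) standing in for the two conditions designed in the paper; all parameter values are illustrative:

```python
import random

def geometric_hyperedges(n=200, radius=0.1, seed=1):
    # Sketch of a random geometric hypergraph: n nodes are placed
    # uniformly in the unit square, and each node whose ball of the
    # given radius contains other nodes spawns one hyperedge made of
    # itself plus those neighbours (its "coauthors").
    rng = random.Random(seed)
    pts = [(rng.random(), rng.random()) for _ in range(n)]
    edges = []
    for i, (xi, yi) in enumerate(pts):
        members = [j for j, (xj, yj) in enumerate(pts)
                   if (xi - xj) ** 2 + (yi - yj) ** 2 <= radius ** 2]
        if len(members) > 1:              # need at least one coauthor
            edges.append(frozenset(members))
    return edges

edges = geometric_hyperedges()
print(len(edges) > 0, all(len(e) >= 2 for e in edges))
```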

  12. Development of a self-consistent lightning NOx simulation in large-scale 3-D models

    Science.gov (United States)

    Luo, Chao; Wang, Yuhang; Koshak, William J.

    2017-03-01

    We seek to develop a self-consistent representation of lightning NOx (LNOx) simulation in a large-scale 3-D model. Lightning flash rates are parameterized functions of meteorological variables related to convection. We examine a suite of such variables and find that convective available potential energy and cloud top height give the best estimates compared to July 2010 observations from ground-based lightning observation networks. Previous models often use lightning NOx vertical profiles derived from cloud-resolving model simulations. An implicit assumption of such an approach is that the postconvection lightning NOx vertical distribution is the same for all deep convection, regardless of geographic location, time of year, or meteorological environment. Detailed observations of the lightning channel segment altitude distribution derived from the NASA Lightning Nitrogen Oxides Model can be used to obtain the LNOx emission profile. Coupling such a profile with model convective transport leads to a more self-consistent lightning distribution compared to using prescribed postconvection profiles. We find that convective redistribution appears to be a more important factor than preconvection LNOx profile selection, providing another reason for linking the strength of convective transport to LNOx distribution.
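One widely used example of the kind of convection-based flash-rate parameterization discussed above is the Price and Rind (1992) cloud-top-height relation; the abstract evaluates a suite of such predictors, and this formula is shown only as a representative instance:

```python
def flash_rate_price_rind(cloud_top_km, continental=True):
    # Price and Rind (1992) parameterization of total lightning flash
    # rate (flashes per minute) from convective cloud-top height H (km):
    # F = 3.44e-5 * H**4.9 over land, F = 6.4e-4 * H**1.73 over ocean.
    if continental:
        return 3.44e-5 * cloud_top_km ** 4.9
    return 6.4e-4 * cloud_top_km ** 1.73

# Deeper convection produces disproportionately more flashes over land.
print(flash_rate_price_rind(14.0) > 10 * flash_rate_price_rind(8.0))
```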

  13. Telecommunications network modelling, planning and design

    CERN Document Server

    Evans, Sharon

    2003-01-01

    Telecommunication Network Modelling, Planning and Design addresses sophisticated modelling techniques from the perspective of the communications industry and covers some of the major issues facing telecommunications network engineers and managers today. Topics covered include network planning for transmission systems, modelling of SDH transport network structures and telecommunications network design and performance modelling, as well as network costs and ROI modelling and QoS in 3G networks.

  14. Campus network security model study

    Science.gov (United States)

    Zhang, Yong-ku; Song, Li-ren

    2011-12-01

Campus network security is of growing importance. Designing an effective defense against hacker attacks, viruses, data theft, and insider threats is the focus of this paper. We compare firewall- and IDS-based approaches, integrate them, and then design a campus network security model, detailing its implementation principles.

  15. The QKD network: model and routing scheme

    Science.gov (United States)

    Yang, Chao; Zhang, Hongqi; Su, Jinhai

    2017-11-01

Quantum key distribution (QKD) technology can establish unconditionally secure keys between two communicating parties. Although this technology has some inherent constraints, such as distance and point-to-point mode limits, building a QKD network with multiple point-to-point QKD devices can overcome these constraints. Considering the development level of current technology, the trust-relaying QKD network is the first choice for building a practical QKD network. However, previous research did not address a routing method for the trust-relaying QKD network in detail. This paper focuses on the routing issues, builds a model of the trust-relaying QKD network to ease analysis and understanding of this network, and proposes a dynamical routing scheme for it. From the viewpoint of designing a dynamical routing scheme in a classical network, the proposed scheme consists of three components: a Hello protocol that helps share network topology information, a routing algorithm to select a set of suitable paths and establish the routing table, and a link-state update mechanism that keeps the routing table up to date. Experiments and evaluation demonstrate the validity and effectiveness of the proposed routing scheme.
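A routing algorithm of the kind sketched in the abstract can prefer relay paths with plentiful residual key material, for example by running Dijkstra with link cost inversely proportional to the remaining key. The following is a hypothetical illustration (the cost function, link values and topology are assumptions, not the paper's scheme):

```python
import heapq

def qkd_route(links, src, dst):
    # Hypothetical path selection for a trust-relaying QKD network:
    # links maps (u, v) -> remaining key material on that QKD link;
    # cost = 1/key makes routing prefer relays with plentiful keys.
    graph = {}
    for (u, v), key in links.items():
        if key <= 0:
            continue                      # no key material, cannot relay
        graph.setdefault(u, []).append((v, 1.0 / key))
        graph.setdefault(v, []).append((u, 1.0 / key))
    dist, prev = {src: 0.0}, {}
    heap = [(0.0, src)]
    while heap:
        d, u = heapq.heappop(heap)
        if u == dst:
            break
        if d > dist.get(u, float("inf")):
            continue                      # stale queue entry
        for v, w in graph.get(u, []):
            if d + w < dist.get(v, float("inf")):
                dist[v] = d + w
                prev[v] = u
                heapq.heappush(heap, (d + w, v))
    path, node = [dst], dst
    while node != src:
        node = prev[node]
        path.append(node)
    return path[::-1]

links = {("A", "B"): 10, ("B", "D"): 10, ("A", "C"): 1, ("C", "D"): 1}
print(qkd_route(links, "A", "D"))
```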

  16. Mathematical model for spreading dynamics of social network worms

    International Nuclear Information System (INIS)

    Sun, Xin; Liu, Yan-Heng; Han, Jia-Wei; Liu, Xue-Jie; Li, Bin; Li, Jin

    2012-01-01

    In this paper, a mathematical model for social network worm spreading is presented from the viewpoint of social engineering. This model consists of two submodels. Firstly, a human behavior model based on game theory is suggested for modeling and predicting the expected behaviors of a network user encountering malicious messages. The game situation models the actions of a user under the condition that the system may be infected at the time of opening a malicious message. Secondly, a social network accessing model is proposed to characterize the dynamics of network users, by which the number of online susceptible users can be determined at each time step. Several simulation experiments are carried out on artificial social networks. The results show that (1) the proposed mathematical model can well describe the spreading dynamics of social network worms; (2) weighted network topology greatly affects the spread of worms; (3) worms spread even faster on hybrid social networks
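The human-behaviour submodel described above can be caricatured as an expected-utility decision about opening a possibly malicious message; the payoff values below are invented for illustration and this is not the paper's actual game formulation:

```python
def opens_message(p_malicious, benefit=1.0, damage=10.0):
    # Caricature of the game-theoretic human-behaviour submodel: the
    # user opens a message when the expected benefit of reading it
    # exceeds the expected infection damage. Payoffs are illustrative.
    return (1.0 - p_malicious) * benefit > p_malicious * damage

# A rarely malicious message is opened; an even-odds one is not.
print(opens_message(0.01), opens_message(0.5))
```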

  17. Generalized Network Psychometrics : Combining Network and Latent Variable Models

    NARCIS (Netherlands)

    Epskamp, S.; Rhemtulla, M.; Borsboom, D.

    2017-01-01

    We introduce the network model as a formal psychometric model, conceptualizing the covariance between psychometric indicators as resulting from pairwise interactions between observable variables in a network structure. This contrasts with standard psychometric models, in which the covariance between

  18. Neural network modeling of emotion

    Science.gov (United States)

    Levine, Daniel S.

    2007-03-01

This article reviews the history and development of computational neural network modeling of cognitive and behavioral processes that involve emotion. The exposition starts with models of classical conditioning dating from the early 1970s. Then it proceeds toward models of interactions between emotion and attention. Then models of emotional influences on decision making are reviewed, including some speculative (not yet simulated) models of the evolution of decision rules. Through the late 1980s, the neural networks developed to model emotional processes were mainly embodiments of significant functional principles motivated by psychological data. In the last two decades, network models of these processes have become much more detailed in their incorporation of known physiological properties of specific brain regions, while preserving many of the psychological principles from the earlier models. Most network models of emotional processes so far have dealt with positive and negative emotion in general, rather than specific emotions such as fear, joy, sadness, and anger. But a later section of this article reviews a few models relevant to specific emotions: one family of models of auditory fear conditioning in rats, and one model of induced pleasure enhancing creativity in humans. Then models of emotional disorders are reviewed. The article concludes with philosophical statements about the essential contributions of emotion to intelligent behavior and the importance of quantitative theories and models to the interdisciplinary enterprise of understanding the interactions of emotion, cognition, and behavior.

  19. Modeling of fluctuating reaction networks

    International Nuclear Information System (INIS)

    Lipshtat, A.; Biham, O.

    2004-01-01

Various dynamical systems are organized as reaction networks, where the population size of one component affects the populations of all its neighbors. Such networks can be found in interstellar surface chemistry, cell biology, thin-film growth and other systems. In cases where the populations of reactive species are large, the network can be modeled by rate equations, which provide all reaction rates within the mean-field approximation. However, in small systems that are partitioned into sub-micron domains, these populations fluctuate strongly. Under these conditions rate equations fail and the master equation is needed for modeling these reactions. However, the number of equations in the master equation grows exponentially with the number of reactive species, severely limiting its feasibility for complex networks. Here we present a method which dramatically reduces the number of equations, thus enabling the incorporation of the master equation in complex reaction networks. The method is exemplified in the context of reaction networks on dust grains. Its applicability to genetic networks is also discussed.

  20. Worst-case optimal approximation algorithms for maximizing triplet consistency within phylogenetic networks

    NARCIS (Netherlands)

    J. Byrka (Jaroslaw); K.T. Huber; S.M. Kelk (Steven); P. Gawrychowski

    2009-01-01

The study of phylogenetic networks is of great interest to computational evolutionary biology and numerous different types of such structures are known. This article addresses the following question concerning rooted versions of phylogenetic networks. What is the maximum value of pset

  1. A Consistent Fuzzy Preference Relations Based ANP Model for R&D Project Selection

    Directory of Open Access Journals (Sweden)

    Chia-Hua Cheng

    2017-08-01

In today's rapidly changing economy, technology companies have to make decisions about research and development (R&D) project investments on a routine basis, and such decisions have a direct impact on the company's profitability, sustainability and future growth. Companies seeking profitable opportunities for investment and project selection must consider many factors, such as resource limitations and differences in assessment, with consideration of both qualitative and quantitative criteria. Often, differences in perception among the various stakeholders hinder the attainment of consensus and hamper coordination efforts. Thus, in this study, a hybrid model is developed for the consideration of complex criteria, taking into account the different opinions of the various stakeholders, who often come from different departments within the company and have different opinions about which direction to take. The decision-making trial and evaluation laboratory (DEMATEL) approach is used to convert the cause-and-effect relations among the criteria into a visual network structure. A consistent fuzzy preference relations based analytic network process (CFPR-ANP) method is developed to calculate the preference weights of the criteria based on the derived network structure. The CFPR-ANP is an improvement over the original analytic network process (ANP) method in that it reduces the problem of inconsistency as well as the number of pairwise comparisons. The combined complex proportional assessment (COPRAS-G) method is applied with fuzzy grey relations to resolve conflicts arising from differences in the information and opinions provided by the different stakeholders about the selection of the most suitable R&D projects. This novel combination approach is then used to assist an international brand-name company in prioritizing projects and making project decisions that will maximize returns and ensure sustainability for the company.
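The CFPR construction underlying the CFPR-ANP method completes a full preference matrix from only n-1 adjacent pairwise judgements using additive transitivity, p_ik = p_ij + p_jk - 0.5; this is what reduces the number of comparisons. A minimal sketch (in the full method, entries falling outside [0, 1] are subsequently rescaled, a step omitted here):

```python
def cfpr_complete(adjacent):
    # Completes an n x n fuzzy preference matrix from the n-1 adjacent
    # judgements p[i][i+1] in [0, 1] using additive transitivity:
    #   p_ik = p_ij + p_jk - 0.5,  with  p_ki = 1 - p_ik.
    n = len(adjacent) + 1
    p = [[0.5] * n for _ in range(n)]
    for i, v in enumerate(adjacent):
        p[i][i + 1] = v
        p[i + 1][i] = 1.0 - v
    for span in range(2, n):          # fill progressively longer chains
        for i in range(n - span):
            k = i + span
            p[i][k] = p[i][k - 1] + p[k - 1][k] - 0.5
            p[k][i] = 1.0 - p[i][k]
    return p

p = cfpr_complete([0.6, 0.7])   # three criteria, two judgements
print(p[0][2])
```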

  2. Modeling Multistandard Wireless Networks in OPNET

    DEFF Research Database (Denmark)

    Zakrzewska, Anna; Berger, Michael Stübert; Ruepp, Sarah Renée

    2011-01-01

Future wireless communication is emerging towards one heterogeneous platform. In this new environment wireless access will be provided by multiple radio technologies that are cooperating and complementing one another. The paper investigates the possibilities of developing such a multistandard system using OPNET Modeler. A network model consisting of LTE interworking with WLAN and WiMAX is considered from the radio resource management perspective. In particular, implementing a joint packet scheduler across multiple systems is discussed in more detail.

  3. Consistency in Estimation and Model Selection of Dynamic Panel Data Models with Fixed Effects

    Directory of Open Access Journals (Sweden)

    Guangjie Li

    2015-07-01

    Full Text Available We examine the relationship between consistent parameter estimation and model selection for autoregressive panel data models with fixed effects. We find that the transformation of fixed effects proposed by Lancaster (2002) does not necessarily lead to consistent estimation of common parameters when some true exogenous regressors are excluded. We propose a data dependent way to specify the prior of the autoregressive coefficient and argue for comparing different model specifications before parameter estimation. Model selection properties of Bayes factors and Bayesian information criterion (BIC) are investigated. When model uncertainty is substantial, we recommend the use of Bayesian Model Averaging to obtain point estimators with lower root mean squared errors (RMSE). We also study the implications of different levels of inclusion probabilities by simulations.
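
    The BIC-based model comparison described above can be illustrated with a short sketch (our reconstruction, not the authors' code; function names are invented): BIC scores are converted into approximate posterior model probabilities, which then weight per-model estimates into a Bayesian Model Averaging point estimate.

    ```python
    from math import exp

    def bic_weights(bics):
        """Convert BIC scores into approximate posterior model probabilities
        (proportional to exp(-BIC/2); a uniform model prior is assumed)."""
        lowest = min(bics)  # subtract the minimum for numerical stability
        raw = [exp(-0.5 * (b - lowest)) for b in bics]
        total = sum(raw)
        return [r / total for r in raw]

    def bma_estimate(estimates, bics):
        """BMA point estimator: each model's estimate weighted by its
        approximate posterior model probability."""
        return sum(w * e for w, e in zip(bic_weights(bics), estimates))
    ```

    With equal BIC scores the models receive equal weight, so `bma_estimate([1.0, 3.0], [10.0, 10.0])` returns the plain average 2.0; a lower BIC shifts weight toward that model.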

  4. A Network-Based Impact Measure for Propagated Losses in a Supply Chain Network Consisting of Resilient Components

    Directory of Open Access Journals (Sweden)

    Jesus Felix Bayta Valenzuela

    2018-01-01

    Full Text Available The topology of a supply chain network affects the impacts of disruptions in it. We formulate a network-based measure of the impact of a disruption loss in a supply chain propagating downstream from an originating node. The measure takes into account the loss profile of the originating node, the structure of the supply network, and the resilience of the network components. We obtain an analytical expression for the impact measure under a beta-distributed initial loss (generalizable to any continuous distribution supported on the interval [0, 1]), under a breakthrough scenario (in which a fraction of the initial production loss reaches a focal company downstream as opposed to containment upstream or at the originating point). Furthermore, we obtain a closed-form solution for a supply chain network with a k-ary tree topology; a numerical study is performed for a scale-free network and a random network. Our proposed approach enables the evaluation of potential losses for a focal company considering its supply chain network structure, which may help the company to plan or redesign a robust and resilient network in response to different types of disruptions.
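
    The propagation mechanism can be illustrated with a small Monte-Carlo sketch (a simplification of ours, not the paper's analytical result): a Beta-distributed initial loss travels down a chain of suppliers, and each tier's resilience absorbs a fixed fraction of what it receives.

    ```python
    import random

    def propagated_loss(initial_loss, depth, resilience):
        """Loss reaching a focal company `depth` tiers downstream when each
        intermediate node absorbs a fraction `resilience` of what it receives.
        (Illustrative assumption: multiplicative attenuation per tier.)"""
        return initial_loss * (1.0 - resilience) ** depth

    def mean_impact(depth, resilience, alpha, beta, n=20000, seed=0):
        """Monte-Carlo mean impact for a Beta(alpha, beta) initial loss on [0, 1]."""
        rng = random.Random(seed)
        total = sum(propagated_loss(rng.betavariate(alpha, beta), depth, resilience)
                    for _ in range(n))
        return total / n
    ```

    For a Beta(2, 2) initial loss the mean is 0.5, so the expected impact two tiers downstream with resilience 0.5 per tier is 0.5 × 0.25 = 0.125, which the simulation recovers.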

  5. MRI Study on the Functional and Spatial Consistency of Resting State-Related Independent Components of the Brain Network

    Energy Technology Data Exchange (ETDEWEB)

    Jeong, Bum Seok [Korea Advanced Institute of Science and Technology, Daejeon (Korea, Republic of); Choi, Jee Wook [Daejeon St. Mary's Hospital, The Catholic University of Korea College of Medicine, Daejeon (Korea, Republic of); Kim, Ji Woong [College of Medical Science, Konyang University, Daejeon (Korea, Republic of)

    2012-06-15

    Resting-state networks (RSNs), including the default mode network (DMN), have been considered as markers of brain status such as consciousness, developmental change, and treatment effects. The consistency of functional connectivity among RSNs has not been fully explored, especially among resting-state-related independent components (RSICs). This resting-state fMRI study addressed the consistency of functional connectivity among RSICs as well as their spatial consistency between 'at day 1' and 'after 4 weeks' in 13 healthy volunteers. We found that most RSICs, especially the DMN, are reproducible across time, whereas some RSICs were variable in either their spatial characteristics or their functional connectivity. Relatively low spatial consistency was found in the basal ganglia, a parietal region of the left frontoparietal network, and the supplementary motor area. The functional connectivity between two independent components, the bilateral angular/supramarginal gyri/intraparietal lobule and bilateral middle temporal/occipital gyri, was decreased across time regardless of the correlation analysis method employed (Pearson's or partial correlation). The RSICs showing variable consistency differed between the spatial and the functional-connectivity measures. To understand the brain as a dynamic network, we recommend further investigation of both changes in the activation of specific regions and the modulation of functional connectivity in the brain network.

  6. MRI Study on the Functional and Spatial Consistency of Resting State-Related Independent Components of the Brain Network

    International Nuclear Information System (INIS)

    Jeong, Bum Seok; Choi, Jee Wook; Kim, Ji Woong

    2012-01-01

    Resting-state networks (RSNs), including the default mode network (DMN), have been considered as markers of brain status such as consciousness, developmental change, and treatment effects. The consistency of functional connectivity among RSNs has not been fully explored, especially among resting-state-related independent components (RSICs). This resting-state fMRI study addressed the consistency of functional connectivity among RSICs as well as their spatial consistency between 'at day 1' and 'after 4 weeks' in 13 healthy volunteers. We found that most RSICs, especially the DMN, are reproducible across time, whereas some RSICs were variable in either their spatial characteristics or their functional connectivity. Relatively low spatial consistency was found in the basal ganglia, a parietal region of the left frontoparietal network, and the supplementary motor area. The functional connectivity between two independent components, the bilateral angular/supramarginal gyri/intraparietal lobule and bilateral middle temporal/occipital gyri, was decreased across time regardless of the correlation analysis method employed (Pearson's or partial correlation). The RSICs showing variable consistency differed between the spatial and the functional-connectivity measures. To understand the brain as a dynamic network, we recommend further investigation of both changes in the activation of specific regions and the modulation of functional connectivity in the brain network.

  7. Bayesian network modelling of upper gastrointestinal bleeding

    Science.gov (United States)

    Aisha, Nazziwa; Shohaimi, Shamarina; Adam, Mohd Bakri

    2013-09-01

    Bayesian networks are graphical probabilistic models that represent causal and other relationships between domain variables. In the context of medical decision making, these models have been explored to help in medical diagnosis and prognosis. In this paper, we discuss the Bayesian network formalism in building medical support systems and we learn a tree augmented naive Bayes Network (TAN) from gastrointestinal bleeding data. The accuracy of the TAN in classifying the source of gastrointestinal bleeding into upper or lower source is obtained. The TAN achieves a high classification accuracy of 86% and an area under curve of 92%. A sensitivity analysis of the model shows relatively high levels of entropy reduction for color of the stool, history of gastrointestinal bleeding, consistency and the ratio of blood urea nitrogen to creatinine. The TAN facilitates the identification of the source of GIB and requires further validation.
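
    The "entropy reduction" used in the sensitivity analysis above is the information gain of a feature with respect to the class label. A minimal sketch (toy data and function names are ours, not the paper's):

    ```python
    from collections import Counter
    from math import log2

    def entropy(labels):
        """Shannon entropy (bits) of a list of class labels."""
        n = len(labels)
        return -sum((c / n) * log2(c / n) for c in Counter(labels).values())

    def information_gain(feature, labels):
        """Entropy reduction in `labels` from observing `feature` — the
        sensitivity measure reported for variables such as stool colour."""
        n = len(labels)
        conditional = 0.0
        for value in set(feature):
            subset = [l for f, l in zip(feature, labels) if f == value]
            conditional += len(subset) / n * entropy(subset)
        return entropy(labels) - conditional
    ```

    A feature that perfectly separates upper from lower bleeding sources yields an information gain equal to the full label entropy; an uninformative (constant) feature yields zero.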

  8. An Analysis of Weakly Consistent Replication Systems in an Active Distributed Network

    OpenAIRE

    Amit Chougule; Pravin Ghewari

    2011-01-01

    With the sudden increase in heterogeneity and distribution of data in wide-area networks, more flexible, efficient and autonomous approaches for management and data distribution are needed. In recent years, the proliferation of inter-networks and distributed applications has increased the demand for geographically-distributed replicated databases. The architecture of Bayou provides features that address the needs of database storage of world-wide applications. Key is the use of weak consistency...

  9. Aggregated wind power plant models consisting of IEC wind turbine models

    DEFF Research Database (Denmark)

    Altin, Müfit; Göksu, Ömer; Hansen, Anca Daniela

    2015-01-01

    The common practice regarding the modelling of large generation components has been to make use of models representing the performance of the individual components with a required level of accuracy and detail. Owing to the rapid increase of wind power plants comprising a large number of wind turbines, the parameters and models needed to represent each individual wind turbine in detail make it necessary to develop aggregated wind power plant models, considering the simulation time for power system stability studies. In this paper, aggregated wind power plant models consisting of the IEC 61400-27 variable-speed wind turbine models (type 3 and type 4) with a power plant controller are presented. The performance of the detailed benchmark wind power plant model and the aggregated model are compared by means of simulations for the specified test cases. Consequently, the results are summarized and discussed.

  10. Phenomenological network models: Lessons for epilepsy surgery.

    Science.gov (United States)

    Hebbink, Jurgen; Meijer, Hil; Huiskamp, Geertjan; van Gils, Stephan; Leijten, Frans

    2017-10-01

    The current opinion in epilepsy surgery is that successful surgery is about removing pathological cortex in the anatomic sense. This contrasts with recent developments in epilepsy research, where epilepsy is seen as a network disease. Computational models offer a framework to investigate the influence of networks, as well as local tissue properties, and to explore alternative resection strategies. Here we study, using such a model, the influence of connections on seizures and how this might change our traditional views of epilepsy surgery. We use a simple network model consisting of four interconnected neuronal populations. One of these populations can be made hyperexcitable, modeling a pathological region of cortex. Using model simulations, the effect of surgery on the seizure rate is studied. We find that removal of the hyperexcitable population is, in most cases, not the best approach to reduce the seizure rate. Removal of normal populations located at a crucial spot in the network, the "driver," is typically more effective in reducing seizure rate. This work strengthens the idea that network structure and connections may be more important than localizing the pathological node. This can explain why lesionectomy may not always be sufficient. © 2017 The Authors. Epilepsia published by Wiley Periodicals, Inc. on behalf of International League Against Epilepsy.

  11. Functional connectivity modeling of consistent cortico-striatal degeneration in Huntington's disease

    Directory of Open Access Journals (Sweden)

    Imis Dogan

    2015-01-01

    Full Text Available Huntington's disease (HD is a progressive neurodegenerative disorder characterized by a complex neuropsychiatric phenotype. In a recent meta-analysis we identified core regions of consistent neurodegeneration in premanifest HD in the striatum and middle occipital gyrus (MOG. For early manifest HD convergent evidence of atrophy was most prominent in the striatum, motor cortex (M1 and inferior frontal junction (IFJ. The aim of the present study was to functionally characterize this topography of brain atrophy and to investigate differential connectivity patterns formed by consistent cortico-striatal atrophy regions in HD. Using areas of striatal and cortical atrophy at different disease stages as seeds, we performed task-free resting-state and task-based meta-analytic connectivity modeling (MACM. MACM utilizes the large data source of the BrainMap database and identifies significant areas of above-chance co-activation with the seed-region via the activation-likelihood-estimation approach. In order to delineate functional networks formed by cortical as well as striatal atrophy regions we computed the conjunction between the co-activation profiles of striatal and cortical seeds in the premanifest and manifest stages of HD, respectively. Functional characterization of the seeds was obtained using the behavioral meta-data of BrainMap. Cortico-striatal atrophy seeds of the premanifest stage of HD showed common co-activation with a rather cognitive network including the striatum, anterior insula, lateral prefrontal, premotor, supplementary motor and parietal regions. A similar but more pronounced co-activation pattern, additionally including the medial prefrontal cortex and thalamic nuclei was found with striatal and IFJ seeds at the manifest HD stage. The striatum and M1 were functionally connected mainly to premotor and sensorimotor areas, posterior insula, putamen and thalamus. Behavioral characterization of the seeds confirmed that experiments

  12. Network model of security system

    Directory of Open Access Journals (Sweden)

    Adamczyk Piotr

    2016-01-01

    Full Text Available The article presents the concept of building a network security model and its application in the process of risk analysis. It indicates the possibility of a new definition of the role of network models in safety analysis. Special attention was paid to the development of an algorithm describing the process of identifying assets, vulnerabilities and threats in a given context. The aim of the article is to present how this algorithm reduces the complexity of the problem by eliminating from the base model those components that have no links with other components; as a result, it was possible to build a real network model corresponding to reality.
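
    The pruning step described above — dropping components that have no links with any other component — can be sketched as follows (hypothetical asset names; a simplification of ours, not the article's algorithm):

    ```python
    def prune_unlinked(assets, links):
        """Keep only the assets that appear in at least one (asset, asset)
        link; components with no links drop out of the base model."""
        linked = {a for link in links for a in link}
        return [a for a in assets if a in linked]
    ```

    For example, with `assets = ["server", "database", "legacy printer"]` and a single link between the server and the database, the unlinked printer is removed and the remaining network model is built from the connected components only.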

  13. Current approaches to gene regulatory network modelling

    Directory of Open Access Journals (Sweden)

    Brazma Alvis

    2007-09-01

    Full Text Available Many different approaches have been developed to model and simulate gene regulatory networks. We proposed the following categories for gene regulatory network models: network parts lists, network topology models, network control logic models, and dynamic models. Here we will describe some examples for each of these categories. We will study the topology of gene regulatory networks in yeast in more detail, comparing a direct network derived from transcription factor binding data and an indirect network derived from genome-wide expression data in mutants. Regarding the network dynamics we briefly describe discrete and continuous approaches to network modelling, then describe a hybrid model called Finite State Linear Model and demonstrate that some simple network dynamics can be simulated in this model.
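
    The discrete dynamics mentioned above can be illustrated generically. The sketch below is a plain synchronous Boolean network — not the Finite State Linear Model itself — and the two-gene rule set in the test is invented for illustration.

    ```python
    def step(state, rules):
        """One synchronous update of a Boolean gene network:
        rules[g] maps the current state tuple to gene g's next value."""
        return tuple(rule(state) for rule in rules)

    def attractor(initial, rules, max_steps=100):
        """Iterate until a previously seen state recurs; return the cycle
        (the attractor) reached from `initial`."""
        seen, trajectory = {}, []
        state = initial
        for t in range(max_steps):
            if state in seen:
                return trajectory[seen[state]:]
            seen[state] = t
            trajectory.append(state)
            state = step(state, rules)
        return []
    ```

    With the toy rules "gene 0 = NOT gene 1, gene 1 = gene 0" (a two-gene negative feedback loop), the network settles into a length-4 oscillation, the discrete analogue of the simple dynamics such models are meant to capture.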

  14. Constitutive modelling of composite biopolymer networks.

    Science.gov (United States)

    Fallqvist, B; Kroon, M

    2016-04-21

    The mechanical behaviour of biopolymer networks is to a large extent determined at a microstructural level, where the characteristics of individual filaments and the interactions between them determine the response at a macroscopic level. Phenomena such as viscoelasticity and strain-hardening followed by strain-softening are observed experimentally in these networks, often due to microstructural changes (such as filament sliding, rupture and cross-link debonding). Further, composite structures can also be formed with vastly different mechanical properties as compared to the individual networks. In the present paper, we present a constitutive model, formulated in a continuum framework, aimed at capturing these effects. Special care is taken to formulate thermodynamically consistent evolution laws for dissipative effects. This model, incorporating possible anisotropic network properties, is based on a strain energy function, split into an isochoric and a volumetric part. Generalisation to three dimensions is performed by numerical integration over the unit sphere. Model predictions indicate that the constitutive model is well able to predict the elastic and viscoelastic response of biological networks, and to an extent also composite structures. Copyright © 2016 Elsevier Ltd. All rights reserved.

  15. Consistent model reduction of polymer chains in solution in dissipative particle dynamics: Model description

    KAUST Repository

    Moreno Chaparro, Nicolas

    2015-06-30

    We introduce a framework for model reduction of polymer chain models for dissipative particle dynamics (DPD) simulations, where the properties governing the phase equilibria such as the characteristic size of the chain, compressibility, density, and temperature are preserved. The proposed methodology reduces the number of degrees of freedom required in traditional DPD representations to model equilibrium properties of systems with complex molecules (e.g., linear polymers). Based on geometrical considerations we explicitly account for the correlation between beads in fine-grained DPD models and consistently represent the effect of these correlations in a reduced model, in a practical and simple fashion via power laws and the consistent scaling of the simulation parameters. In order to satisfy the geometrical constraints in the reduced model we introduce bond-angle potentials that account for the changes in the chain free energy after the model reduction. Following this coarse-graining process we represent high molecular weight DPD chains (i.e., ≥200 beads per chain) with a significant reduction in the number of particles required (i.e., ≥20 times fewer than the original system). We show that our methodology has potential applications modeling systems of high molecular weight molecules at large scales, such as diblock copolymer and DNA.

  16. Self-consistent determination of the spike-train power spectrum in a neural network with sparse connectivity

    Directory of Open Access Journals (Sweden)

    Benjamin eDummer

    2014-09-01

    Full Text Available A major source of random variability in cortical networks is the quasi-random arrival of presynaptic action potentials from many other cells. In network studies as well as in the study of the response properties of single cells embedded in a network, synaptic background input is often approximated by Poissonian spike trains. However, the output statistics of the cells is in most cases far from being Poisson. This is inconsistent with the assumption of similar spike-train statistics for pre- and postsynaptic cells in a recurrent network. Here we tackle this problem for the popular class of integrate-and-fire neurons and study a self-consistent statistics of input and output spectra of neural spike trains. Instead of actually using a large network, we use an iterative scheme, in which we simulate a single neuron over several generations. In each of these generations, the neuron is stimulated with surrogate stochastic input that has a similar statistics as the output of the previous generation. For the surrogate input, we employ two distinct approximations: (i) a superposition of renewal spike trains with the same interspike interval density as observed in the previous generation and (ii) a Gaussian current with a power spectrum proportional to that observed in the previous generation. For input parameters that correspond to balanced input in the network, both the renewal and the Gaussian iteration procedure converge quickly and yield comparable results for the self-consistent spike-train power spectrum. We compare our results to large-scale simulations of a random sparsely connected network of leaky integrate-and-fire neurons (Brunel, J. Comp. Neurosci. 2000) and show that in the asynchronous regime close to a state of balanced synaptic input from the network, our iterative schemes provide excellent approximations to the autocorrelation of spike trains in the recurrent network.
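
    The generational iteration can be sketched in a heavily simplified form. The sketch below iterates only the firing rate, not the full power spectrum, and uses invented leaky integrate-and-fire parameters with balanced excitatory/inhibitory Poisson input; whether the map converges depends entirely on those parameters, so this shows the scheme's structure rather than the paper's result.

    ```python
    import numpy as np

    def lif_rate(input_rate, n_exc=400, n_inh=100, w=0.05, g=4.0,
                 tau=0.02, v_th=1.0, t_sim=2.0, dt=1e-4, seed=0):
        """Output firing rate (Hz) of a leaky integrate-and-fire neuron
        driven by balanced excitatory/inhibitory Poisson input at
        `input_rate` Hz per presynaptic cell (illustrative parameters)."""
        rng = np.random.default_rng(seed)
        v, spikes = 0.0, 0
        for _ in range(int(t_sim / dt)):
            exc = rng.poisson(n_exc * input_rate * dt)   # excitatory arrivals
            inh = rng.poisson(n_inh * input_rate * dt)   # inhibitory arrivals
            v += -(v / tau) * dt + w * exc - g * w * inh
            if v >= v_th:          # threshold crossing: spike and reset
                spikes += 1
                v = 0.0
        return spikes / t_sim

    def self_consistent_rate(r0=20.0, iters=5):
        """Generational iteration: feed the previous generation's output
        rate back in as the input rate (rate-only analogue of iterating
        on the spike-train power spectrum)."""
        r = r0
        for i in range(iters):
            r = lif_rate(r, seed=i)
        return r
    ```

    In the full method the surrogate input would additionally match the interspike-interval density (renewal scheme) or the output power spectrum (Gaussian scheme) of the previous generation, not just its rate.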

  17. Target-Centric Network Modeling

    DEFF Research Database (Denmark)

    Mitchell, Dr. William L.; Clark, Dr. Robert M.

    In Target-Centric Network Modeling: Case Studies in Analyzing Complex Intelligence Issues, authors Robert Clark and William Mitchell take an entirely new approach to teaching intelligence analysis. Unlike any other book on the market, it offers case study scenarios using actual intelligence reporting formats, along with a tested process that facilitates the production of a wide range of analytical products for civilian, military, and hybrid intelligence environments. Readers will learn how to perform the specific actions of problem definition modeling, target network modeling, and collaborative sharing in the process of creating a high-quality, actionable intelligence product. The case studies reflect the complexity of twenty-first century intelligence issues by dealing with multi-layered target networks that cut across political, economic, social, technological, and military issues.

  18. QSAR modelling using combined simple competitive learning networks and RBF neural networks.

    Science.gov (United States)

    Sheikhpour, R; Sarram, M A; Rezaeian, M; Sheikhpour, E

    2018-04-01

    The aim of this study was to propose a QSAR modelling approach based on the combination of simple competitive learning (SCL) networks with radial basis function (RBF) neural networks for predicting the biological activity of chemical compounds. The proposed QSAR method consisted of two phases. In the first phase, an SCL network was applied to determine the centres of an RBF neural network. In the second phase, the RBF neural network was used to predict the biological activity of various phenols and Rho kinase (ROCK) inhibitors. The predictive ability of the proposed QSAR models was evaluated and compared with other QSAR models using external validation. The results of this study showed that the proposed QSAR modelling approach leads to better performances than other models in predicting the biological activity of chemical compounds. This indicated the efficiency of simple competitive learning networks in determining the centres of RBF neural networks.
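
    The two-phase pipeline described above — simple competitive learning to place the centres, then an RBF network fit — can be sketched as follows. Parameter values and function names are ours, not the paper's, and the RBF output weights are obtained here by plain least squares.

    ```python
    import numpy as np

    def scl_centres(X, n_centres=3, lr=0.1, epochs=20, seed=0):
        """Phase 1 — simple competitive learning: each sample pulls its
        nearest centre a little closer (winner-take-all update)."""
        rng = np.random.default_rng(seed)
        centres = X[rng.choice(len(X), n_centres, replace=False)].astype(float)
        for _ in range(epochs):
            for x in X[rng.permutation(len(X))]:
                winner = np.argmin(((centres - x) ** 2).sum(axis=1))
                centres[winner] += lr * (x - centres[winner])
        return centres

    def rbf_fit(X, y, centres, width=1.0):
        """Phase 2 — fit Gaussian RBF output weights by least squares."""
        G = np.exp(-((X[:, None, :] - centres[None]) ** 2).sum(-1) / (2 * width ** 2))
        w, *_ = np.linalg.lstsq(G, y, rcond=None)
        return w

    def rbf_predict(X, centres, w, width=1.0):
        """Predicted activity for new compounds from the fitted RBF network."""
        G = np.exp(-((X[:, None, :] - centres[None]) ** 2).sum(-1) / (2 * width ** 2))
        return G @ w
    ```

    On toy descriptor data with three well-separated clusters mapped to three activity values, centring one Gaussian basis function on each cluster makes the least-squares fit nearly exact; the competitive-learning phase automates that centre placement.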

  19. A new k-epsilon model consistent with Monin-Obukhov similarity theory

    DEFF Research Database (Denmark)

    van der Laan, Paul; Kelly, Mark C.; Sørensen, Niels N.

    2017-01-01

    A new k-ε model is introduced that is consistent with Monin–Obukhov similarity theory (MOST). The proposed k-ε model is compared with another k-ε model that was developed in an attempt to maintain inlet profiles compatible with MOST. It is shown that the previous k-ε model is not consistent with ...

  20. Social network models predict movement and connectivity in ecological landscapes

    Science.gov (United States)

    Fletcher, Robert J.; Acevedo, M.A.; Reichert, Brian E.; Pias, Kyle E.; Kitchens, Wiley M.

    2011-01-01

    Network analysis is on the rise across scientific disciplines because of its ability to reveal complex, and often emergent, patterns and dynamics. Nonetheless, a growing concern in network analysis is the use of limited data for constructing networks. This concern is strikingly relevant to ecology and conservation biology, where network analysis is used to infer connectivity across landscapes. In this context, movement among patches is the crucial parameter for interpreting connectivity but because of the difficulty of collecting reliable movement data, most network analysis proceeds with only indirect information on movement across landscapes rather than using observed movement to construct networks. Statistical models developed for social networks provide promising alternatives for landscape network construction because they can leverage limited movement information to predict linkages. Using two mark-recapture datasets on individual movement and connectivity across landscapes, we test whether commonly used network constructions for interpreting connectivity can predict actual linkages and network structure, and we contrast these approaches to social network models. We find that currently applied network constructions for assessing connectivity consistently, and substantially, overpredict actual connectivity, resulting in considerable overestimation of metapopulation lifetime. Furthermore, social network models provide accurate predictions of network structure, and can do so with remarkably limited data on movement. Social network models offer a flexible and powerful way for not only understanding the factors influencing connectivity but also for providing more reliable estimates of connectivity and metapopulation persistence in the face of limited data.

  1. Social network models predict movement and connectivity in ecological landscapes.

    Science.gov (United States)

    Fletcher, Robert J; Acevedo, Miguel A; Reichert, Brian E; Pias, Kyle E; Kitchens, Wiley M

    2011-11-29

    Network analysis is on the rise across scientific disciplines because of its ability to reveal complex, and often emergent, patterns and dynamics. Nonetheless, a growing concern in network analysis is the use of limited data for constructing networks. This concern is strikingly relevant to ecology and conservation biology, where network analysis is used to infer connectivity across landscapes. In this context, movement among patches is the crucial parameter for interpreting connectivity but because of the difficulty of collecting reliable movement data, most network analysis proceeds with only indirect information on movement across landscapes rather than using observed movement to construct networks. Statistical models developed for social networks provide promising alternatives for landscape network construction because they can leverage limited movement information to predict linkages. Using two mark-recapture datasets on individual movement and connectivity across landscapes, we test whether commonly used network constructions for interpreting connectivity can predict actual linkages and network structure, and we contrast these approaches to social network models. We find that currently applied network constructions for assessing connectivity consistently, and substantially, overpredict actual connectivity, resulting in considerable overestimation of metapopulation lifetime. Furthermore, social network models provide accurate predictions of network structure, and can do so with remarkably limited data on movement. Social network models offer a flexible and powerful way for not only understanding the factors influencing connectivity but also for providing more reliable estimates of connectivity and metapopulation persistence in the face of limited data.

  2. A Complex Network Approach to Distributional Semantic Models.

    Directory of Open Access Journals (Sweden)

    Akira Utsumi

    Full Text Available A number of studies on network analysis have focused on language networks based on free word association, which reflects human lexical knowledge, and have demonstrated the small-world and scale-free properties in the word association network. Nevertheless, there have been very few attempts at applying network analysis to distributional semantic models, despite the fact that these models have been studied extensively as computational or cognitive models of human lexical knowledge. In this paper, we analyze three network properties, namely, small-world, scale-free, and hierarchical properties, of semantic networks created by distributional semantic models. We demonstrate that the created networks generally exhibit the same properties as word association networks. In particular, we show that the distribution of the number of connections in these networks follows the truncated power law, which is also observed in an association network. This indicates that distributional semantic models can provide a plausible model of lexical knowledge. Additionally, the observed differences in the network properties of various implementations of distributional semantic models are consistently explained or predicted by considering the intrinsic semantic features of a word-context matrix and the functions of matrix weighting and smoothing. Furthermore, to simulate a semantic network with the observed network properties, we propose a new growing network model based on the model of Steyvers and Tenenbaum. The idea underlying the proposed model is that both preferential and random attachments are required to reflect different types of semantic relations in network growth process. We demonstrate that this model provides a better explanation of network behaviors generated by distributional semantic models.

  3. Orthogonal Operation of Constitutional Dynamic Networks Consisting of DNA-Tweezer Machines.

    Science.gov (United States)

    Yue, Liang; Wang, Shan; Cecconello, Alessandro; Lehn, Jean-Marie; Willner, Itamar

    2017-12-26

    Overexpression or down-regulation of cellular processes are often controlled by dynamic chemical networks. Inspired by nature, we introduce constitutional dynamic networks (CDNs) as systems that emulate the principles of these natural processes. The CDNs comprise dynamically interconvertible equilibrated constituents that respond to external triggers by adapting the composition of the dynamic mixture to the energetic stabilization of the constituents. We introduce a nucleic acid-based CDN that includes four interconvertible and mechanically triggered tweezers, AA', BB', AB' and BA', existing in closed, closed, open, and open configurations, respectively. By subjecting the CDN to auxiliary triggers, the guided stabilization of one of the network constituents dictates the dynamic reconfiguration of the structures of the tweezers constituents. The orthogonal and reversible operations of the CDN DNA tweezers are demonstrated, using T-A·T triplex or K⁺-stabilized G-quadruplex as structural motifs that control the stabilities of the constituents. The implications of the study rest on the possible applications of input-guided CDN assemblies for sensing, logic gate operations, and programmed activation of molecular machines.

  4. Toward a self-consistent and unitary reaction network for big bang nucleosynthesis

    Energy Technology Data Exchange (ETDEWEB)

    Paris, Mark W.; Brown, Lowell S.; Hale, Gerald M.; Hayes-Sterbenz, Anna C.; Jungman, Gerard; Kawano, Toshihiko, E-mail: mparis@lanl.gov [Los Alamos National Laboratory, Los Alamos, New Mexico (United States); Fuller, George M.; Grohs, Evan B. [Department of Physics, University of California, San Diego, La Jolla, CA (United States); Kunieda, Satoshi [Nuclear Data Center, Japan Atomic Energy Agency, Tokai-mura Naka-gun, Ibaraki (Japan)

    2014-07-01

    Unitarity, the mathematical expression of the conservation of probability in multichannel reactions, is an essential ingredient in the development of accurate nuclear reaction networks appropriate for nucleosynthesis in a variety of environments. We describe our ongoing program to develop a 'unitary reaction network' for the big-bang nucleosynthesis environment and look at an example of the need and power of unitary parametrizations of nuclear scattering and reaction data. Recent attention has been focused on the possible role of the ⁹B compound nuclear system in the resonant destruction of ⁷Li during primordial nucleosynthesis. We have studied reactions in the ⁹B compound system with a multichannel, two-body unitary R-matrix code (EDA) using the known elastic and reaction data, in a four-channel treatment. The data include elastic ⁶Li(³He,³He)⁶Li differential cross sections from 0.7 to 2.0 MeV, integrated reaction cross sections for energies from 0.7 to 5.0 MeV for ⁶Li(³He,p)⁸Be* and from 0.4 to 5.0 MeV for the ⁶Li(³He,γ)⁷Be reaction. Capture data have been added to the previous analysis with integrated cross section measurements from 0.7 to 0.825 MeV for ⁶Li(³He,γ)⁹B. The resulting resonance parameters are compared with tabulated values from TUNL Nuclear Data Group analyses. Previously unidentified resonances are noted and the relevance of this analysis and a unitary reaction network for big-bang nucleosynthesis are emphasized. (author)

  5. Toward a self-consistent and unitary reaction network for big bang nucleosynthesis

    International Nuclear Information System (INIS)

    Paris, Mark W.; Brown, Lowell S.; Hale, Gerald M.; Hayes-Sterbenz, Anna C.; Jungman, Gerard; Kawano, Toshihiko; Fuller, George M.; Grohs, Evan B.; Kunieda, Satoshi

    2014-01-01

Unitarity, the mathematical expression of the conservation of probability in multichannel reactions, is an essential ingredient in the development of accurate nuclear reaction networks appropriate for nucleosynthesis in a variety of environments. We describe our ongoing program to develop a 'unitary reaction network' for the big-bang nucleosynthesis environment and look at an example of the need for and power of unitary parametrizations of nuclear scattering and reaction data. Recent attention has been focused on the possible role of the ⁹B compound nuclear system in the resonant destruction of ⁷Li during primordial nucleosynthesis. We have studied reactions in the ⁹B compound system with a multichannel, two-body unitary R-matrix code (EDA) using the known elastic and reaction data, in a four-channel treatment. The data include elastic ⁶Li(³He,³He)⁶Li differential cross sections from 0.7 to 2.0 MeV, integrated reaction cross sections for energies from 0.7 to 5.0 MeV for ⁶Li(³He,p)⁸Be* and from 0.4 to 5.0 MeV for the ⁶Li(³He,γ)⁷Be reaction. Capture data have been added to the previous analysis with integrated cross section measurements from 0.7 to 0.825 MeV for ⁶Li(³He,γ)⁹B. The resulting resonance parameters are compared with tabulated values from TUNL Nuclear Data Group analyses. Previously unidentified resonances are noted, and the relevance of this analysis and of a unitary reaction network for big-bang nucleosynthesis is emphasized. (author)

  6. Reconstruction of Consistent 3d CAD Models from Point Cloud Data Using a Priori CAD Models

    Science.gov (United States)

    Bey, A.; Chaine, R.; Marc, R.; Thibault, G.; Akkouche, S.

    2011-09-01

We address the reconstruction of 3D CAD models from point cloud data acquired in industrial environments, using a pre-existing 3D model as an initial estimate of the scene to be processed. This prior knowledge can be used to drive the reconstruction so as to generate an accurate 3D model matching the point cloud. We focus in particular on the cylindrical parts of the 3D models. We propose to state the problem in a probabilistic framework: we search for the 3D model which maximizes a probability taking several constraints into account, such as its relevance with respect to the point cloud and the a priori 3D model, and the consistency of the reconstructed model. The resulting optimization problem can then be handled using a stochastic exploration of the solution space, based on the random insertion of elements into the configuration under construction, coupled with a greedy management of conflicts which efficiently improves the configuration at each step. We show that this approach provides reliable reconstructed 3D models by presenting results on industrial data sets.

  7. A Networks Approach to Modeling Enzymatic Reactions.

    Science.gov (United States)

    Imhof, P

    2016-01-01

Modeling enzymatic reactions is a demanding task due to the complexity of the system, the many degrees of freedom involved, and the complex chemical and conformational transitions associated with the reaction. Consequently, enzymatic reactions are not determined by precisely one reaction pathway. Hence, it is beneficial to obtain a comprehensive picture of possible reaction paths and competing mechanisms. By combining individually generated intermediate states and chemical transition steps, a network of such pathways can be constructed. Transition networks are a discretized representation of a potential energy landscape, consisting of a multitude of reaction pathways connecting the end states of the reaction. The graph structure of the network allows an easy identification of the energetically most favorable pathways, as well as a number of alternative routes. © 2016 Elsevier Inc. All rights reserved.
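
The graph structure described above turns "find the energetically most favorable pathway" into a shortest-path query. A minimal sketch with Dijkstra's algorithm follows; the state names and barrier energies are invented for illustration, not data from the article:

```python
import heapq

def lowest_energy_path(graph, start, goal):
    """Dijkstra's algorithm over a transition network whose edge weights
    are non-negative barrier energies between neighbouring states."""
    dist = {start: 0.0}
    prev = {}
    pq = [(0.0, start)]
    done = set()
    while pq:
        d, node = heapq.heappop(pq)
        if node in done:
            continue
        done.add(node)
        if node == goal:
            break
        for nbr, w in graph.get(node, {}).items():
            nd = d + w
            if nd < dist.get(nbr, float("inf")):
                dist[nbr] = nd
                prev[nbr] = node
                heapq.heappush(pq, (nd, nbr))
    path, node = [goal], goal
    while node != start:          # walk back through predecessors
        node = prev[node]
        path.append(node)
    return list(reversed(path)), dist[goal]

# Hypothetical network: reactant R, intermediates I1/I2, product P;
# edge weights are made-up barrier energies (arbitrary units).
network = {
    "R":  {"I1": 12.0, "I2": 8.0},
    "I1": {"P": 3.0},
    "I2": {"P": 9.0},
}
path, cost = lowest_energy_path(network, "R", "P")  # R -> I1 -> P, total 15.0
```

Enumerating the next-best routes (the "alternative routes" mentioned above) would use a k-shortest-paths variant of the same idea.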

  8. Disruption of a regulatory network consisting of neutrophils and platelets fosters persisting inflammation in rheumatic diseases

    Directory of Open Access Journals (Sweden)

    Norma eMaugeri

    2016-05-01

A network of cellular interactions that involve blood leukocytes and platelets maintains vessel homeostasis. It plays a critical role in the response to invading microbes by recruiting intravascular immunity and through the generation of Neutrophil Extracellular Traps (NETs) and immunothrombosis. Moreover, it enables immune cells to respond to remote chemoattractants by crossing the endothelial barrier and reaching sites of infection. Once the network operating under physiological conditions is disrupted, the reciprocal activation of cells in the blood and the vessel walls determines the vascular remodelling via inflammatory signals delivered to stem/progenitor cells. A deregulated leukocyte/mural cell interaction is an early critical event in the natural history of systemic inflammation. Despite intense efforts, the signals that initiate and sustain the immune-mediated vessel injury, or those that enforce the often-prolonged phases of clinical quiescence in patients with vasculitis, have only been partially elucidated. Here we discuss recent evidence that implicates the prototypic Damage-Associated Molecular Pattern/alarmin, the High Mobility Group Box 1 (HMGB1) protein, in systemic vasculitis and in the vascular inflammation associated with systemic sclerosis. HMGB1 could represent a player in the pathogenesis of rheumatic diseases and an attractive target for molecular interventions.

  9. Disruption of a Regulatory Network Consisting of Neutrophils and Platelets Fosters Persisting Inflammation in Rheumatic Diseases.

    Science.gov (United States)

    Maugeri, Norma; Rovere-Querini, Patrizia; Manfredi, Angelo A

    2016-01-01

    A network of cellular interactions that involve blood leukocytes and platelets maintains vessel homeostasis. It plays a critical role in the response to invading microbes by recruiting intravascular immunity and through the generation of neutrophil extracellular traps (NETs) and immunothrombosis. Moreover, it enables immune cells to respond to remote chemoattractants by crossing the endothelial barrier and reaching sites of infection. Once the network operating under physiological conditions is disrupted, the reciprocal activation of cells in the blood and the vessel walls determines the vascular remodeling via inflammatory signals delivered to stem/progenitor cells. A deregulated leukocyte/mural cell interaction is an early critical event in the natural history of systemic inflammation. Despite intense efforts, the signals that initiate and sustain the immune-mediated vessel injury, or those that enforce the often-prolonged phases of clinical quiescence in patients with vasculitis, have only been partially elucidated. Here, we discuss recent evidence that implicates the prototypic damage-associated molecular pattern/alarmin, the high mobility group box 1 (HMGB1) protein in systemic vasculitis and in the vascular inflammation associated with systemic sclerosis. HMGB1 could represent a player in the pathogenesis of rheumatic diseases and an attractive target for molecular interventions.

  10. Spatial Models and Networks of Living Systems

    DEFF Research Database (Denmark)

    Juul, Jeppe Søgaard

    When studying the dynamics of living systems, insight can often be gained by developing a mathematical model that can predict future behaviour of the system or help classify system characteristics. However, in living cells, organisms, and especially groups of interacting individuals, a large number...... variables of the system. However, this approach disregards any spatial structure of the system, which may potentially change the behaviour drastically. An alternative approach is to construct a cellular automaton with nearest neighbour interactions, or even to model the system as a complex network...... with interactions defined by network topology. In this thesis I first describe three different biological models of ageing and cancer, in which spatial structure is important for the system dynamics. I then turn to describe characteristics of ecosystems consisting of three cyclically interacting species...

  11. Continuum Model for River Networks

    Science.gov (United States)

    Giacometti, Achille; Maritan, Amos; Banavar, Jayanth R.

    1995-07-01

    The effects of erosion, avalanching, and random precipitation are captured in a simple stochastic partial differential equation for modeling the evolution of river networks. Our model leads to a self-organized structured landscape and to abstraction and piracy of the smaller tributaries as the evolution proceeds. An algebraic distribution of the average basin areas and a power law relationship between the drainage basin area and the river length are found.
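
The power-law relationship mentioned above between drainage basin area and river length can be checked on data with a simple log-log least-squares fit. This sketch uses synthetic basins obeying an assumed exponent; neither the prefactor nor the exponent is taken from the paper:

```python
import math

def fit_power_law(areas, lengths):
    """Least-squares fit of log(L) = h*log(A) + log(k) for L = k * A**h,
    returning the exponent h and prefactor k."""
    xs = [math.log(a) for a in areas]
    ys = [math.log(l) for l in lengths]
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    h = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
        sum((x - mx) ** 2 for x in xs)
    return h, math.exp(my - h * mx)

# Synthetic data following L = 1.4 * A**0.6 exactly (arbitrary choices)
areas = [10.0, 100.0, 1000.0, 10000.0]
lengths = [1.4 * a ** 0.6 for a in areas]
h, k = fit_power_law(areas, lengths)  # recovers h = 0.6, k = 1.4
```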

  12. A last updating evolution model for online social networks

    Science.gov (United States)

    Bu, Zhan; Xia, Zhengyou; Wang, Jiandong; Zhang, Chengcui

    2013-05-01

As information technology has advanced, people are turning to electronic media more frequently for communication, and social relationships are increasingly found on online channels. However, there is very limited knowledge about the actual evolution of online social networks. In this paper, we propose and study a novel evolution network model based on the new concept of “last updating time”, which exists in many real-life online social networks. The last updating evolution network model maintains the robustness of scale-free networks and improves network resilience against intentional attacks. Moreover, we found that it exhibits the “small-world effect”, an inherent property of most social networks. Simulation experiments based on this model show that the results are consistent with real-life data, which means that our model is valid.
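
As a sketch of how a "last updating time" can enter a growth model, the following combines degree-based preferential attachment with an exponential recency weight. The specific weighting, the decay constant, and all parameters are assumptions for illustration, not the model published in the paper:

```python
import random

def grow_network(n_nodes, m=2, decay=0.9, seed=42):
    """Grow a network in which a new node attaches to existing nodes with
    probability proportional to degree * decay**(age since last update).
    Receiving a link counts as an update, so recently active nodes stay
    attractive while stale ones fade (illustrative rule)."""
    rng = random.Random(seed)
    edges = [(0, 1)]
    degree = {0: 1, 1: 1}
    last_update = {0: 0, 1: 0}
    for t in range(2, n_nodes):
        weights = {v: degree[v] * decay ** (t - last_update[v]) for v in degree}
        total = sum(weights.values())
        targets = set()
        while len(targets) < min(m, len(degree)):
            r = rng.uniform(0, total)     # roulette-wheel selection
            acc = 0.0
            for v, w in weights.items():
                acc += w
                if acc >= r:
                    targets.add(v)
                    break
        for v in targets:
            edges.append((t, v))
            degree[v] += 1
            last_update[v] = t            # linking refreshes the target
        degree[t] = len(targets)
        last_update[t] = t
    return edges, degree

edges, degree = grow_network(50)
```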

  13. Modelling the impact of social network on energy savings

    International Nuclear Information System (INIS)

    Du, Feng; Zhang, Jiangfeng; Li, Hailong; Yan, Jinyue; Galloway, Stuart; Lo, Kwok L.

    2016-01-01

Highlights: • Energy saving propagation along a social network is modelled. • This model consists of a time evolving weighted directed network. • Network weights and information decay are applied in savings calculation. - Abstract: It is noted that human behaviour changes can have a significant impact on energy consumption; however, quantitative study of such an impact is still very limited, and it is necessary to develop the corresponding mathematical models to describe how much energy saving can be achieved through human engagement. In this paper a mathematical model of human behavioural dynamic interactions on a social network is derived to calculate energy savings. This model consists of a weighted directed network with time-evolving information on each node. Energy savings from the whole network are expressed as a mathematical expectation from probability theory. This expected energy savings model includes both direct and indirect energy savings of individuals in the network. The savings model is obtained from the network weights and modified by the decay of information. Expected energy savings are calculated for cases where individuals in the social network are treated as a single information source or as multiple sources. The model is tested on a social network consisting of 40 people. The results show that the strength of relations between individuals is more important to information diffusion than the number of connections individuals have. The expected energy savings of an optimally chosen node can be 25.32% higher than those of randomly chosen nodes at the end of the second month for the case of a single information source in the network, and 16.96% higher than those of random nodes for the case of multiple information sources. This illustrates that the model presented in this paper can be used to determine which individuals will have the most influence on the social network, which in turn provides a useful guide for identifying targeted customers in energy efficiency technology rollout.
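
The expectation-based savings calculation described above can be sketched as follows. Reading the edge weights as adoption probabilities and attenuating information exponentially per hop are assumptions for illustration, not the paper's exact formulas:

```python
import math

def expected_savings(adj, source, base_saving, decay=0.1):
    """Expected network-wide savings from a single information source.

    adj[i][j] is the directed influence weight of i on j in (0, 1],
    read here as the probability that j adopts i's energy-saving
    behaviour; each hop additionally attenuates the information by
    exp(-decay). Both functional forms are illustrative assumptions.
    """
    best = {source: 1.0}               # best adoption probability per node
    frontier = [(source, 1.0)]
    while frontier:                    # propagate, keeping the best path
        node, p = frontier.pop()
        for nbr, w in adj.get(node, {}).items():
            q = p * w * math.exp(-decay)
            if q > best.get(nbr, 0.0):
                best[nbr] = q
                frontier.append((nbr, q))
    # expected savings = sum over nodes of (adoption probability * saving)
    return base_saving * sum(best.values())

# Hypothetical 4-person network with directed influence weights
adj = {0: {1: 0.8, 2: 0.5}, 1: {3: 0.6}, 2: {3: 0.9}}
total = expected_savings(adj, 0, base_saving=10.0)
```

Comparing `expected_savings(adj, s, ...)` across candidate sources `s` is the kind of query the paper uses to identify the most influential individuals.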

  14. Biological transportation networks: Modeling and simulation

    KAUST Repository

    Albi, Giacomo

    2015-09-15

    We present a model for biological network formation originally introduced by Cai and Hu [Adaptation and optimization of biological transport networks, Phys. Rev. Lett. 111 (2013) 138701]. The modeling of fluid transportation (e.g., leaf venation and angiogenesis) and ion transportation networks (e.g., neural networks) is explained in detail and basic analytical features like the gradient flow structure of the fluid transportation network model and the impact of the model parameters on the geometry and topology of network formation are analyzed. We also present a numerical finite-element based discretization scheme and discuss sample cases of network formation simulations.

  15. Network modelling methods for FMRI.

    Science.gov (United States)

    Smith, Stephen M; Miller, Karla L; Salimi-Khorshidi, Gholamreza; Webster, Matthew; Beckmann, Christian F; Nichols, Thomas E; Ramsey, Joseph D; Woolrich, Mark W

    2011-01-15

    There is great interest in estimating brain "networks" from FMRI data. This is often attempted by identifying a set of functional "nodes" (e.g., spatial ROIs or ICA maps) and then conducting a connectivity analysis between the nodes, based on the FMRI timeseries associated with the nodes. Analysis methods range from very simple measures that consider just two nodes at a time (e.g., correlation between two nodes' timeseries) to sophisticated approaches that consider all nodes simultaneously and estimate one global network model (e.g., Bayes net models). Many different methods are being used in the literature, but almost none has been carefully validated or compared for use on FMRI timeseries data. In this work we generate rich, realistic simulated FMRI data for a wide range of underlying networks, experimental protocols and problematic confounds in the data, in order to compare different connectivity estimation approaches. Our results show that in general correlation-based approaches can be quite successful, methods based on higher-order statistics are less sensitive, and lag-based approaches perform very poorly. More specifically: there are several methods that can give high sensitivity to network connection detection on good quality FMRI data, in particular, partial correlation, regularised inverse covariance estimation and several Bayes net methods; however, accurate estimation of connection directionality is more difficult to achieve, though Patel's τ can be reasonably successful. With respect to the various confounds added to the data, the most striking result was that the use of functionally inaccurate ROIs (when defining the network nodes and extracting their associated timeseries) is extremely damaging to network estimation; hence, results derived from inappropriate ROI definition (such as via structural atlases) should be regarded with great caution. Copyright © 2010 Elsevier Inc. All rights reserved.
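
Partial correlation, one of the better-performing approaches named above, can be read off the inverse covariance (precision) matrix. A minimal pure-Python three-node sketch follows; real FMRI pipelines use regularised estimators over many more nodes, and the synthetic chain data here is invented for illustration:

```python
import math
import random

def partial_correlations(series):
    """Partial correlations of three timeseries from the precision
    matrix P = inverse(covariance): r_ij = -P_ij / sqrt(P_ii * P_jj)."""
    n = len(series[0])
    means = [sum(s) / n for s in series]
    cov = [[sum((series[u][t] - means[u]) * (series[v][t] - means[v])
                for t in range(n)) / (n - 1)
            for v in range(3)] for u in range(3)]
    # invert the 3x3 covariance via the adjugate formula
    a, b, c = cov[0]
    d, e, f = cov[1]
    g, h, i = cov[2]
    det = a * (e * i - f * h) - b * (d * i - f * g) + c * (d * h - e * g)
    P = [[(e * i - f * h) / det, (c * h - b * i) / det, (b * f - c * e) / det],
         [(f * g - d * i) / det, (a * i - c * g) / det, (c * d - a * f) / det],
         [(d * h - e * g) / det, (b * g - a * h) / det, (a * e - b * d) / det]]
    return [[1.0 if u == v else -P[u][v] / math.sqrt(P[u][u] * P[v][v])
             for v in range(3)] for u in range(3)]

# Synthetic chain x -> y -> z: conditioning on y should remove the
# apparent x-z link (partial correlation near zero), while full
# correlation between x and z would be large.
rng = random.Random(0)
x = [rng.gauss(0, 1) for _ in range(500)]
y = [xi + 0.5 * rng.gauss(0, 1) for xi in x]
z = [yi + 0.5 * rng.gauss(0, 1) for yi in y]
pc = partial_correlations([x, y, z])
```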

  16. Adjoint-consistent formulations of slip models for coupled electroosmotic flow systems

    KAUST Repository

    Garg, Vikram V; Prudhomme, Serge; van der Zee, Kris G; Carey, Graham F

    2014-01-01

    Models based on the Helmholtz `slip' approximation are often used for the simulation of electroosmotic flows. The objectives of this paper are to construct adjoint-consistent formulations of such models, and to develop adjoint

  17. Research on the model of home networking

    Science.gov (United States)

    Yun, Xiang; Feng, Xiancheng

    2007-11-01

Combining voice, data, and broadband audio-video services over the IP protocol, so that various real-time and interactive services can be delivered to terminal users (homes), is a research hotspot of current broadband networking. Home networking, also called the digital home network, is a new kind of network and application technology that can provide various services inside a home or between homes. It means that PCs, home entertainment equipment, home appliances, home wiring, security, and illumination systems communicate with each other through some networking technology, constitute a network inside the home, and connect to the WAN via a home gateway. Currently, home networking equipment can be divided into three kinds: information equipment, home appliances, and communication equipment. Equipment inside the home network can exchange information with the outside network through the home gateway; this communication is bidirectional: users can obtain information and services provided by the public network from equipment inside the home network, and can likewise obtain the information and resources needed to control that internal equipment. Based on the general network model of home networking, there are four functional entities inside the home network: (1) HA (Home Access), the home network access function entity; (2) HB (Home Bridge), the home network bridge function entity; (3) HC (Home Client), the home network client function entity; and (4) HD (Home Device), the decoder function entity. There are many physical ways to implement these four functional entities, and based on them there are reference models for the physical layer, the link layer, the IP layer, and higher-layer applications.
In the future home network

  18. Mathematical Modelling Plant Signalling Networks

    KAUST Repository

    Muraro, D.

    2013-01-01

    During the last two decades, molecular genetic studies and the completion of the sequencing of the Arabidopsis thaliana genome have increased knowledge of hormonal regulation in plants. These signal transduction pathways act in concert through gene regulatory and signalling networks whose main components have begun to be elucidated. Our understanding of the resulting cellular processes is hindered by the complex, and sometimes counter-intuitive, dynamics of the networks, which may be interconnected through feedback controls and cross-regulation. Mathematical modelling provides a valuable tool to investigate such dynamics and to perform in silico experiments that may not be easily carried out in a laboratory. In this article, we firstly review general methods for modelling gene and signalling networks and their application in plants. We then describe specific models of hormonal perception and cross-talk in plants. This mathematical analysis of sub-cellular molecular mechanisms paves the way for more comprehensive modelling studies of hormonal transport and signalling in a multi-scale setting. © EDP Sciences, 2013.
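
Where the article above discusses mathematical models of hormonal cross-talk, a minimal toy example of the kind of ODE system involved looks like this. All parameter values and functional forms are illustrative assumptions, not taken from any specific plant model:

```python
def simulate_crosstalk(steps=10000, dt=0.001):
    """Forward-Euler integration of a toy two-hormone module: hormone A
    promotes production of B, while B represses production of A through
    a Hill-type term (a negative feedback loop). Parameters invented."""
    a, b = 1.0, 0.0
    for _ in range(steps):
        da = 1.0 / (1.0 + b ** 2) - 0.5 * a   # repressed production, linear decay
        db = 0.8 * a - 0.4 * b                # promoted production, linear decay
        a += dt * da
        b += dt * db
    return a, b

a, b = simulate_crosstalk()   # the system settles near the steady state b = 2a
```

Even this two-variable module shows the "sometimes counter-intuitive" dynamics mentioned above: the negative feedback produces a damped oscillatory approach to the steady state rather than a monotone one.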

  19. Growth of cortical neuronal network in vitro: Modeling and analysis

    International Nuclear Information System (INIS)

    Lai, P.-Y.; Jia, L. C.; Chan, C. K.

    2006-01-01

We present a detailed analysis and theoretical growth models to account for recent experimental data on the growth of cortical neuronal networks in vitro [Phys. Rev. Lett. 93, 088101 (2004)]. The experimentally observed synchronized firing frequency of a well-connected neuronal network is shown to be proportional to the mean network connectivity. The growth of the network is consistent with a model of early enhanced growth of connections, followed by retarded growth once the synchronized cluster is formed. Microscopic models with dominant excluded-volume interactions are consistent with the observed exponential decay of the mean connection probability as a function of the mean network connectivity. The biological implications of the growth model are also discussed.

  20. Energy modelling in sensor networks

    Science.gov (United States)

    Schmidt, D.; Krämer, M.; Kuhn, T.; Wehn, N.

    2007-06-01

    Wireless sensor networks are one of the key enabling technologies for the vision of ambient intelligence. Energy resources for sensor nodes are very scarce. A key challenge is the design of energy efficient communication protocols. Models of the energy consumption are needed to accurately simulate the efficiency of a protocol or application design, and can also be used for automatic energy optimizations in a model driven design process. We propose a novel methodology to create models for sensor nodes based on few simple measurements. In a case study the methodology was used to create models for MICAz nodes. The models were integrated in a simulation environment as well as in a SDL runtime framework of a model driven design process. Measurements on a test application that was created automatically from an SDL specification showed an 80% reduction in energy consumption compared to an implementation without power saving strategies.
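
A state-based power model of the kind such measurements produce can be sketched as follows. The state names and the milliwatt figures are assumptions, chosen loosely in the range reported for MICAz-class motes, not the paper's measured model:

```python
def energy_consumed(trace, power_mw):
    """Total energy in millijoules for a node, given a trace of
    (state, duration_seconds) pairs and a per-state power model in mW.
    mW * s = mJ, so a plain weighted sum suffices."""
    return sum(power_mw[state] * duration for state, duration in trace)

# Illustrative per-state power draw (mW) for a hypothetical mote
power_mw = {"sleep": 0.03, "idle": 3.0, "rx": 60.0, "tx": 52.0}

# One duty cycle: mostly asleep, with a brief wake-up to receive and transmit
trace = [("sleep", 0.95), ("idle", 0.02), ("rx", 0.02), ("tx", 0.01)]
e = energy_consumed(trace, power_mw)   # 1.8085 mJ for this cycle
```

Feeding simulated protocol traces through such a model is what lets a design process compare power-saving strategies before deployment, as the abstract describes.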

  1. Biological transportation networks: Modeling and simulation

    KAUST Repository

Albi, Giacomo; Artina, Marco; Fornasier, Massimo; Markowich, Peter A.

    2015-01-01

    We present a model for biological network formation originally introduced by Cai and Hu [Adaptation and optimization of biological transport networks, Phys. Rev. Lett. 111 (2013) 138701]. The modeling of fluid transportation (e.g., leaf venation

  2. Brand Marketing Model on Social Networks

    Directory of Open Access Journals (Sweden)

    Jolita Jezukevičiūtė

    2014-04-01

The paper analyzes the brand and its marketing solutions on social networks. This analysis led to the creation of an improved brand marketing model on social networks, which will contribute to rapid and cheap organization brand recognition, increase competitive advantage, and enhance consumer loyalty. Therefore, the brand and a variety of social networks are becoming a hot research area for brand marketing models on social networks. An exploratory analysis of a single case study of the world's most successful brand marketing models revealed the brand marketing social networking tools that affect consumers the most. Based on information analysis and methodological studies, a brand marketing model on social networks is developed.

  3. Analysis of organizational culture with social network models

    OpenAIRE

    Titov, S.

    2015-01-01

Organizational culture is nowadays an object of numerous scientific papers. However, only a marginal part of existing research attempts to use formal models of organizational culture. The lack of organizational culture models significantly limits further research in this area and restricts the application of the theory to the practice of organizational culture change projects. The article consists of general views on potential application of network models and social network analysis to th...

  4. Entanglement effects in model polymer networks

    Science.gov (United States)

    Everaers, R.; Kremer, K.

The influence of topological constraints on the local dynamics in cross-linked polymer melts and their contribution to the elastic properties of rubber-elastic systems are a long-standing problem in statistical mechanics. Polymer networks with diamond lattice connectivity (Everaers and Kremer 1995, Everaers and Kremer 1996a) are idealized model systems which isolate the effect of topology conservation from other sources of quenched disorder. We study their behavior in molecular dynamics simulations under elongational strain. In our analysis we compare the measured, purely entropic shear moduli G to the predictions of statistical mechanical models of rubber elasticity, making extensive use of the microscopic structural and topological information available in computer simulations. We find (Everaers and Kremer 1995) that the classical models of rubber elasticity significantly underestimate the true change in entropy in a deformed network, because they neglect the tension along the contour of the strands which cannot relax due to entanglements (Everaers and Kremer (in preparation)). This contribution and the fluctuations in strained systems seem to be well described by the constrained mode model (Everaers 1998), which allows one to treat the crossover from classical rubber elasticity to the tube model for polymer networks with increasing strand length within one transparent formalism. While this is important for describing the effects, we also take a first quantitative step towards explaining them by topological considerations. We show (Everaers and Kremer 1996a) that for the comparatively short strand lengths of our diamond networks the topology contribution to the shear modulus is proportional to the density of entangled mesh pairs with non-zero Gauss linking number. Moreover, the prefactor can be estimated consistently within a rather simple model developed by Vologodskii et al. and by Graessley and Pearson, which is based on the definition of an entropic

  5. Advancing nucleosynthesis in self-consistent, multidimensional models of core-collapse supernovae

    International Nuclear Information System (INIS)

    Austin Harris, J.; Chertkow, M.A.; Blondin, J.M.; Pedro Marronetti; Florida Atlantic University, Boca Raton, FL

    2014-01-01

We investigate CCSN in polar axisymmetric simulations using the multidimensional radiation hydrodynamics code CHIMERA. Computational costs have traditionally constrained the evolution of the nuclear composition in CCSN models to, at best, a 14-species α-network. However, the limited capacity of the α-network to accurately evolve detailed composition, the neutronization and the nuclear energy generation rate has fettered the ability of prior CCSN simulations to accurately reproduce the chemical abundances and energy distributions as known from observations. These deficits can be partially ameliorated by 'post-processing' with a more realistic network. Lagrangian tracer particles placed throughout the star record the temporal evolution of the initial simulation and enable the extension of the nuclear network evolution by incorporating larger systems in post-processing nucleosynthesis calculations. We present post-processing results of four ab initio axisymmetric CCSN 2D models evolved with the smaller α-network, and initiated from stellar-metallicity, nonrotating progenitors of mass 12, 15, 20, and 25 M☉. As a test of the limitations of post-processing, we provide preliminary results from an ongoing simulation of the 15 M☉ model evolved with a realistic 150-species nuclear reaction network in situ. With more accurate energy generation rates and an improved determination of the thermodynamic trajectories of the tracer particles, we can better unravel the complicated multidimensional 'mass-cut' in CCSN simulations and probe for less energetically significant nuclear processes like the νp-process and the r-process, which require still larger networks. (author)

  6. Self-consistent imbedding and the ellipsoidal model for porous rocks

    International Nuclear Information System (INIS)

    Korringa, J.; Brown, R.J.S.; Thompson, D.D.; Runge, R.J.

    1979-01-01

Equations are obtained for the effective elastic moduli for a model of an isotropic, heterogeneous, porous medium. The mathematical model used for computation is abstract in that it is not simply a rigorous computation for a composite medium of some idealized geometry, although the computation contains individual steps which are just that. Both the solid part and the pore space are represented by ellipsoidal or spherical 'grains' or 'pores' of various sizes and shapes. The strain of each grain, caused by external forces applied to the medium, is calculated in a self-consistent imbedding (SCI) approximation, which replaces the true surrounding of any given grain or pore by an isotropic medium defined by the effective moduli to be computed. The ellipsoidal nature of the shapes allows us to use Eshelby's theoretical treatment of a single ellipsoidal inclusion in an infinite homogeneous medium. Results are compared with the literature, and discrepancies are found with all published accounts of this problem. Deviations from the work of Wu, of Walsh, and of O'Connell and Budiansky are attributed to a substitution made by these authors which, though an identity for the exact quantities involved, is only approximate in the SCI calculation. This reduces the validity of the equations to first-order effects only. Differences with the results of Kuster and Toksoez are attributed to the fact that the computation of these authors is not self-consistent in the sense used here. A result seems to be the stiffening of the medium as if the pores are held apart. For spherical grains and pores, their calculated moduli are those given by the Hashin-Shtrikman upper bounds. Our calculation reproduces, in the case of spheres, an early result of Budiansky. An additional feature of our work is that the algebra is simpler than in earlier work. We also incorporate into the theory the possibility that fluid-filled pores are interconnected.

  7. A novel Direct Small World network model

    Directory of Open Access Journals (Sweden)

    LIN Tao

    2016-10-01

Existing computer networks exhibit a certain degree of redundancy and low efficiency. This paper presents a novel Direct Small World network model in order to optimize networks. In this model, several nodes construct a regular network; then some nodes are randomly chosen and replotted to generate the Direct Small World network iteratively. There is no change in average distance or clustering coefficient, yet network performance, such as hop count, is improved. The experiments prove that, compared to the traditional small-world network, the degree, average degree centrality, and average closeness centrality are lower in the Direct Small World network. This illustrates that the nodes in Direct Small World networks are closer together than in the Watts-Strogatz small-world network model. The Direct Small World model can be used not only in the communication of community information, but also in the research of epidemics.
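
Since the abstract leaves the exact node "replotting" rule open, the following sketch uses a Watts-Strogatz-style shortcut addition as a stand-in, just to show how a few random links reduce the mean hop count of a regular ring (all sizes and counts are arbitrary):

```python
import random
from collections import deque

def average_hops(adj):
    """Mean BFS shortest-path length over all connected node pairs."""
    total, pairs = 0, 0
    for s in adj:
        dist = {s: 0}
        queue = deque([s])
        while queue:
            u = queue.popleft()
            for v in adj[u]:
                if v not in dist:
                    dist[v] = dist[u] + 1
                    queue.append(v)
        total += sum(dist.values())
        pairs += len(dist) - 1
    return total / pairs

def ring_lattice(n, k=2):
    """Regular ring where each node links to its k nearest neighbours per side."""
    adj = {i: set() for i in range(n)}
    for i in range(n):
        for j in range(1, k + 1):
            adj[i].add((i + j) % n)
            adj[(i + j) % n].add(i)
    return adj

rng = random.Random(1)
net = ring_lattice(40)
before = average_hops(net)
for _ in range(10):              # add a few random shortcuts
    u, v = rng.sample(range(40), 2)
    net[u].add(v)
    net[v].add(u)
after = average_hops(net)        # mean hop count drops
```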

  8. RMBNToolbox: random models for biochemical networks

    Directory of Open Access Journals (Sweden)

    Niemi Jari

    2007-05-01

Background There is increasing interest in modeling biochemical and cell biological networks, as well as in the computational analysis of these models. The development of analysis methodologies and related software is rapid in the field. However, the number of available models is still relatively small and the model sizes remain limited. The lack of kinetic information is usually the limiting factor for the construction of detailed simulation models. Results We present a computational toolbox for generating random biochemical network models which mimic real biochemical networks. The toolbox is called Random Models for Biochemical Networks. The toolbox works in the Matlab environment, and it makes it possible to generate various network structures, stoichiometries, kinetic laws for reactions, and parameters therein. The generation can be based on statistical rules and distributions, and more detailed information about real biochemical networks can be used in situations where it is known. The toolbox can be easily extended. The resulting network models can be exported in the format of Systems Biology Markup Language. Conclusion While more information is accumulating on biochemical networks, random networks can be used as an intermediate step towards their better understanding. Random networks make it possible to study the effects of various network characteristics on the overall behavior of the network. Moreover, the construction of artificial network models provides the ground-truth data needed in the validation of various computational methods in the fields of parameter estimation and data analysis.
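
The core idea of such a generator can be approximated in a few lines. This sketch draws random mass-action reactions from invented distributions; the actual RMBNToolbox rules, its kinetic-law options, and its SBML export are far richer:

```python
import random

def random_network(n_species, n_reactions, seed=7):
    """Generate a random mass-action reaction network: each reaction
    consumes 1-2 random substrates, produces 1-2 random products
    (disjoint from the substrates), and gets a log-uniformly drawn
    rate constant. All distributions here are illustrative stand-ins."""
    rng = random.Random(seed)
    species = [f"S{i}" for i in range(n_species)]
    reactions = []
    for _ in range(n_reactions):
        subs = rng.sample(species, rng.randint(1, 2))
        prods = rng.sample([s for s in species if s not in subs],
                           rng.randint(1, 2))
        k = 10 ** rng.uniform(-3, 1)   # rate constant in [1e-3, 10]
        reactions.append({"substrates": subs, "products": prods, "rate": k})
    return reactions

rxns = random_network(n_species=10, n_reactions=15)
```

Serialising such a structure to SBML (as the toolbox does) then makes the artificial networks usable as ground truth for parameter-estimation benchmarks.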

  9. Thermodynamically Consistent Algorithms for the Solution of Phase-Field Models

    KAUST Repository

    Vignal, Philippe

    2016-01-01

    of thermodynamically consistent algorithms for time integration of phase-field models. The first part of this thesis focuses on an energy-stable numerical strategy developed for the phase-field crystal equation. This model was put forward to model microstructure

  10. A CVAR scenario for a standard monetary model using theory-consistent expectations

    DEFF Research Database (Denmark)

    Juselius, Katarina

    2017-01-01

    A theory-consistent CVAR scenario describes a set of testable regularities capturing basic assumptions of the theoretical model. Using this concept, the paper considers a standard model for exchange rate determination and shows that all assumptions about the model's shock structure and steady...

  11. Brand Marketing Model on Social Networks

    OpenAIRE

    Jolita Jezukevičiūtė; Vida Davidavičienė

    2014-01-01

    The paper analyzes the brand and its marketing solutions on social networks. This analysis led to the creation of an improved brand marketing model on social networks, which will contribute to rapid and cheap organization brand recognition, increase competitive advantage and enhance consumer loyalty. Therefore, the brand and a variety of social networks are becoming a hot research area for brand marketing models on social networks. The world's most successful brand marketing models exploratory analys...

  12. Brand marketing model on social networks

    OpenAIRE

    Jezukevičiūtė, Jolita; Davidavičienė, Vida

    2014-01-01

    The paper analyzes the brand and its marketing solutions on social networks. This analysis led to the creation of an improved brand marketing model on social networks, which will contribute to rapid and cheap organization brand recognition, increase competitive advantage and enhance consumer loyalty. Therefore, the brand and a variety of social networks are becoming a hot research area for brand marketing models on social networks. The world's most successful brand marketing models exploratory an...

  13. Network Bandwidth Utilization Forecast Model on High Bandwidth Network

    Energy Technology Data Exchange (ETDEWEB)

    Yoo, Wucherl; Sim, Alex

    2014-07-07

    With the increasing number of geographically distributed scientific collaborations and the growth of data sizes, it has become more challenging for users to achieve the best possible network performance on a shared network. We have developed a forecast model to predict expected bandwidth utilization for high-bandwidth wide area networks. The forecast model can improve the efficiency of resource utilization and the scheduling of data movements on high-bandwidth networks to accommodate the ever-increasing data volume of large-scale scientific data applications. A univariate model is developed with STL and ARIMA on SNMP path utilization data. Compared with a traditional approach such as the Box-Jenkins methodology, our forecast model reduces computation time by 83.2%. It also shows resilience against abrupt changes in network usage. The accuracy of the forecast model is within the standard deviation of the monitored measurements.
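The STL+ARIMA pipeline described above can be sketched in miniature. The stand-in below replaces STL with per-phase seasonal means and ARIMA with an AR(1) fit on the deseasonalized remainder, using only the standard library; a real pipeline would use a statistics package, and all names here are invented:

```python
import math

def seasonal_forecast(series, period, horizon):
    """Toy stand-in for an STL+ARIMA pipeline: remove per-phase seasonal
    means, fit an AR(1) to the remainder, recombine for the forecast."""
    n = len(series)
    seasonal = [0.0] * period
    for ph in range(period):
        vals = [series[i] for i in range(ph, n, period)]
        seasonal[ph] = sum(vals) / len(vals)
    resid = [series[i] - seasonal[i % period] for i in range(n)]
    # AR(1) coefficient via the lag-1 autocorrelation of the remainder
    num = sum(resid[i] * resid[i - 1] for i in range(1, n))
    den = sum(r * r for r in resid) or 1.0
    phi = num / den
    return [seasonal[(n - 1 + h) % period] + (phi ** h) * resid[-1]
            for h in range(1, horizon + 1)]

# perfectly periodic utilization: the forecast reproduces the cycle
series = [math.sin(2 * math.pi * t / 12) for t in range(120)]
fc = seasonal_forecast(series, 12, 12)
err = max(abs(fc[h] - math.sin(2 * math.pi * (120 + h) / 12)) for h in range(12))
print(err < 1e-6)  # True
```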

  14. Network bandwidth utilization forecast model on high bandwidth networks

    Energy Technology Data Exchange (ETDEWEB)

    Yoo, Wuchert (William) [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States); Sim, Alex [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States)

    2015-03-30

    With the increasing number of geographically distributed scientific collaborations and the growth of data sizes, it has become more challenging for users to achieve the best possible network performance on a shared network. We have developed a forecast model to predict expected bandwidth utilization for high-bandwidth wide area networks. The forecast model can improve the efficiency of resource utilization and the scheduling of data movements on high-bandwidth networks to accommodate the ever-increasing data volume of large-scale scientific data applications. A univariate model is developed with STL and ARIMA on SNMP path utilization data. Compared with a traditional approach such as the Box-Jenkins methodology, our forecast model reduces computation time by 83.2%. It also shows resilience against abrupt changes in network usage. The accuracy of the forecast model is within the standard deviation of the monitored measurements.

  15. Dynamic thermo-hydraulic model of district cooling networks

    International Nuclear Information System (INIS)

    Oppelt, Thomas; Urbaneck, Thorsten; Gross, Ulrich; Platzer, Bernd

    2016-01-01

    Highlights: • A dynamic thermo-hydraulic model for district cooling networks is presented. • The thermal modelling is based on water segment tracking (Lagrangian approach). • Thus, numerical errors and balance inaccuracies are avoided. • Verification and validation studies proved the reliability of the model. - Abstract: In the present paper, the dynamic thermo-hydraulic model ISENA is presented which can be applied for answering different questions occurring in design and operation of district cooling networks—e.g. related to economic and energy efficiency. The network model consists of a quasistatic hydraulic model and a transient thermal model based on tracking water segments through the whole network (Lagrangian method). Applying this approach, numerical errors and balance inaccuracies can be avoided which leads to a higher quality of results compared to other network models. Verification and validation calculations are presented in order to show that ISENA provides reliable results and is suitable for practical application.
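The Lagrangian water-segment idea is simple enough to sketch: treat each pipe as a FIFO queue of (volume, temperature) parcels that keep their identity as they advect. The sketch below is illustrative only and ignores heat loss, mixing and the hydraulic model; names are invented, not taken from ISENA:

```python
from collections import deque

def advect(pipe, inflow_temp, inflow_vol):
    """Push a water segment (volume, temperature) into the pipe and pop the
    same volume at the far end; segments keep their identity (Lagrangian
    tracking), so no numerical diffusion is introduced."""
    pipe["segments"].appendleft([inflow_vol, inflow_temp])
    out_vol, out_energy = inflow_vol, 0.0
    while out_vol > 1e-12:
        vol, temp = pipe["segments"][-1]
        take = min(vol, out_vol)
        out_energy += take * temp
        if take == vol:
            pipe["segments"].pop()
        else:
            pipe["segments"][-1][0] -= take
        out_vol -= take
    return out_energy / inflow_vol  # volume-weighted outlet temperature

pipe = {"segments": deque([[10.0, 6.0]])}  # 10 m3 of 6 degC water initially
# inject 2 m3 of 12 degC water five times: the outlet still sees old water
outlet = [advect(pipe, 12.0, 2.0) for _ in range(5)]
print(outlet[0], outlet[-1])   # 6.0 6.0
outlet6 = advect(pipe, 12.0, 2.0)
print(outlet6)                 # 12.0  (the first injected segment arrives)
```

Because parcels are never averaged together, the inlet temperature step arrives at the outlet sharply after exactly one residence volume, which is the balance-exactness property the abstract highlights.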

  16. Behavioral Consistency of C and Verilog Programs Using Bounded Model Checking

    National Research Council Canada - National Science Library

    Clarke, Edmund; Kroening, Daniel; Yorav, Karen

    2003-01-01

    .... We present an algorithm that checks behavioral consistency between an ANSI-C program and a circuit given in Verilog using Bounded Model Checking, and we describe experimental results on various reactive ...

  17. Consistent individual differences and population plasticity in network-derived sociality: An experimental manipulation of density in a gregarious ungulate

    Science.gov (United States)

    O’Brien, Paul P.; Vander Wal, Eric

    2018-01-01

    In many taxa, individual social traits appear to be consistent across time and context, thus meeting the criteria for animal personality. How these differences are maintained in response to changes in population density is unknown, particularly in large mammals, such as ungulates. Using a behavioral reaction norm (BRN) framework, we examined how among- and within-individual variation in social connectedness, measured using social network analyses, change as a function of population density. We studied a captive herd of elk (Cervus canadensis) separated into a group of male elk and a group of female elk. Males and females were exposed to three different density treatments and we recorded social associations between individuals with proximity-detecting radio-collars fitted to the elk. We constructed social networks using dyadic association data and calculated three social network metrics reflective of social connectedness: eigenvector centrality, graph strength, and degree. Elk exhibited consistent individual differences in social connectedness across densities; however, they showed little individual variation in their response to changes in density, i.e., individuals oftentimes responded plastically, but in the same manner, to changes in density. Female elk had the highest connectedness at an intermediate density. In contrast, male elk increased connectedness with increasing density. Whereas this may suggest that the benefits of social connectedness outweigh the costs of increased competition at higher density for males, females appear to exhibit a threshold in social benefits (e.g., predator detection and forage information). Our study illustrates the importance of viewing social connectedness as a density-dependent trait, particularly in the context of plasticity.
Moreover, we highlight the need to revisit our understanding of density dependence as a population-level phenomenon by accounting for consistent individual differences not only in social connectedness, but likely
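The metrics named above (degree, graph strength, eigenvector centrality) can be computed directly from a dyadic association matrix. A minimal sketch, assuming a symmetric non-negative association matrix and using power iteration for the centrality; the study itself presumably used standard social-network software:

```python
def network_metrics(W):
    """Degree, strength, and eigenvector centrality from a symmetric,
    non-negative association matrix W (power iteration for the centrality)."""
    n = len(W)
    degree = [sum(1 for j in range(n) if j != i and W[i][j] > 0) for i in range(n)]
    strength = [sum(W[i][j] for j in range(n) if j != i) for i in range(n)]
    x = [1.0] * n
    for _ in range(200):                       # power iteration
        y = [sum(W[i][j] * x[j] for j in range(n)) for i in range(n)]
        norm = max(y) or 1.0
        x = [v / norm for v in y]
    return degree, strength, x

# three hypothetical elk: 0 and 1 associate strongly, 2 is peripheral
W = [[0.0, 0.9, 0.1],
     [0.9, 0.0, 0.1],
     [0.1, 0.1, 0.0]]
deg, stren, cent = network_metrics(W)
print(deg)                   # [2, 2, 2]: everyone has some association
print(stren)                 # [1.0, 1.0, 0.2]: strength separates them
print(cent[2] < cent[0])     # True: peripheral animal has lower centrality
```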

  18. Consistency, Verification, and Validation of Turbulence Models for Reynolds-Averaged Navier-Stokes Applications

    Science.gov (United States)

    Rumsey, Christopher L.

    2009-01-01

    In current practice, it is often difficult to draw firm conclusions about turbulence model accuracy when performing multi-code CFD studies ostensibly using the same model because of inconsistencies in model formulation or implementation in different codes. This paper describes an effort to improve the consistency, verification, and validation of turbulence models within the aerospace community through a website database of verification and validation cases. Some of the variants of two widely-used turbulence models are described, and two independent computer codes (one structured and one unstructured) are used in conjunction with two specific versions of these models to demonstrate consistency with grid refinement for several representative problems. Naming conventions, implementation consistency, and thorough grid resolution studies are key factors necessary for success.

  19. The Functional Segregation and Integration Model: Mixture Model Representations of Consistent and Variable Group-Level Connectivity in fMRI

    DEFF Research Database (Denmark)

    Churchill, Nathan William; Madsen, Kristoffer Hougaard; Mørup, Morten

    2016-01-01

    flexibility: they only estimate segregated structure and do not model interregional functional connectivity, nor do they account for network variability across voxels or between subjects. To address these issues, this letter develops the functional segregation and integration model (FSIM). This extension......The brain consists of specialized cortical regions that exchange information between each other, reflecting a combination of segregated (local) and integrated (distributed) processes that define brain function. Functional magnetic resonance imaging (fMRI) is widely used to characterize...... brain regions where network expression predicts subject age in the experimental data. Thus, the FSIM is effective at summarizing functional connectivity structure in group-level fMRI, with applications in modeling the relationships between network variability and behavioral/demographic variables....

  20. Self-consistent one-gluon exchange in soliton bag models

    International Nuclear Information System (INIS)

    Dodd, L.R.; Adelaide Univ.; Williams, A.G.

    1988-01-01

    The treatment of soliton bag models as two-point boundary value problems is extended to include self-consistent one-gluon exchange interactions. The colour-magnetic contribution to the nucleon-delta mass splitting is calculated self-consistently in the mean-field, one-gluon-exchange approximation for the Friedberg-Lee and Nielsen-Patkos models. Small glueball mass parameters (m_GB ≈ 500 MeV) are favoured. Comparisons with previous calculations are made. (orig.)

  1. An acoustical model based monitoring network

    NARCIS (Netherlands)

    Wessels, P.W.; Basten, T.G.H.; Eerden, F.J.M. van der

    2010-01-01

    In this paper the approach for an acoustical model based monitoring network is demonstrated. This network is capable of reconstructing a noise map, based on the combination of measured sound levels and an acoustic model of the area. By pre-calculating the sound attenuation within the network the

  2. Self-consistent assessment of Englert-Schwinger model on atomic properties

    Science.gov (United States)

    Lehtomäki, Jouko; Lopez-Acevedo, Olga

    2017-12-01

    Our manuscript investigates a self-consistent solution of the statistical atom model proposed by Berthold-Georg Englert and Julian Schwinger (the ES model) and benchmarks it against atomic Kohn-Sham and two orbital-free models of the Thomas-Fermi-Dirac (TFD)-λvW family. Results show that the ES model generally offers the same accuracy as the well-known TFD-1/5 vW model; however, the ES model corrects the failure in the Pauli potential in the near-nucleus region. We also point to the inability to describe low-Z atoms as the foremost concern in improving the present model.

  3. Adjoint-consistent formulations of slip models for coupled electroosmotic flow systems

    KAUST Repository

    Garg, Vikram V

    2014-09-27

    Background Models based on the Helmholtz 'slip' approximation are often used for the simulation of electroosmotic flows. The objectives of this paper are to construct adjoint-consistent formulations of such models, and to develop adjoint-based numerical tools for adaptive mesh refinement and parameter sensitivity analysis. Methods We show that the direct formulation of the 'slip' model is adjoint inconsistent, and leads to an ill-posed adjoint problem. We propose a modified formulation of the coupled 'slip' model, which is shown to be well-posed, and therefore automatically adjoint-consistent. Results Numerical examples are presented to illustrate the computation and use of the adjoint solution in two-dimensional microfluidics problems. Conclusions An adjoint-consistent formulation for Helmholtz 'slip' models of electroosmotic flows has been proposed. This formulation provides adjoint solutions that can be reliably used for mesh refinement and sensitivity analysis.

  4. Requirements for UML and OWL Integration Tool for User Data Consistency Modeling and Testing

    DEFF Research Database (Denmark)

    Nytun, J. P.; Jensen, Christian Søndergaard; Oleshchuk, V. A.

    2003-01-01

    The amount of data available on the Internet is continuously increasing; consequently, there is a growing need for tools that help to analyse the data. Testing of consistency among data received from different sources is made difficult by the number of different languages and schemas being used....... In this paper we analyze requirements for a tool that supports integration of UML models and ontologies written in languages like the W3C Web Ontology Language (OWL). The tool can be used in the following way: after loading two legacy models into the tool, the tool user connects them by inserting modeling......, an important part of this technique is attaching of OCL expressions to special boolean class attributes that we call consistency attributes. The resulting integration model can be used for automatic consistency testing of two instances of the legacy models by automatically instantiate the whole integration

  5. Spinal Cord Injury Model System Information Network

    Science.gov (United States)

    The University of Alabama at Birmingham Spinal Cord Injury Model System (UAB-SCIMS) maintains this Information Network as a resource to promote knowledge in the ...

  6. Eight challenges for network epidemic models

    Directory of Open Access Journals (Sweden)

    Lorenzo Pellis

    2015-03-01

    Full Text Available Networks offer a fertile framework for studying the spread of infection in human and animal populations. However, owing to the inherent high-dimensionality of networks themselves, modelling transmission through networks is mathematically and computationally challenging. Even the simplest network epidemic models present unanswered questions. Attempts to improve the practical usefulness of network models by including realistic features of contact networks and of host–pathogen biology (e.g. waning immunity) have made some progress, but robust analytical results remain scarce. A more general theory is needed to understand the impact of network structure on the dynamics and control of infection. Here we identify a set of challenges that provide scope for active research in the field of network epidemic models.

  7. Volume of the steady-state space of financial flows in a monetary stock-flow-consistent model

    Science.gov (United States)

    Hazan, Aurélien

    2017-05-01

    We show that a steady-state stock-flow consistent macro-economic model can be represented as a Constraint Satisfaction Problem (CSP). The set of solutions is a polytope, whose volume depends on the constraints applied and reveals the potential fragility of the economic circuit, with no need to study the dynamics. Several exact and approximate methods to compute the volume, inspired by operations research and the analysis of metabolic networks, are compared. We also introduce a random transaction matrix, and study the particular case of linear flows with respect to money stocks.
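The constraint-satisfaction view admits a very simple volume estimator: rejection sampling over a bounding box. This is a crude stand-in for the exact and approximate polytope-volume methods the paper compares, and the two-flow example is invented for illustration:

```python
import random

def polytope_volume(A, b, box, n_samples=200000, seed=42):
    """Estimate vol{x : A x <= b} inside an axis-aligned bounding box by
    rejection sampling (crude stand-in for dedicated volume algorithms)."""
    rng = random.Random(seed)
    box_vol = 1.0
    for lo, hi in box:
        box_vol *= hi - lo
    hits = 0
    for _ in range(n_samples):
        x = [rng.uniform(lo, hi) for lo, hi in box]
        if all(sum(a_i * x_i for a_i, x_i in zip(row, x)) <= b_k
               for row, b_k in zip(A, b)):
            hits += 1
    return box_vol * hits / n_samples

# feasible financial flows f1, f2 >= 0 with f1 + f2 <= 1: a simplex, volume 0.5
A = [[1.0, 1.0], [-1.0, 0.0], [0.0, -1.0]]
b = [1.0, 0.0, 0.0]
vol = polytope_volume(A, b, box=[(0.0, 1.0), (0.0, 1.0)])
print(abs(vol - 0.5) < 0.01)  # True
```

Rejection sampling degrades quickly in high dimension (the acceptance rate collapses), which is precisely why the paper turns to the more sophisticated methods it compares.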

  8. Mood-dependent integration in discourse comprehension: happy and sad moods affect consistency processing via different brain networks.

    Science.gov (United States)

    Egidi, Giovanna; Caramazza, Alfonso

    2014-12-01

    According to recent research on language comprehension, the semantic features of a text are not the only determinants of whether incoming information is understood as consistent. Listeners' pre-existing affective states play a crucial role as well. The current fMRI experiment examines the effects of happy and sad moods during comprehension of consistent and inconsistent story endings, focusing on brain regions previously linked to two integration processes: inconsistency detection, evident in stronger responses to inconsistent endings, and fluent processing (accumulation), evident in stronger responses to consistent endings. The analysis evaluated whether differences in the BOLD response for consistent and inconsistent story endings correlated with self-reported mood scores after a mood induction procedure. Mood strongly affected regions previously associated with inconsistency detection. Happy mood increased sensitivity to inconsistency in regions specific for inconsistency detection (e.g., left IFG, left STS), whereas sad mood increased sensitivity to inconsistency in regions less specific for language processing (e.g., right med FG, right SFG). Mood affected more weakly regions involved in accumulation of information. These results show that mood can influence activity in areas mediating well-defined language processes, and highlight that integration is the result of context-dependent mechanisms. The finding that language comprehension can involve different networks depending on people's mood highlights the brain's ability to reorganize its functions. Copyright © 2014 Elsevier Inc. All rights reserved.

  9. SPLAI: Computational Finite Element Model for Sensor Networks

    Directory of Open Access Journals (Sweden)

    Ruzana Ishak

    2006-01-01

    Full Text Available A wireless sensor network refers to a group of sensors, linked by a wireless medium to perform a distributed sensing task. The primary interest is their capability in monitoring the physical environment through the deployment of numerous tiny, intelligent, wirelessly networked sensor nodes. Our interest is in a sensor network that includes a few specialized nodes called processing elements, which have some limited computational capabilities. In this paper, we propose a model called SPLAI that allows the network to compute a finite element problem, where the processing elements are modeled as the nodes in the linear triangular approximation problem. Our model also considers the case of failures of some of the sensors. A simulation model to visualize this network has been developed using C++ in the Windows environment.

  10. Entropy Characterization of Random Network Models

    Directory of Open Access Journals (Sweden)

    Pedro J. Zufiria

    2017-06-01

    Full Text Available This paper elaborates on the Random Network Model (RNM as a mathematical framework for modelling and analyzing the generation of complex networks. Such a framework allows the analysis of the relationship between several network-characterizing features (link density, clustering coefficient, degree distribution, connectivity, etc.) and entropy-based complexity measures, providing new insight into the generation and characterization of random networks. Some theoretical and computational results illustrate the utility of the proposed framework.
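As an illustration of an entropy-based complexity measure of the kind discussed, one can compute the Shannon entropy of a network's empirical degree distribution (a common choice, not necessarily the paper's exact definition):

```python
import math
from collections import Counter

def degree_entropy(degrees):
    """Shannon entropy (bits) of the empirical degree distribution, a simple
    entropy-based complexity measure for a network."""
    counts = Counter(degrees)
    n = len(degrees)
    h = -sum((c / n) * math.log2(c / n) for c in counts.values())
    return h + 0.0  # normalize -0.0 from the single-degree case

print(degree_entropy([3, 3, 3, 3]))  # 0.0  (regular graph: no degree diversity)
print(degree_entropy([1, 2, 3, 4]))  # 2.0  (uniform over four degree values)
```

A regular graph scores zero, maximal degree diversity scores log2 of the number of distinct degrees; random-network ensembles fall in between, which is what makes the measure useful for characterizing them.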

  11. The model of social crypto-network

    Directory of Open Access Journals (Sweden)

    Марк Миколайович Орел

    2015-06-01

    Full Text Available The article presents a theoretical model of a social network with an enhanced privacy-policy mechanism. It covers the problems arising in the process of implementing this type of network, and presents methods for solving the problems that arise while building a social network with a privacy policy. A theoretical model of social networks with enhanced information-protection methods, based on information and communication blocks, was built.

  12. Introducing Synchronisation in Deterministic Network Models

    DEFF Research Database (Denmark)

    Schiøler, Henrik; Jessen, Jan Jakob; Nielsen, Jens Frederik D.

    2006-01-01

    The paper addresses performance analysis for distributed real time systems through deterministic network modelling. Its main contribution is the introduction and analysis of models for synchronisation between tasks and/or network elements. Typical patterns of synchronisation are presented leading...... to the suggestion of suitable network models. An existing model for flow control is presented and an inherent weakness is revealed and remedied. Examples are given and numerically analysed through deterministic network modelling. Results are presented to highlight the properties of the suggested models...

  13. Estimating long-term volatility parameters for market-consistent models

    African Journals Online (AJOL)

    Contemporary actuarial and accounting practices (APN 110 in the South African context) require the use of market-consistent models for the valuation of embedded investment derivatives. These models have to be calibrated with accurate and up-to-date market data. Arguably, the most important variable in the valuation of ...

  14. New geometric design consistency model based on operating speed profiles for road safety evaluation.

    Science.gov (United States)

    Camacho-Torregrosa, Francisco J; Pérez-Zuriaga, Ana M; Campoy-Ungría, J Manuel; García-García, Alfredo

    2013-12-01

    To assist in the on-going effort to reduce road fatalities as much as possible, this paper presents a new methodology to evaluate road safety in both the design and redesign stages of two-lane rural highways. This methodology is based on the analysis of road geometric design consistency, a value which will be a surrogate measure of the safety level of the two-lane rural road segment. The consistency model presented in this paper is based on the consideration of continuous operating speed profiles. The models used for their construction were obtained by using an innovative GPS-data collection method that is based on continuous operating speed profiles recorded from individual drivers. This new methodology allowed the researchers to observe the actual behavior of drivers and to develop more accurate operating speed models than was previously possible with spot-speed data collection, thereby enabling a more accurate approximation to the real phenomenon and thus a better consistency measurement. Operating speed profiles were built for 33 Spanish two-lane rural road segments, and several consistency measurements based on the global and local operating speed were checked. The final consistency model takes into account not only the global dispersion of the operating speed, but also some indexes that consider both local speed decelerations and speeds over posted speeds as well. For the development of the consistency model, the crash frequency for each study site was considered, which allowed estimating the number of crashes on a road segment by means of the calculation of its geometric design consistency. Consequently, the presented consistency evaluation method is a promising innovative tool that can be used as a surrogate measure to estimate the safety of a road segment. Copyright © 2012 Elsevier Ltd. All rights reserved.
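The ingredients of such a consistency measurement (global dispersion of the operating speed, local decelerations, exceedance of the posted speed) can be sketched as simple profile statistics. The indices and the 10 km/h threshold below are illustrative placeholders, not the calibrated model from the paper:

```python
def consistency_indices(speeds, posted, decel_threshold=10.0):
    """Toy consistency indices from an operating-speed profile sampled along a
    segment (km/h): global dispersion, count of large local decelerations, and
    the fraction of samples over the posted speed. Thresholds illustrative."""
    n = len(speeds)
    mean = sum(speeds) / n
    dispersion = (sum((v - mean) ** 2 for v in speeds) / n) ** 0.5
    decelerations = sum(1 for i in range(1, n)
                        if speeds[i - 1] - speeds[i] > decel_threshold)
    over_posted = sum(1 for v in speeds if v > posted) / n
    return dispersion, decelerations, over_posted

# a perfectly uniform profile is maximally consistent
d, dec, over = consistency_indices([80.0] * 20, posted=90.0)
print(d, dec, over)  # 0.0 0 0.0
# a sharp curve forces a large local deceleration
d2, dec2, _ = consistency_indices([95, 95, 95, 70, 70, 95, 95], posted=90.0)
print(dec2 >= 1, d2 > 0)  # True True
```

In the paper's setting, indices like these computed from continuous GPS-derived speed profiles are regressed against observed crash frequency to obtain the safety surrogate.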

  15. Model parameter updating using Bayesian networks

    International Nuclear Information System (INIS)

    Treml, C.A.; Ross, Timothy J.

    2004-01-01

    This paper outlines a model parameter updating technique for a new method of model validation using a modified model reference adaptive control (MRAC) framework with Bayesian Networks (BNs). The model parameter updating within this method is generic in the sense that the model/simulation to be validated is treated as a black box. It must have updateable parameters to which its outputs are sensitive, and those outputs must have metrics that can be compared to that of the model reference, i.e., experimental data. Furthermore, no assumptions are made about the statistics of the model parameter uncertainty, only upper and lower bounds need to be specified. This method is designed for situations where a model is not intended to predict a complete point-by-point time domain description of the item/system behavior; rather, there are specific points, features, or events of interest that need to be predicted. These specific points are compared to the model reference derived from actual experimental data. The logic for updating the model parameters to match the model reference is formed via a BN. The nodes of this BN consist of updateable model input parameters and the specific output values or features of interest. Each time the model is executed, the input/output pairs are used to adapt the conditional probabilities of the BN. Each iteration further refines the inferred model parameters to produce the desired model output. After parameter updating is complete and model inputs are inferred, reliabilities for the model output are supplied. Finally, this method is applied to a simulation of a resonance control cooling system for a prototype coupled cavity linac. The results are compared to experimental data.
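The conditional-probability updating at the heart of such a Bayesian network can be illustrated with the simplest conjugate scheme: Dirichlet pseudo-counts incremented by observed input/output outcomes. This is a generic sketch, not the paper's MRAC/BN formulation; the node name is invented:

```python
def update_cpt(prior_counts, observations):
    """Conjugate (Dirichlet-multinomial) update of one conditional probability
    table entry: add observed outcome counts to pseudo-counts, renormalize."""
    counts = dict(prior_counts)
    for outcome in observations:
        counts[outcome] = counts.get(outcome, 0) + 1
    total = sum(counts.values())
    return {k: v / total for k, v in counts.items()}

# hypothetical model-parameter node "gain": uniform prior, then 8 observations
prior = {"low": 1, "high": 1}
posterior = update_cpt(prior, ["high"] * 6 + ["low"] * 2)
print(round(posterior["high"], 2), round(posterior["low"], 2))  # 0.7 0.3
print(abs(sum(posterior.values()) - 1.0) < 1e-12)               # True
```

Each simulation run contributes one more observation, so repeated iterations progressively sharpen the inferred parameter distribution, mirroring the refinement loop the abstract describes.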

  16. Bayesian Network Webserver: a comprehensive tool for biological network modeling.

    Science.gov (United States)

    Ziebarth, Jesse D; Bhattacharya, Anindya; Cui, Yan

    2013-11-01

    The Bayesian Network Webserver (BNW) is a platform for comprehensive network modeling of systems genetics and other biological datasets. It allows users to quickly and seamlessly upload a dataset, learn the structure of the network model that best explains the data and use the model to understand relationships between network variables. Many datasets, including those used to create genetic network models, contain both discrete (e.g. genotype) and continuous (e.g. gene expression traits) variables, and BNW allows for modeling hybrid datasets. Users of BNW can incorporate prior knowledge during structure learning through an easy-to-use structural constraint interface. After structure learning, users are immediately presented with an interactive network model, which can be used to make testable hypotheses about network relationships. BNW, including a downloadable structure learning package, is available at http://compbio.uthsc.edu/BNW. (The BNW interface for adding structural constraints uses HTML5 features that are not supported by the current version of Internet Explorer. We suggest using other browsers (e.g. Google Chrome or Mozilla Firefox) when accessing BNW). ycui2@uthsc.edu. Supplementary data are available at Bioinformatics online.

  17. Self-consistent model calculations of the ordered S-matrix and the cylinder correction

    International Nuclear Information System (INIS)

    Millan, J.

    1977-11-01

    The multiperipheral ordered bootstrap of Rosenzweig and Veneziano is studied by using dual triple-Regge couplings exhibiting the required threshold behavior. In the interval −0.5 ≤ t ≤ 0.8 GeV², self-consistent reggeon couplings and propagators are obtained for values of Regge slopes and intercepts consistent with the physical values for the leading natural-parity Regge trajectories. Cylinder effects on planar pole positions and couplings are calculated. By use of an unsymmetrical planar π–ρ reggeon loop model, self-consistent solutions are obtained for the unnatural-parity mesons in the interval −0.5 ≤ t ≤ 0.6 GeV². The effects of other Regge poles being neglected, the model gives a value of the π–η splitting consistent with experiment. 24 figures, 1 table, 25 references

  18. Precommitted Investment Strategy versus Time-Consistent Investment Strategy for a General Risk Model with Diffusion

    Directory of Open Access Journals (Sweden)

    Lidong Zhang

    2014-01-01

    Full Text Available We mainly study a general risk model and investigate the precommitted strategy and the time-consistent strategy under the mean-variance criterion, respectively. A Lagrange method is proposed to derive the precommitted investment strategy. Meanwhile, from the game-theoretic perspective, we find the time-consistent investment strategy by solving the extended Hamilton-Jacobi-Bellman equations. By comparing the precommitted strategy with the time-consistent strategy, we find that the company under the time-consistent strategy has to give up better current utility in order to maintain consistent satisfaction over the whole time horizon. Furthermore, we theoretically and numerically examine the effect of the parameters on these two optimal strategies and the corresponding value functions.
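The source of the time inconsistency can be made explicit. Under the mean-variance criterion (standard textbook form, not quoted from the paper), the objective is

```latex
J(t,x;\pi) \;=\; \mathbb{E}_{t,x}\!\left[X_T^{\pi}\right]
\;-\; \frac{\gamma}{2}\,\operatorname{Var}_{t,x}\!\left[X_T^{\pi}\right],
```

and the variance term does not satisfy the tower property of conditional expectation, since for $t < s < T$

```latex
\operatorname{Var}_t\!\left[X_T\right]
= \mathbb{E}_t\!\left[\operatorname{Var}_s\!\left[X_T\right]\right]
+ \operatorname{Var}_t\!\left[\mathbb{E}_s\!\left[X_T\right]\right].
```

Bellman's optimality principle therefore fails: a strategy fixed once at time 0 (precommitted) is no longer optimal when re-evaluated at later times, which is why the time-consistent strategy must instead be characterized as a subgame-perfect equilibrium via the extended Hamilton-Jacobi-Bellman system.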

  19. Conceptual and methodological biases in network models.

    Science.gov (United States)

    Lamm, Ehud

    2009-10-01

    Many natural and biological phenomena can be depicted as networks. Theoretical and empirical analyses of networks have become prevalent. I discuss theoretical biases involved in the delineation of biological networks. The network perspective is shown to dissolve the distinction between regulatory architecture and regulatory state, consistent with the theoretical impossibility of distinguishing a priori between "program" and "data." The evolutionary significance of the dynamics of trans-generational and interorganism regulatory networks is explored and implications are presented for understanding the evolution of the biological categories development-heredity, plasticity-evolvability, and epigenetic-genetic.

  20. Statistical mechanics of stochastic neural networks: Relationship between the self-consistent signal-to-noise analysis, Thouless-Anderson-Palmer equation, and replica symmetric calculation approaches

    International Nuclear Information System (INIS)

    Shiino, Masatoshi; Yamana, Michiko

    2004-01-01

    We study the statistical mechanical aspects of stochastic analog neural network models for associative memory with correlation type learning. We take three approaches to derive the set of the order parameter equations for investigating statistical properties of retrieval states: the self-consistent signal-to-noise analysis (SCSNA), the Thouless-Anderson-Palmer (TAP) equation, and the replica symmetric calculation. On the basis of the cavity method the SCSNA can be generalized to deal with stochastic networks. We establish the close connection between the TAP equation and the SCSNA to elucidate the relationship between the Onsager reaction term of the TAP equation and the output proportional term of the SCSNA that appear in the expressions for the local fields

  1. A non-parametric consistency test of the ΛCDM model with Planck CMB data

    Energy Technology Data Exchange (ETDEWEB)

    Aghamousa, Amir; Shafieloo, Arman [Korea Astronomy and Space Science Institute, Daejeon 305-348 (Korea, Republic of); Hamann, Jan, E-mail: amir@aghamousa.com, E-mail: jan.hamann@unsw.edu.au, E-mail: shafieloo@kasi.re.kr [School of Physics, The University of New South Wales, Sydney NSW 2052 (Australia)

    2017-09-01

    Non-parametric reconstruction methods, such as Gaussian process (GP) regression, provide a model-independent way of estimating an underlying function and its uncertainty from noisy data. We demonstrate how GP-reconstruction can be used as a consistency test between a given data set and a specific model by looking for structures in the residuals of the data with respect to the model's best-fit. Applying this formalism to the Planck temperature and polarisation power spectrum measurements, we test their global consistency with the predictions of the base ΛCDM model. Our results do not show any serious inconsistencies, lending further support to the interpretation of the base ΛCDM model as cosmology's gold standard.
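The residual-based consistency test described above can be sketched with a plain-NumPy Gaussian process regression (an illustrative reimplementation, not the authors' code; the kernel hyperparameters and synthetic residuals are assumptions):

```python
import numpy as np

def gp_posterior(x_train, y_train, x_test, length_scale=1.0, signal_var=1.0, noise_var=0.01):
    """Gaussian-process regression posterior (RBF kernel) -- a minimal sketch."""
    def rbf(a, b):
        return signal_var * np.exp(-0.5 * (a[:, None] - b[None, :]) ** 2 / length_scale**2)

    K = rbf(x_train, x_train) + noise_var * np.eye(len(x_train))
    K_s = rbf(x_train, x_test)
    alpha = np.linalg.solve(K, y_train)
    mean = K_s.T @ alpha                       # posterior mean of the residual function
    cov = rbf(x_test, x_test) - K_s.T @ np.linalg.solve(K, K_s)
    std = np.sqrt(np.clip(np.diag(cov), 0.0, None))
    return mean, std

# Residuals of mock "data" with respect to a model best-fit: pure noise here,
# so a consistent model should yield a reconstruction compatible with zero.
# Structure in the residuals would show up as |mean| exceeding a few std.
rng = np.random.default_rng(0)
x = np.linspace(0.0, 10.0, 40)
residuals = 0.1 * rng.standard_normal(40)
mean, std = gp_posterior(x, residuals, x)
print(mean.shape, np.all(std >= 0.0))
```

A reconstructed residual curve that stays within its posterior uncertainty band signals consistency; systematic excursions signal structure the model fails to capture.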

  2. Development of a Model for Dynamic Recrystallization Consistent with the Second Derivative Criterion

    Directory of Open Access Journals (Sweden)

    Muhammad Imran

    2017-11-01

Full Text Available Dynamic recrystallization (DRX) processes are widely used in industrial hot working operations, not only to keep the forming forces low but also to control the microstructure and final properties of the workpiece. According to the second derivative criterion (SDC) by Poliak and Jonas, the onset of DRX can be detected from an inflection point in the strain-hardening rate as a function of flow stress. Various models are available that can predict the evolution of flow stress from incipient plastic flow up to steady-state deformation in the presence of DRX. Some of these models have been implemented into finite element codes and are widely used for the design of metal forming processes, but their consistency with the SDC has not been investigated. This work identifies three sources of inconsistency that models for DRX may exhibit. For consistent modeling of the DRX kinetics, a new strain-hardening model for the hardening stages III to IV is proposed and combined with consistent recrystallization kinetics. The model is devised in the Kocks-Mecking space based on a characteristic transition in the strain-hardening rate. A linear variation of the transition and inflection points is observed for alloy 800H at all tested temperatures and strain rates. The comparison of experimental and model results shows that the model is able to follow the course of the strain-hardening rate very precisely, such that highly accurate flow stress predictions are obtained.
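The second derivative criterion lends itself to a short numerical sketch: locate the inflection of the strain-hardening rate θ(σ) where d²θ/dσ² changes sign. The data below are synthetic with hypothetical values, not the paper's alloy 800H measurements:

```python
import numpy as np

# Synthetic strain-hardening data theta(sigma) with a built-in inflection at
# sigma = 150 MPa (hypothetical numbers, chosen only to exercise the criterion).
sigma = np.linspace(100.0, 200.0, 201)
theta = 2000.0 - 10.0 * sigma + 1e-3 * (sigma - 150.0) ** 3

# Second derivative criterion: onset of DRX where d2(theta)/d(sigma)2 = 0,
# i.e. where the curvature of theta(sigma) changes sign.
d2 = np.gradient(np.gradient(theta, sigma), sigma)
onset = np.where(np.diff(np.sign(d2)) != 0)[0]
sigma_c = sigma[onset[0]]
print(sigma_c)
```

On real flow-stress data the derivatives would need smoothing before differencing, but the sign-change search is the same.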

  3. How to model wireless mesh networks topology

    International Nuclear Information System (INIS)

    Sanni, M L; Hashim, A A; Anwar, F; Ali, S; Ahmed, G S M

    2013-01-01

The specification of a network connectivity model or topology is the starting point of design and analysis in computer network research. A Wireless Mesh Network is an autonomic network that is dynamically self-organised and self-configured: the mesh nodes establish automatic connectivity with adjacent nodes in the relay network of wireless backbone routers. Research in Wireless Mesh Networks ranges from node deployment to internetworking issues with sensor, Internet and cellular networks. This research requires modelling of relationships and interactions among nodes, including technical characteristics of the links, while satisfying the architectural requirements of the physical network. However, the existing topology generators model geographic topologies which constitute different architectures and thus may not be suitable in Wireless Mesh Network scenarios. The existing methods of topology generation are explored and analysed, and parameters for their characterisation are identified. Furthermore, an algorithm for the design of Wireless Mesh Network topology based on a square grid model is proposed in this paper. The performance of the generated topology is also evaluated. This research is particularly important for generating close-to-real topologies, ensuring the relevance of designs to the intended network and the validity of results obtained in Wireless Mesh Network research.
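A square-grid topology of the kind proposed can be generated in a few lines. This is a minimal sketch of the idea, not the paper's algorithm:

```python
def square_grid_topology(n, spacing=1.0):
    """Generate an n x n square-grid mesh topology: backbone routers at grid
    points, links between horizontally and vertically adjacent nodes."""
    nodes = {(i, j): (i * spacing, j * spacing) for i in range(n) for j in range(n)}
    links = []
    for i in range(n):
        for j in range(n):
            if i + 1 < n:
                links.append(((i, j), (i + 1, j)))
            if j + 1 < n:
                links.append(((i, j), (i, j + 1)))
    return nodes, links

nodes, links = square_grid_topology(4)
print(len(nodes), len(links))  # 16 24: an n x n grid has 2*n*(n-1) links
```

Link attributes (capacity, loss) and longer-range connectivity would be layered on top of this skeleton to match the technical characteristics of the target network.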

  4. Model checking mobile ad hoc networks

    NARCIS (Netherlands)

    Ghassemi, Fatemeh; Fokkink, Wan

    2016-01-01

    Modeling arbitrary connectivity changes within mobile ad hoc networks (MANETs) makes application of automated formal verification challenging. We use constrained labeled transition systems as a semantic model to represent mobility. To model check MANET protocols with respect to the underlying

  5. Self-consistent atmosphere modeling with cloud formation for low-mass stars and exoplanets

    Science.gov (United States)

    Juncher, Diana; Jørgensen, Uffe G.; Helling, Christiane

    2017-12-01

Context. Low-mass stars and extrasolar planets have ultra-cool atmospheres where a rich chemistry occurs and clouds form. The increasing amount of spectroscopic observations for extrasolar planets requires self-consistent model atmosphere simulations to consistently include the formation processes that determine cloud formation and their feedback onto the atmosphere. Aims: Our aim is to complement the MARCS model atmosphere suite with simulations applicable to low-mass stars and exoplanets in preparation for E-ELT, JWST, PLATO and other upcoming facilities. Methods: The MARCS code calculates stellar atmosphere models, providing self-consistent solutions of the radiative transfer and the atmospheric structure and chemistry. We combine MARCS with a kinetic model that describes cloud formation in ultra-cool atmospheres (seed formation, growth/evaporation, gravitational settling, convective mixing, element depletion). Results: We present a small grid of self-consistently calculated atmosphere models for Teff = 2000-3000 K with solar initial abundances and log (g) = 4.5. Cloud formation in stellar and sub-stellar atmospheres appears for Teff day-night energy transport and no temperature inversion.

  6. A consistency assessment of coupled cohesive zone models for mixed-mode debonding problems

    Directory of Open Access Journals (Sweden)

    R. Dimitri

    2014-07-01

Full Text Available Due to their simplicity, cohesive zone models (CZMs) are very attractive for describing mixed-mode failure and debonding processes of materials and interfaces. Although a large number of coupled CZMs have been proposed, and despite the extensive related literature, little attention has been devoted to ensuring the consistency of these models for mixed-mode conditions, primarily in a thermodynamical sense. A lack of consistency may affect the local or global response of a mechanical system. This contribution deals with the consistency check for some widely used exponential and bilinear mixed-mode CZMs. The coupling effect on stresses and energy dissipation is first investigated, and the path-dependence of the mixed-mode debonding work of separation is analytically evaluated. Analytical predictions are also compared with results from numerical implementations, where the interface is described with zero-thickness contact elements. A node-to-segment strategy is adopted here, which incorporates decohesion and contact within a unified framework. A new thermodynamically consistent mixed-mode CZ model, based on a reformulation of the Xu-Needleman model as modified by van den Bosch et al., is finally proposed and derived by applying the Coleman and Noll procedure in accordance with the second law of thermodynamics. The model holds monolithically for loading and unloading processes, as well as for decohesion and contact, and its performance is demonstrated through suitable examples.

  7. Agent-based modeling and network dynamics

    CERN Document Server

    Namatame, Akira

    2016-01-01

The book integrates agent-based modeling and network science. It is divided into three parts, namely, foundations, primary dynamics on and of social networks, and applications. The book begins with the network origin of agent-based models, known as cellular automata, and introduces a number of classic models, such as Schelling’s segregation model and Axelrod’s spatial game. The essence of the foundation part is the network-based agent-based models in which agents follow network-based decision rules. Under the influence of the substantial progress in network science in the late 1990s, these models have been extended from using lattices to using small-world networks, scale-free networks, etc. The book also shows that modern network science, mainly driven by game-theorists and sociophysicists, has inspired agent-based social scientists to develop alternative formation algorithms, known as agent-based social networks. The book reviews a number of pioneering and representative models in this family. Upon the gi...

  8. Towards an Information Model of Consistency Maintenance in Distributed Interactive Applications

    Directory of Open Access Journals (Sweden)

    Xin Zhang

    2008-01-01

Full Text Available A novel framework to model and explore predictive contract mechanisms in distributed interactive applications (DIAs) using information theory is proposed. In our model, the entity state update scheme is modelled as an information generation, encoding, and reconstruction process. Such a perspective facilitates a quantitative measurement of state fidelity loss as a result of the distribution protocol. Results from an experimental study on a first-person shooter game are used to illustrate the utility of this measurement process. We contend that our proposed model is a starting point to reframe and analyse consistency maintenance in DIAs as a problem in distributed interactive media compression.

  9. Precommitted Investment Strategy versus Time-Consistent Investment Strategy for a Dual Risk Model

    Directory of Open Access Journals (Sweden)

    Lidong Zhang

    2014-01-01

Full Text Available We are concerned with the optimal investment strategy for a dual risk model. We assume that the company can invest in a risk-free asset and a risky asset; short-selling and borrowing money are allowed. Because the iterated-expectation property is lacking, the Bellman Optimization Principle does not hold, so we investigate the precommitted strategy and the time-consistent strategy, respectively. We take three steps to derive the precommitted investment strategy. Furthermore, the time-consistent investment strategy is also obtained by solving the extended Hamilton-Jacobi-Bellman equations. We compare the precommitted strategy with the time-consistent strategy and find that each has its own advantage: the former maximizes the value function at the initial time t=0, while the latter is time-consistent over the whole time horizon. Finally, numerical analysis is presented for our results.

  10. "A Simplified 'Benchmark” Stock-flow Consistent (SFC) Post-Keynesian Growth Model"

    OpenAIRE

    Claudio H. Dos Santos; Gennaro Zezza

    2007-01-01

    Despite being arguably one of the most active areas of research in heterodox macroeconomics, the study of the dynamic properties of stock-flow consistent (SFC) growth models of financially sophisticated economies is still in its early stages. This paper attempts to offer a contribution to this line of research by presenting a simplified Post-Keynesian SFC growth model with well-defined dynamic properties, and using it to shed light on the merits and limitations of the current heterodox SFC li...

  11. Self consistent MHD modeling of the solar wind from coronal holes with distinct geometries

    Science.gov (United States)

    Stewart, G. A.; Bravo, S.

    1995-01-01

    Utilizing an iterative scheme, a self-consistent axisymmetric MHD model for the solar wind has been developed. We use this model to evaluate the properties of the solar wind issuing from the open polar coronal hole regions of the Sun, during solar minimum. We explore the variation of solar wind parameters across the extent of the hole and we investigate how these variations are affected by the geometry of the hole and the strength of the field at the coronal base.

  12. Nonparametric Bayesian Modeling of Complex Networks

    DEFF Research Database (Denmark)

    Schmidt, Mikkel Nørgaard; Mørup, Morten

    2013-01-01

Modeling structure in complex networks using Bayesian nonparametrics makes it possible to specify flexible model structures and infer the adequate model complexity from the observed data. This article provides a gentle introduction to nonparametric Bayesian modeling of complex networks: Using an infinite mixture model as running example, we go through the steps of deriving the model as an infinite limit of a finite parametric model, inferring the model parameters by Markov chain Monte Carlo, and checking the model's fit and predictive performance. We explain how advanced nonparametric models...
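Infinite mixture models of this kind are typically built on the Chinese restaurant process (CRP) prior over partitions; a minimal sampler (illustrative, not the article's code):

```python
import random

def chinese_restaurant_process(n, alpha, seed=0):
    """Sample a partition of n items from a CRP with concentration alpha --
    the prior underlying infinite (Dirichlet process) mixture models."""
    rng = random.Random(seed)
    tables = []  # tables[k] = number of items seated at table (cluster) k
    for i in range(n):
        # New table with prob alpha/(i+alpha); existing table k with prob tables[k]/(i+alpha).
        r = rng.uniform(0, i + alpha)
        if r < alpha:
            tables.append(1)
        else:
            r -= alpha
            k = 0
            while r >= tables[k]:
                r -= tables[k]
                k += 1
            tables[k] += 1
    return tables

tables = chinese_restaurant_process(100, alpha=2.0)
print(sum(tables), len(tables) >= 1)  # 100 True
```

Because the number of occupied tables grows with the data, the model complexity (here, the number of network groups) is inferred rather than fixed in advance.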

  13. A pedestal temperature model with self-consistent calculation of safety factor and magnetic shear

    International Nuclear Information System (INIS)

    Onjun, T; Siriburanon, T; Onjun, O

    2008-01-01

A pedestal model based on theory-motivated models for the pedestal width and the pedestal pressure gradient is developed for the temperature at the top of the H-mode pedestal. The pedestal width model based on magnetic shear and flow shear stabilization is used in this study, where the pedestal pressure gradient is assumed to be limited by the first stability limit of the infinite-n ballooning mode instability. This pedestal model is implemented in the 1.5D BALDUR integrated predictive modeling code, where the safety factor and magnetic shear are solved self-consistently in both core and pedestal regions. With this self-consistent approach to calculating the safety factor and magnetic shear, the effect of the bootstrap current can be correctly included in the pedestal model. The pedestal model is used to provide the boundary conditions in the simulations, and the Multi-mode core transport model is used to describe the core transport. This new integrated modeling procedure of the BALDUR code is used to predict the temperature and density profiles of 26 H-mode discharges. Simulations are carried out for 13 discharges in the Joint European Torus and 13 discharges in the DIII-D tokamak. The average root-mean-square deviation between experimental data and the predicted profiles of the temperature and the density, normalized by their central values, is found to be about 14%.

  14. Self-consistent approximation for muffin-tin models of random substitutional alloys with environmental disorder

    International Nuclear Information System (INIS)

    Kaplan, T.; Gray, L.J.

    1984-01-01

    The self-consistent approximation of Kaplan, Leath, Gray, and Diehl is applied to models for substitutional random alloys with muffin-tin potentials. The particular advantage of this approximation is that, in addition to including cluster scattering, the muffin-tin potentials in the alloy can depend on the occupation of the surrounding sites (i.e., environmental disorder is included)

  15. A new self-consistent model for thermodynamics of binary solutions

    Czech Academy of Sciences Publication Activity Database

    Svoboda, Jiří; Shan, Y. V.; Fischer, F. D.

    2015-01-01

    Roč. 108, NOV (2015), s. 27-30 ISSN 1359-6462 R&D Projects: GA ČR(CZ) GA14-24252S Institutional support: RVO:68081723 Keywords : Thermodynamics * Analytical methods * CALPHAD * Phase diagram * Self-consistent model Subject RIV: BJ - Thermodynamics Impact factor: 3.305, year: 2015

  16. Comment on self-consistent model of black hole formation and evaporation

    International Nuclear Information System (INIS)

    Ho, Pei-Ming

    2015-01-01

    In an earlier work, Kawai et al. proposed a model of black-hole formation and evaporation, in which the geometry of a collapsing shell of null dust is studied, including consistently the back reaction of its Hawking radiation. In this note, we illuminate the implications of their work, focusing on the resolution of the information loss paradox and the problem of the firewall.

  17. Topologically Consistent Models for Efficient Big Geo-Spatio Data Distribution

    Science.gov (United States)

    Jahn, M. W.; Bradley, P. E.; Doori, M. Al; Breunig, M.

    2017-10-01

    Geo-spatio-temporal topology models are likely to become a key concept to check the consistency of 3D (spatial space) and 4D (spatial + temporal space) models for emerging GIS applications such as subsurface reservoir modelling or the simulation of energy and water supply of mega or smart cities. Furthermore, the data management for complex models consisting of big geo-spatial data is a challenge for GIS and geo-database research. General challenges, concepts, and techniques of big geo-spatial data management are presented. In this paper we introduce a sound mathematical approach for a topologically consistent geo-spatio-temporal model based on the concept of the incidence graph. We redesign DB4GeO, our service-based geo-spatio-temporal database architecture, on the way to the parallel management of massive geo-spatial data. Approaches for a new geo-spatio-temporal and object model of DB4GeO meeting the requirements of big geo-spatial data are discussed in detail. Finally, a conclusion and outlook on our future research are given on the way to support the processing of geo-analytics and -simulations in a parallel and distributed system environment.

  18. ICFD modeling of final settlers - developing consistent and effective simulation model structures

    DEFF Research Database (Denmark)

    Plósz, Benedek G.; Guyonvarch, Estelle; Ramin, Elham

CFD concept. The case of secondary settling tanks (SSTs) is used to demonstrate the methodological steps using the validated CFD model with the hindered-transient-compression settling velocity model by (10). Factor screening and Latin hypercube sampling (LHS) are used to degenerate a 2-D axi-symmetrical CFD...... of (i) assessing different density current sub-models; (ii) implementation of a combined flocculation, hindered, transient and compression settling velocity function; and (iii) assessment of modelling the onset of transient and compression settling. Results suggest that the iCFD model developed...... the feed-layer. These scenarios were inspired by literature (1; 2; 9). As for the D0--iCFD model, values of SSRE obtained are below 1 with an average SSRE=0.206. The simulation model can thus predict the solids distribution inside the tank with satisfactory accuracy. Averaged relative errors of 8.1 %, 3...

  19. Network structure exploration via Bayesian nonparametric models

    International Nuclear Information System (INIS)

    Chen, Y; Wang, X L; Xiang, X; Tang, B Z; Bu, J Z

    2015-01-01

    Complex networks provide a powerful mathematical representation of complex systems in nature and society. To understand complex networks, it is crucial to explore their internal structures, also called structural regularities. The task of network structure exploration is to determine how many groups there are in a complex network and how to group the nodes of the network. Most existing structure exploration methods need to specify either a group number or a certain type of structure when they are applied to a network. In the real world, however, the group number and also the certain type of structure that a network has are usually unknown in advance. To explore structural regularities in complex networks automatically, without any prior knowledge of the group number or the certain type of structure, we extend a probabilistic mixture model that can handle networks with any type of structure but needs to specify a group number using Bayesian nonparametric theory. We also propose a novel Bayesian nonparametric model, called the Bayesian nonparametric mixture (BNPM) model. Experiments conducted on a large number of networks with different structures show that the BNPM model is able to explore structural regularities in networks automatically with a stable, state-of-the-art performance. (paper)

  20. Modelling the structure of complex networks

    DEFF Research Database (Denmark)

    Herlau, Tue

    networks has been independently studied as mathematical objects in their own right. As such, there has been both an increased demand for statistical methods for complex networks as well as a quickly growing mathematical literature on the subject. In this dissertation we explore aspects of modelling complex....... The next chapters will treat some of the various symmetries, representer theorems and probabilistic structures often deployed in the modelling complex networks, the construction of sampling methods and various network models. The introductory chapters will serve to provide context for the included written...

  1. Modelling, Synthesis, and Configuration of Networks-on-Chips

    DEFF Research Database (Denmark)

    Stuart, Matthias Bo

    This thesis presents three contributions in two different areas of network-on-chip and system-on-chip research: Application modelling and identifying and solving different optimization problems related to two specific network-on-chip architectures. The contribution related to application modelling...... is an analytical method for deriving the worst-case traffic pattern caused by an application and the cache-coherence protocol in a cache-coherent shared-memory system. The contributions related to network-on-chip optimization problems consist of two parts: The development and evaluation of six heuristics...... for solving the network synthesis problem in the MANGO network-on-chip, and the identification and formalization of the ReNoC configuration problem together with three heuristics for solving it....

  2. Self-consistency in the phonon space of the particle-phonon coupling model

    Science.gov (United States)

    Tselyaev, V.; Lyutorovich, N.; Speth, J.; Reinhard, P.-G.

    2018-04-01

    In the paper the nonlinear generalization of the time blocking approximation (TBA) is presented. The TBA is one of the versions of the extended random-phase approximation (RPA) developed within the Green-function method and the particle-phonon coupling model. In the generalized version of the TBA the self-consistency principle is extended onto the phonon space of the model. The numerical examples show that this nonlinear version of the TBA leads to the convergence of results with respect to enlarging the phonon space of the model.

  3. Building functional networks of spiking model neurons.

    Science.gov (United States)

    Abbott, L F; DePasquale, Brian; Memmesheimer, Raoul-Martin

    2016-03-01

    Most of the networks used by computer scientists and many of those studied by modelers in neuroscience represent unit activities as continuous variables. Neurons, however, communicate primarily through discontinuous spiking. We review methods for transferring our ability to construct interesting networks that perform relevant tasks from the artificial continuous domain to more realistic spiking network models. These methods raise a number of issues that warrant further theoretical and experimental study.

  4. Consistency maintenance for constraint in role-based access control model

    Institute of Scientific and Technical Information of China (English)

    韩伟力; 陈刚; 尹建伟; 董金祥

    2002-01-01

    Constraint is an important aspect of role-based access control and is sometimes argued to be the principal motivation for role-based access control (RBAC). But so far few authors have discussed consistency maintenance for constraint in RBAC model. Based on researches of constraints among roles and types of inconsistency among constraints, this paper introduces corresponding formal rules, rule-based reasoning and corresponding methods to detect, avoid and resolve these inconsistencies. Finally, the paper introduces briefly the application of consistency maintenance in ZD-PDM, an enterprise-oriented product data management (PDM) system.

  6. A large deformation viscoelastic model for double-network hydrogels

    Science.gov (United States)

    Mao, Yunwei; Lin, Shaoting; Zhao, Xuanhe; Anand, Lallit

    2017-03-01

    We present a large deformation viscoelasticity model for recently synthesized double network hydrogels which consist of a covalently-crosslinked polyacrylamide network with long chains, and an ionically-crosslinked alginate network with short chains. Such double-network gels are highly stretchable and at the same time tough, because when stretched the crosslinks in the ionically-crosslinked alginate network rupture which results in distributed internal microdamage which dissipates a substantial amount of energy, while the configurational entropy of the covalently-crosslinked polyacrylamide network allows the gel to return to its original configuration after deformation. In addition to the large hysteresis during loading and unloading, these double network hydrogels also exhibit a substantial rate-sensitive response during loading, but exhibit almost no rate-sensitivity during unloading. These features of large hysteresis and asymmetric rate-sensitivity are quite different from the response of conventional hydrogels. We limit our attention to modeling the complex viscoelastic response of such hydrogels under isothermal conditions. Our model is restricted in the sense that we have limited our attention to conditions under which one might neglect any diffusion of the water in the hydrogel - as might occur when the gel has a uniform initial value of the concentration of water, and the mobility of the water molecules in the gel is low relative to the time scale of the mechanical deformation. We also do not attempt to model the final fracture of such double-network hydrogels.

  7. Port Hamiltonian modeling of Power Networks

    NARCIS (Netherlands)

    van Schaik, F.; van der Schaft, Abraham; Scherpen, Jacquelien M.A.; Zonetti, Daniele; Ortega, R

    2012-01-01

    In this talk a full nonlinear model for the power network in port–Hamiltonian framework is derived to study its stability properties. For this we use the modularity approach i.e., we first derive the models of individual components in power network as port-Hamiltonian systems and then we combine all

  8. Modelling traffic congestion using queuing networks

    Indian Academy of Sciences (India)

    Flow-density curves; uninterrupted traffic; Jackson networks. ... ness - also suffer from a big handicap vis-a-vis the Indian scenario: most of these models do .... more well-known queuing network models and onsite data, a more exact Road Cell ...
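Jackson networks of the kind referenced above treat each node as an M/M/1 queue; its standard steady-state formulas can be sketched as follows (textbook results, not this paper's traffic model):

```python
def mm1_metrics(lam, mu):
    """Steady-state metrics of an M/M/1 queue, the building block of a
    Jackson network (requires utilisation rho = lam/mu < 1)."""
    rho = lam / mu
    if rho >= 1:
        raise ValueError("queue is unstable: arrival rate must be below service rate")
    L = rho / (1 - rho)   # mean number in system
    W = 1 / (mu - lam)    # mean time in system (Little's law: L = lam * W)
    return rho, L, W

rho, L, W = mm1_metrics(lam=2.0, mu=4.0)
print(rho, L, W)  # 0.5 1.0 0.5
```

In a Jackson network the effective arrival rate at each node is obtained from the traffic equations, after which each node is analysed independently with exactly these formulas.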

  9. Settings in Social Networks : a Measurement Model

    NARCIS (Netherlands)

    Schweinberger, Michael; Snijders, Tom A.B.

    2003-01-01

    A class of statistical models is proposed that aims to recover latent settings structures in social networks. Settings may be regarded as clusters of vertices. The measurement model is based on two assumptions. (1) The observed network is generated by hierarchically nested latent transitive

  10. Network interconnections: an architectural reference model

    NARCIS (Netherlands)

    Butscher, B.; Lenzini, L.; Morling, R.; Vissers, C.A.; Popescu-Zeletin, R.; van Sinderen, Marten J.; Heger, D.; Krueger, G.; Spaniol, O.; Zorn, W.

    1985-01-01

    One of the major problems in understanding the different approaches in interconnecting networks of different technologies is the lack of reference to a general model. The paper develops the rationales for a reference model of network interconnection and focuses on the architectural implications for

  11. Performance modeling of network data services

    Energy Technology Data Exchange (ETDEWEB)

    Haynes, R.A.; Pierson, L.G.

    1997-01-01

    Networks at major computational organizations are becoming increasingly complex. The introduction of large massively parallel computers and supercomputers with gigabyte memories are requiring greater and greater bandwidth for network data transfers to widely dispersed clients. For networks to provide adequate data transfer services to high performance computers and remote users connected to them, the networking components must be optimized from a combination of internal and external performance criteria. This paper describes research done at Sandia National Laboratories to model network data services and to visualize the flow of data from source to sink when using the data services.

  12. Continuum Modeling of Biological Network Formation

    KAUST Repository

    Albi, Giacomo; Burger, Martin; Haskovec, Jan; Markowich, Peter A.; Schlottbom, Matthias

    2017-01-01

    We present an overview of recent analytical and numerical results for the elliptic–parabolic system of partial differential equations proposed by Hu and Cai, which models the formation of biological transportation networks. The model describes

  13. Network models in economics and finance

    CERN Document Server

    Pardalos, Panos; Rassias, Themistocles

    2014-01-01

    Using network models to investigate the interconnectivity in modern economic systems allows researchers to better understand and explain some economic phenomena. This volume presents contributions by known experts and active researchers in economic and financial network modeling. Readers are provided with an understanding of the latest advances in network analysis as applied to economics, finance, corporate governance, and investments. Moreover, recent advances in market network analysis  that focus on influential techniques for market graph analysis are also examined. Young researchers will find this volume particularly useful in facilitating their introduction to this new and fascinating field. Professionals in economics, financial management, various technologies, and network analysis, will find the network models presented in this book beneficial in analyzing the interconnectivity in modern economic systems.

  14. Hybrid modeling and empirical analysis of automobile supply chain network

    Science.gov (United States)

    Sun, Jun-yan; Tang, Jian-ming; Fu, Wei-ping; Wu, Bing-ying

    2017-05-01

Based on a connection mechanism in which nodes automatically select their upstream and downstream agents, a simulation model of the dynamic evolutionary process of a consumer-driven automobile supply chain is established by integrating agent-based modeling (ABM) and discrete modeling on a GIS-based map. First, the model's soundness is demonstrated by analyzing the consistency of sales, and of changes in various agent parameters, between the simulation model and a real automobile supply chain. Second, through complex network theory, hierarchical structures of the model and relationships of networks at different levels are analyzed to calculate characteristic parameters such as mean distance, mean clustering coefficients, and degree distributions, verifying that the model is a typical scale-free, small-world network. Finally, the motion law of this model is analyzed from the perspective of complex self-adaptive systems, and the chaotic state of the simulation system is verified, which suggests that this system has typical nonlinear characteristics. This model not only macroscopically illustrates the dynamic evolution of complex networks of the automobile supply chain but also microcosmically reflects the business process of each agent. Moreover, the construction and simulation of the model by means of combining CAS theory and complex networks supply a novel method for supply chain analysis, as well as a theoretical basis and practical experience for the supply chain analysis of auto companies.
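Characteristic parameters such as the clustering coefficient can be computed directly from an adjacency structure; a minimal pure-Python sketch on a toy graph (not the supply-chain data):

```python
def local_clustering(adj, v):
    """Local clustering coefficient of node v in an undirected graph given as
    an adjacency dict: fraction of neighbour pairs that are themselves linked."""
    nbrs = adj[v]
    k = len(nbrs)
    if k < 2:
        return 0.0
    links = sum(1 for a in nbrs for b in nbrs if a < b and b in adj[a])
    return 2.0 * links / (k * (k - 1))

# Toy network: triangle 1-2-3 plus a pendant node 4 attached to 1.
adj = {1: {2, 3, 4}, 2: {1, 3}, 3: {1, 2}, 4: {1}}
mean_c = sum(local_clustering(adj, v) for v in adj) / len(adj)
print(local_clustering(adj, 1), mean_c)
```

Mean distance and the degree distribution follow from the same adjacency dict with a breadth-first search and a degree histogram, respectively.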

  15. A consistent modelling methodology for secondary settling tanks in wastewater treatment.

    Science.gov (United States)

    Bürger, Raimund; Diehl, Stefan; Nopens, Ingmar

    2011-03-01

    The aim of this contribution is partly to build consensus on a consistent modelling methodology (CMM) of complex real processes in wastewater treatment by combining classical concepts with results from applied mathematics, and partly to apply it to the clarification-thickening process in the secondary settling tank. In the CMM, the real process should be approximated by a mathematical model (process model; ordinary or partial differential equation (ODE or PDE)), which in turn is approximated by a simulation model (numerical method) implemented on a computer. These steps have often not been carried out correctly. The secondary settling tank was chosen as a case since it is one of the most complex processes in a wastewater treatment plant, and simulation models developed decades ago have no guarantee of satisfying fundamental mathematical and physical properties. Nevertheless, such methods are still used in commercial tools to date. This becomes particularly relevant as state-of-the-art practice moves towards plant-wide modelling, where all submodels interact and errors propagate through the model, severely hampering any calibration effort and, hence, the predictive purpose of the model. The CMM is described by applying it first to a simple conversion process in the biological reactor, yielding an ODE solver, and then to the solid-liquid separation in the secondary settling tank, yielding a PDE solver. The time has come to incorporate established mathematical techniques into environmental engineering, and wastewater treatment modelling in particular, and to use proven, reliable and consistent simulation models. Copyright © 2011 Elsevier Ltd. All rights reserved.

  16. Genetic Algorithm-Based Model Order Reduction of Aeroservoelastic Systems with Consistent States

    Science.gov (United States)

    Zhu, Jin; Wang, Yi; Pant, Kapil; Suh, Peter M.; Brenner, Martin J.

    2017-01-01

    This paper presents a model order reduction framework to construct linear parameter-varying reduced-order models of flexible aircraft for aeroservoelasticity analysis and control synthesis in broad two-dimensional flight parameter space. Genetic algorithms are used to automatically determine physical states for reduction and to generate reduced-order models at grid points within parameter space while minimizing the trial-and-error process. In addition, balanced truncation for unstable systems is used in conjunction with the congruence transformation technique to achieve locally optimal realization and weak fulfillment of state consistency across the entire parameter space. Therefore, aeroservoelasticity reduced-order models at any flight condition can be obtained simply through model interpolation. The methodology is applied to the pitch-plant model of the X-56A Multi-Use Technology Testbed currently being tested at NASA Armstrong Flight Research Center for flutter suppression and gust load alleviation. The present studies indicate that the reduced-order model with more than 12× reduction in the number of states relative to the original model is able to accurately predict system response among all input-output channels. The genetic-algorithm-guided approach exceeds manual and empirical state selection in terms of efficiency and accuracy. The interpolated aeroservoelasticity reduced-order models exhibit smooth pole transition and continuously varying gains along a set of prescribed flight conditions, which verifies the consistent state representation obtained by congruence transformation. The present model order reduction framework can be used by control engineers for robust aeroservoelasticity controller synthesis and novel vehicle design.
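
The balanced-truncation step at the heart of such a framework can be sketched for the stable case. The square-root algorithm below is a generic textbook version (NumPy/SciPy), not the paper's unstable-system extension or its congruence transformation; the random system is purely illustrative.

```python
import numpy as np
from scipy.linalg import solve_continuous_lyapunov, svd, cholesky

def balanced_truncation(A, B, C, r):
    """Reduce (A, B, C) to order r by classic balanced truncation
    (square-root algorithm; stable systems only)."""
    # Gramians: A Wc + Wc A^T + B B^T = 0 and A^T Wo + Wo A + C^T C = 0
    Wc = solve_continuous_lyapunov(A, -B @ B.T)
    Wo = solve_continuous_lyapunov(A.T, -C.T @ C)
    Lc = cholesky(Wc, lower=True)
    Lo = cholesky(Wo, lower=True)
    U, s, Vt = svd(Lo.T @ Lc)            # s holds the Hankel singular values
    S = np.diag(s[:r] ** -0.5)
    T = Lc @ Vt[:r].T @ S                # balancing projection matrices
    Ti = S @ U[:, :r].T @ Lo.T
    return Ti @ A @ T, Ti @ B, C @ T, s

rng = np.random.default_rng(0)
n, r = 10, 4
A = rng.standard_normal((n, n))
A = A - (abs(np.linalg.eigvals(A).real).max() + 1.0) * np.eye(n)  # force stability
B = rng.standard_normal((n, 1)); C = rng.standard_normal((1, n))
Ar, Br, Cr, hsv = balanced_truncation(A, B, C, r)

# DC gains of full and reduced models should agree within 2*sum of discarded HSVs
g_full = (C @ np.linalg.solve(-A, B))[0, 0]
g_red = (Cr @ np.linalg.solve(-Ar, Br))[0, 0]
print(round(g_full, 3), round(g_red, 3))
```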

  17. Detecting consistent patterns of directional adaptation using differential selection codon models.

    Science.gov (United States)

    Parto, Sahar; Lartillot, Nicolas

    2017-06-23

    Phylogenetic codon models are often used to characterize the selective regimes acting on protein-coding sequences. Recent methodological developments have led to models explicitly accounting for the interplay between mutation and selection, by modeling the amino acid fitness landscape along the sequence. However, thus far, most of these models have assumed that the fitness landscape is constant over time. Fluctuations of the fitness landscape may often be random or depend on complex and unknown factors. However, some organisms may be subject to systematic changes in selective pressure, resulting in reproducible molecular adaptations across independent lineages subject to similar conditions. Here, we introduce a codon-based differential selection model, which aims to detect and quantify the fine-grained consistent patterns of adaptation at the protein-coding level, as a function of external conditions experienced by the organism under investigation. The model parameterizes the global mutational pressure, as well as the site- and condition-specific amino acid selective preferences. This phylogenetic model is implemented in a Bayesian MCMC framework. After validation with simulations, we applied our method to a dataset of HIV sequences from patients with known HLA genetic background. Our differential selection model detects and characterizes differentially selected coding positions specifically associated with two different HLA alleles. Our differential selection model is able to identify consistent molecular adaptations as a function of repeated changes in the environment of the organism. These models can be applied to many other problems, ranging from viral adaptation to evolution of life-history strategies in plants or animals.

  18. A thermodynamically consistent model for granular-fluid mixtures considering pore pressure evolution and hypoplastic behavior

    Science.gov (United States)

    Hess, Julian; Wang, Yongqi

    2016-11-01

    A new mixture model for granular-fluid flows, which is thermodynamically consistent with the entropy principle, is presented. The extra pore pressure, described by a pressure diffusion equation, and the hypoplastic material behavior, obeying a transport equation, are taken into account. The model is applied to granular-fluid flows, using a closing assumption in conjunction with the dynamic fluid pressure to describe the pressure-like residual unknowns, hereby overcoming previous uncertainties in the modeling process. Besides the thermodynamically consistent modeling, numerical simulations are carried out and demonstrate physically reasonable results, including simple shear flow, in order to investigate the vertical distribution of the physical quantities, and a mixture flow down an inclined plane by means of the depth-integrated model. The results presented give insight into the ability of the deduced model to capture the key characteristics of granular-fluid flows. We acknowledge the support of the Deutsche Forschungsgemeinschaft (DFG) for this work within the Project Number WA 2610/3-1.

  19. Toward a consistent modeling framework to assess multi-sectoral climate impacts.

    Science.gov (United States)

    Monier, Erwan; Paltsev, Sergey; Sokolov, Andrei; Chen, Y-H Henry; Gao, Xiang; Ejaz, Qudsia; Couzo, Evan; Schlosser, C Adam; Dutkiewicz, Stephanie; Fant, Charles; Scott, Jeffery; Kicklighter, David; Morris, Jennifer; Jacoby, Henry; Prinn, Ronald; Haigh, Martin

    2018-02-13

    Efforts to estimate the physical and economic impacts of future climate change face substantial challenges. To enrich the currently popular approaches to impact analysis (which involve evaluation of a damage function or multi-model comparisons based on a limited number of standardized scenarios), we propose integrating a geospatially resolved physical representation of impacts into a coupled human-Earth system modeling framework. Large internationally coordinated exercises cannot easily respond to new policy targets, and the implementation of standard scenarios across models, institutions and research communities can yield inconsistent estimates. Here, we argue for a shift toward the use of a self-consistent integrated modeling framework to assess climate impacts, and discuss ways the integrated assessment modeling community can move in this direction. We then demonstrate the capabilities of such a modeling framework by conducting a multi-sectoral assessment of climate impacts under a range of consistent and integrated economic and climate scenarios that are responsive to new policies and business expectations.

  20. Consistent constitutive modeling of metallic target penetration using empirical, analytical, and numerical penetration models

    Directory of Open Access Journals (Sweden)

    John (Jack) P. Riegel III

    2016-04-01

    Historically, there has been little correlation between the material properties used in (1) empirical formulae, (2) analytical formulations, and (3) numerical models. The various regressions and models may each provide excellent agreement for the depth of penetration into semi-infinite targets, but the input parameters for the empirically based procedures may have little in common with either the analytical model or the numerical model. This paper builds on previous work by Riegel and Anderson (2014) to show how the Effective Flow Stress (EFS) strength model, based on empirical data, can be used as the average flow stress in the analytical Walker-Anderson penetration model (WAPEN; Anderson and Walker, 1991), and how the same value may be utilized as an effective von Mises yield strength in numerical hydrocode simulations to predict the depth of penetration for eroding projectiles at impact velocities in the mechanical response regime of the materials. The method has the benefit of allowing the three techniques (empirical, analytical, and numerical) to work in tandem. The empirical method can be used for many shot-line calculations, while more advanced analytical or numerical models can be employed when necessary to address specific geometries, such as edge effects or layering, that are not treated by the simpler methods. Developing complete constitutive relationships for a material can be costly; if the only concern is depth of penetration, such a level of detail may not be required. The effective flow stress can be determined from a small set of depth-of-penetration experiments in many cases, especially for long penetrators such as the L/D = 10 ones considered here, making it a very practical approach. In the process of performing this effort, the authors also considered numerical simulations by other researchers based on the same set of experimental data used for their empirical and analytical assessment.

  1. Synergistic effects in threshold models on networks

    Science.gov (United States)

    Juul, Jonas S.; Porter, Mason A.

    2018-01-01

    Network structure can have a significant impact on the propagation of diseases, memes, and information on social networks. Different types of spreading processes (and other dynamical processes) are affected by network architecture in different ways, and it is important to develop tractable models of spreading processes on networks to explore such issues. In this paper, we incorporate the idea of synergy into a two-state ("active" or "passive") threshold model of social influence on networks. Our model's update rule is deterministic, and the influence of each meme-carrying (i.e., active) neighbor can—depending on a parameter—either be enhanced or inhibited by an amount that depends on the number of active neighbors of a node. Such a synergistic system models social behavior in which the willingness to adopt either accelerates or saturates in a way that depends on the number of neighbors who have adopted that behavior. We illustrate that our model's synergy parameter has a crucial effect on system dynamics, as it determines whether degree-k nodes are possible or impossible to activate. We simulate synergistic meme spreading on both random-graph models and networks constructed from empirical data. Using a heterogeneous mean-field approximation, which we derive under the assumption that a network is locally tree-like, we are able to determine which synergy-parameter values allow degree-k nodes to be activated for many networks and for a broad family of synergistic models.
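
A deterministic threshold update with a synergy parameter can be simulated directly. The influence function below, m*(1 + beta*(m-1)) for m active neighbours, is a hypothetical instantiation of the synergy idea, not the paper's exact rule; the ring-lattice graph is likewise illustrative.

```python
def spread(adj, seeds, beta, phi):
    """Synchronous deterministic threshold dynamics with synergy:
    m active neighbours exert total influence m*(1 + beta*(m - 1)),
    and a degree-k node activates when influence / k >= phi.
    Activation is monotone, so iteration reaches a fixed point."""
    active = set(seeds)
    while True:
        new = set(active)
        for v, nbrs in adj.items():
            if v in new:
                continue
            m = sum(1 for u in nbrs if u in active)
            if m and m * (1 + beta * (m - 1)) / len(nbrs) >= phi:
                new.add(v)
        if new == active:
            return active
        active = new

# Small ring lattice: node i linked to i +/- 1 and i +/- 2
n = 50
adj = {i: {(i + d) % n for d in (-2, -1, 1, 2)} for i in range(n)}
enhanced = spread(adj, {0, 1}, beta=0.5, phi=0.5)    # enhancing synergy
inhibited = spread(adj, {0, 1}, beta=-0.4, phi=0.5)  # inhibiting synergy
print(len(enhanced), len(inhibited))
```

With enhancing synergy the seed pair triggers a full cascade; with inhibiting synergy the same seeds activate nobody, illustrating how the synergy parameter alone decides whether degree-k nodes can be activated.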

  2. Gossip spread in social network Models

    Science.gov (United States)

    Johansson, Tobias

    2017-04-01

    Gossip almost inevitably arises in real social networks. In this article we investigate the relationship between the number of friends of a person and limits on how far gossip about that person can spread in the network. How far gossip travels in a network depends on two sets of factors: (a) factors determining gossip transmission from one person to the next and (b) factors determining network topology. For a simple model where gossip is spread among people who know the victim, it is known that a standard scale-free network model produces a non-monotonic relationship between number of friends and expected relative spread of gossip, a pattern that is also observed in real networks (Lind et al., 2007). Here, we study gossip spread in two social network models (Toivonen et al., 2006; Vázquez, 2003) by exploring the parameter space of both models and fitting them to a real Facebook data set. Both models can produce the non-monotonic relationship of real networks more accurately than a standard scale-free model while also exhibiting more realistic variability in gossip spread. Of the two models, the one given in Vázquez (2003) best captures both the expected values and variability of gossip spread.
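
The simple spread model referenced above (gossip passes only among people who know the victim) amounts to a reachability computation in the victim's neighbourhood subgraph. A minimal sketch, with a made-up toy graph:

```python
import random
from collections import deque

def gossip_reach(adj, victim, rng):
    """Gossip about `victim` starts at one random friend and can pass
    only between friends of the victim (the induced neighbourhood
    subgraph), as in the simple model of Lind et al. (2007). Returns
    the fraction of the victim's friends eventually reached."""
    friends = adj[victim]
    if not friends:
        return 0.0
    start = rng.choice(sorted(friends))
    seen, q = {start}, deque([start])
    while q:
        u = q.popleft()
        for v in adj[u]:
            if v in friends and v not in seen:
                seen.add(v); q.append(v)
    return len(seen) / len(friends)

# Toy graph: victim 0 has a pair of mutual friends plus two acquaintances
# who know nobody else, so gossip can never reach everyone.
adj = {
    0: {1, 2, 3, 4},
    1: {0, 2}, 2: {0, 1},
    3: {0}, 4: {0},
}
print(gossip_reach(adj, 0, random.Random(1)))
```

Averaging this quantity over many victims of equal degree gives the "expected relative spread of gossip" whose dependence on the number of friends the paper studies.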

  3. A semi-nonparametric mixture model for selecting functionally consistent proteins.

    Science.gov (United States)

    Yu, Lianbo; Doerge, RW

    2010-09-28

    High-throughput technologies have led to a new era of proteomics. Although protein microarray experiments are becoming more commonplace, there are a variety of experimental and statistical issues that have yet to be addressed, and that will carry over to new high-throughput technologies unless they are investigated. One of the largest of these challenges is the selection of functionally consistent proteins. We present a novel semi-nonparametric mixture model for classifying proteins as consistent or inconsistent while controlling the false discovery rate and the false non-discovery rate. The performance of the proposed approach is compared to current methods via simulation under a variety of experimental conditions. We provide a statistical method for selecting functionally consistent proteins in the context of protein microarray experiments, but the proposed semi-nonparametric mixture model method can certainly be generalized to solve other mixture data problems. The main advantage of this approach is that it provides the posterior probability of consistency for each protein.
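
The core idea (fit a two-component mixture, then select proteins by posterior probability while controlling the false discovery rate) can be sketched with a plain parametric Gaussian mixture as a stand-in for the paper's semi-nonparametric model. All data and thresholds below are synthetic.

```python
import numpy as np

def norm_pdf(x, m, s):
    return np.exp(-0.5 * ((x - m) / s) ** 2) / (s * np.sqrt(2 * np.pi))

def em_two_gaussians(x, iters=200):
    """EM for a two-component Gaussian mixture, a simple parametric
    stand-in for the semi-nonparametric mixture. Returns the posterior
    probability that each observation came from component 1 (playing
    the role of 'consistent' proteins)."""
    pi = 0.5
    mu = np.array([x.min(), x.max()])
    sd = np.array([x.std(), x.std()])
    for _ in range(iters):
        p1 = pi * norm_pdf(x, mu[1], sd[1])          # E-step
        p0 = (1 - pi) * norm_pdf(x, mu[0], sd[0])
        r = p1 / (p0 + p1)
        pi = r.mean()                                 # M-step
        mu[1] = np.average(x, weights=r)
        mu[0] = np.average(x, weights=1 - r)
        sd[1] = np.sqrt(np.average((x - mu[1]) ** 2, weights=r))
        sd[0] = np.sqrt(np.average((x - mu[0]) ** 2, weights=1 - r))
    return r

def select_fdr(posterior, alpha=0.05):
    """Select items whose local false discovery rates, averaged over
    the selected set, stay below alpha (a standard posterior-based rule)."""
    order = np.argsort(-posterior)        # most likely 'consistent' first
    lfdr = 1 - posterior[order]
    keep = np.cumsum(lfdr) / np.arange(1, len(lfdr) + 1) <= alpha
    return order[keep]

rng = np.random.default_rng(7)
x = np.concatenate([rng.normal(0, 1, 300), rng.normal(3, 1, 100)])  # synthetic scores
post = em_two_gaussians(x)
picked = select_fdr(post, alpha=0.1)
print(len(picked))
```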

  4. Combating Weapons of Mass Destruction: Models, Complexity, and Algorithms in Complex Dynamic and Evolving Networks

    Science.gov (United States)

    2015-11-01

    Gholamreza, and Ester, Martin. "Modeling the Temporal Dynamics of Social Rating Networks Using Bidirectional Effects of Social Relations and Rating ..." ... "1.1.2 β-disruptor Problems: Besides the homogeneous network model consisting of uniform nodes and bidirectional links, the heterogeneous network model ..." ... "... neural and metabolic networks." Biological Cybernetics 90 (2004): 311-317. doi:10.1007/s00422-004-0479-1

  5. Model Consistent Pseudo-Observations of Precipitation and Their Use for Bias Correcting Regional Climate Models

    Directory of Open Access Journals (Sweden)

    Peter Berg

    2015-01-01

    Lack of suitable observational data makes bias correction of high space- and time-resolution regional climate models (RCMs) problematic. We present a method to construct pseudo-observational precipitation data by merging a large-scale-constrained RCM reanalysis downscaling simulation with coarse time and space resolution observations. The large-scale constraint synchronizes the inner-domain solution to the driving reanalysis model, such that the simulated weather is similar to observations on a monthly time scale. Monthly biases for each single month are corrected to the corresponding month of the observational data and applied at the finer temporal resolution of the RCM. A low-pass filter is applied to the correction factors to retain the small-spatial-scale information of the RCM. The method is applied to a 12.5 km RCM simulation and proven successful in producing a reliable pseudo-observational data set. Furthermore, the constructed data set is applied as reference in a quantile-mapping bias correction and is proven skillful in retaining the small-scale information of the RCM while still correcting the large-scale spatial bias. The proposed method allows bias correction of high-resolution model simulations without changing the fine-scale spatial features, i.e., retaining the very information required by many impact models.
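
The quantile-mapping step for which the pseudo-observations serve as reference can be sketched with empirical quantiles. The gamma distributions and the 1.5x wet bias below are purely illustrative; `obs` stands in for the pseudo-observational reference series.

```python
import numpy as np

def quantile_map(model, obs):
    """Empirical quantile mapping: each model value is passed through
    the model's empirical CDF and then through the inverse empirical
    CDF of the reference, so the corrected series inherits the
    reference distribution."""
    qs = np.linspace(0.0, 1.0, 101)
    m_q = np.quantile(model, qs)   # model quantiles (increasing)
    o_q = np.quantile(obs, qs)     # reference quantiles
    return np.interp(model, m_q, o_q)

rng = np.random.default_rng(3)
obs = rng.gamma(shape=2.0, scale=1.0, size=5000)          # reference precip
model = 1.5 * rng.gamma(shape=2.0, scale=1.0, size=5000)  # wet-biased RCM
corrected = quantile_map(model, obs)
print(round(model.mean(), 2), round(corrected.mean(), 2), round(obs.mean(), 2))
```

After mapping, the corrected series matches the reference distribution quantile by quantile while keeping the model's temporal sequencing, which is exactly why a reference with correct fine-scale structure matters.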

  6. Self-consistent electronic structure of a model stage-1 graphite acceptor intercalate

    International Nuclear Information System (INIS)

    Campagnoli, G.; Tosatti, E.

    1981-04-01

    A simple but self-consistent LCAO scheme is used to study the π-electronic structure of an idealized stage-1 ordered graphite acceptor intercalate, modeled approximately on C8AsF5. The resulting non-uniform charge population within the carbon plane, band structure, optical and energy loss properties are discussed and compared with available spectroscopic evidence. The calculated total energy is used to estimate migration energy barriers and the intercalate vibration mode frequency. (author)

  7. Implicit implementation and consistent tangent modulus of a viscoplastic model for polymers

    OpenAIRE

    ACHOUR, Nadia; CHATZIGEORGIOU, George; MERAGHNI, Fodil; CHEMISKY, Yves; FITOUSSI, Joseph

    2015-01-01

    In this work, the phenomenological viscoplastic DSGZ model (Duan et al., 2001 [13]), developed for glassy or semi-crystalline polymers, is numerically implemented in a three-dimensional framework, following an implicit formulation. The computational methodology is based on the radial return mapping algorithm. This implicit formulation leads to the definition of the consistent tangent modulus, which permits the implementation in incremental micromechanical scale-transition analysis.
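
The elastic-predictor/plastic-corrector structure of return mapping, and the consistent (algorithmic) tangent it yields, can be illustrated in one dimension with linear isotropic hardening. This is a deliberately simplified stand-in, not the DSGZ model; E, H, and the yield stress are made-up values.

```python
def return_map_1d(eps, state, E=200e3, H=10e3, sy0=250.0):
    """One-dimensional radial return with linear isotropic hardening.
    state = (plastic strain, accumulated plastic strain). Returns the
    stress, the updated state, and the consistent tangent modulus."""
    ep, p = state
    sig_trial = E * (eps - ep)               # elastic predictor
    f = abs(sig_trial) - (sy0 + H * p)       # trial yield function
    if f <= 0.0:
        return sig_trial, (ep, p), E         # elastic step: tangent = E
    dgamma = f / (E + H)                     # plastic corrector
    sign = 1.0 if sig_trial > 0 else -1.0
    sig = sig_trial - E * dgamma * sign      # return to the yield surface
    tangent = E * H / (E + H)                # consistent tangent modulus
    return sig, (ep + dgamma * sign, p + dgamma), tangent

# Strain-drive a single material point past yield
state, hist = (0.0, 0.0), []
for k in range(11):
    eps = 0.0003 * k
    sig, state, tang = return_map_1d(eps, state)
    hist.append((eps, sig, tang))
print(hist[2][1], hist[-1][1])
```

The consistent tangent (E*H/(E+H) during plastic loading, rather than the elastic E) is what preserves the quadratic convergence of a Newton solver in an implicit finite-element implementation.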

  8. Energy flow models for the estimation of technical losses in distribution network

    International Nuclear Information System (INIS)

    Au, Mau Teng; Tan, Chin Hooi

    2013-01-01

    This paper presents energy flow models developed to estimate technical losses in distribution networks. The energy flow models applied in this paper are based on the input energy and peak demand of the distribution network, feeder length and peak demand, transformer loading capacity, and load factor. Two case studies, an urban distribution network and a rural distribution network, are used to illustrate the application of the energy flow models. The technical losses obtained for the two distribution networks are consistent and comparable to networks of similar types and characteristics. Hence, the energy flow models are suitable for practical application.
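
A loss estimate from peak demand and load factor might look like the following sketch. It uses the classic empirical loss-load-factor relation LLF = k*LF + (1 - k)*LF**2, a generic textbook formula rather than the paper's exact model; all input figures are invented.

```python
def technical_losses(peak_demand_kw, peak_loss_kw, load_factor,
                     hours=8760.0, k=0.2):
    """Estimate annual technical energy losses from peak-demand data.
    LLF converts the loss at peak load into an average loss over the
    year; k ~ 0.2 is a common choice for distribution feeders."""
    llf = k * load_factor + (1 - k) * load_factor ** 2
    energy_loss_kwh = peak_loss_kw * llf * hours
    input_energy_kwh = peak_demand_kw * load_factor * hours
    return energy_loss_kwh, 100.0 * energy_loss_kwh / input_energy_kwh

loss_kwh, loss_pct = technical_losses(
    peak_demand_kw=5000.0, peak_loss_kw=150.0, load_factor=0.6)
print(round(loss_kwh), round(loss_pct, 2))
```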

  9. Towards reproducible descriptions of neuronal network models.

    Directory of Open Access Journals (Sweden)

    Eilen Nordlie

    2009-08-01

    Progress in science depends on the effective exchange of ideas among scientists. New ideas can be assessed and criticized in a meaningful manner only if they are formulated precisely. This applies to simulation studies as well as to experiments and theories. But after more than 50 years of neuronal network simulations, we still lack a clear and common understanding of the role of computational models in neuroscience as well as established practices for describing network models in publications. This hinders the critical evaluation of network models as well as their re-use. We analyze here 14 research papers proposing neuronal network models of different complexity and find widely varying approaches to model descriptions, with regard to both the means of description and the ordering and placement of material. We further observe great variation in the graphical representation of networks and the notation used in equations. Based on our observations, we propose a good model description practice, composed of guidelines for the organization of publications, a checklist for model descriptions, templates for tables presenting model structure, and guidelines for diagrams of networks. The main purpose of this good practice is to trigger a debate about the communication of neuronal network models in a manner comprehensible to humans, as opposed to machine-readable model description languages. We believe that the good model description practice proposed here, together with a number of other recent initiatives on data-, model-, and software-sharing, may lead to a deeper and more fruitful exchange of ideas among computational neuroscientists in years to come. We further hope that work on standardized ways of describing, and thinking about, complex neuronal networks will lead the scientific community to a clearer understanding of high-level concepts in network dynamics, and will thus lead to deeper insights into the function of the brain.

  10. Improved Maximum Parsimony Models for Phylogenetic Networks.

    Science.gov (United States)

    Van Iersel, Leo; Jones, Mark; Scornavacca, Celine

    2018-05-01

    Phylogenetic networks are well suited to represent evolutionary histories comprising reticulate evolution. Several methods aiming at reconstructing explicit phylogenetic networks have been developed in the last two decades. In this article, we propose a new definition of maximum parsimony for phylogenetic networks that makes it possible to model biological scenarios that cannot be modeled by the definitions currently present in the literature (namely, the "hardwired" and "softwired" parsimony). Building on this new definition, we provide several algorithmic results that lay the foundations for new parsimony-based methods for phylogenetic network reconstruction.
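
The tree case that the hardwired and softwired network definitions generalize is classic small parsimony, computable with Fitch's algorithm. A minimal sketch on a four-leaf toy tree (the rooted-binary representation and node labels are assumptions of this example):

```python
def fitch(tree, leaf_states):
    """Fitch's small-parsimony algorithm on a rooted binary tree.
    `tree` maps each internal node to its (left, right) children;
    leaves appear only in `leaf_states`. Returns the minimum number
    of state changes needed to explain one character."""
    changes = 0

    def post(node):
        nonlocal changes
        if node in leaf_states:
            return {leaf_states[node]}
        left, right = tree[node]
        a, b = post(left), post(right)
        if a & b:                 # intersection step: no change needed
            return a & b
        changes += 1              # union step costs one state change
        return a | b

    post("root")
    return changes

# ((A,C),(C,G)): two cherries with the character states at the leaves
tree = {"root": ("n1", "n2"), "n1": ("L1", "L2"), "n2": ("L3", "L4")}
states = {"L1": "A", "L2": "C", "L3": "C", "L4": "G"}
print(fitch(tree, states))
```

Hardwired network parsimony scores changes over every edge of the network, while softwired parsimony minimizes over the trees the network displays; both reduce to this computation when the network has no reticulations.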

  11. Modeling, robust and distributed model predictive control for freeway networks

    NARCIS (Netherlands)

    Liu, S.

    2016-01-01

    In Model Predictive Control (MPC) for traffic networks, traffic models are crucial since they are used as prediction models for determining the optimal control actions. In order to reduce the computational complexity of MPC for traffic networks, macroscopic traffic models are often used instead of microscopic ones.

  12. Modeling of axonal endoplasmic reticulum network by spastic paraplegia proteins.

    Science.gov (United States)

    Yalçın, Belgin; Zhao, Lu; Stofanko, Martin; O'Sullivan, Niamh C; Kang, Zi Han; Roost, Annika; Thomas, Matthew R; Zaessinger, Sophie; Blard, Olivier; Patto, Alex L; Sohail, Anood; Baena, Valentina; Terasaki, Mark; O'Kane, Cahir J

    2017-07-25

    Axons contain a smooth tubular endoplasmic reticulum (ER) network that is thought to be continuous with ER throughout the neuron; the mechanisms that form this axonal network are unknown. Mutations affecting reticulon or REEP proteins, with intramembrane hairpin domains that model ER membranes, cause an axon degenerative disease, hereditary spastic paraplegia (HSP). We show that Drosophila axons have a dynamic axonal ER network, which these proteins help to model. Loss of HSP hairpin proteins causes ER sheet expansion, partial loss of ER from distal motor axons, and occasional discontinuities in axonal ER. Ultrastructural analysis reveals an extensive ER network in axons, which shows larger and fewer tubules in larvae that lack reticulon and REEP proteins, consistent with loss of membrane curvature. Therefore HSP hairpin-containing proteins are required for shaping and continuity of axonal ER, thus suggesting roles for ER modeling in axon maintenance and function.

  13. Self-consistent Dark Matter simplified models with an s-channel scalar mediator

    Energy Technology Data Exchange (ETDEWEB)

    Bell, Nicole F.; Busoni, Giorgio; Sanderson, Isaac W., E-mail: n.bell@unimelb.edu.au, E-mail: giorgio.busoni@unimelb.edu.au, E-mail: isanderson@student.unimelb.edu.au [ARC Centre of Excellence for Particle Physics at the Terascale, School of Physics, The University of Melbourne, Victoria 3010 (Australia)

    2017-03-01

    We examine Simplified Models in which fermionic DM interacts with Standard Model (SM) fermions via the exchange of an s-channel scalar mediator. The single-mediator version of this model is not gauge invariant, and instead we must consider models with two scalar mediators which mix and interfere. The minimal gauge invariant scenario involves the mixing of a new singlet scalar with the Standard Model Higgs boson, and is tightly constrained. We construct two Higgs doublet model (2HDM) extensions of this scenario, where the singlet mixes with the 2nd Higgs doublet. Compared with the one doublet model, this provides greater freedom for the masses and mixing angle of the scalar mediators, and their coupling to SM fermions. We outline constraints on these models, and discuss Yukawa structures that allow enhanced couplings, yet keep potentially dangerous flavour violating processes under control. We examine the direct detection phenomenology of these models, accounting for interference of the scalar mediators, and interference of different quarks in the nucleus. Regions of parameter space consistent with direct detection measurements are determined.

  14. Self-consistent Dark Matter simplified models with an s-channel scalar mediator

    International Nuclear Information System (INIS)

    Bell, Nicole F.; Busoni, Giorgio; Sanderson, Isaac W.

    2017-01-01

    We examine Simplified Models in which fermionic DM interacts with Standard Model (SM) fermions via the exchange of an s-channel scalar mediator. The single-mediator version of this model is not gauge invariant, and instead we must consider models with two scalar mediators which mix and interfere. The minimal gauge invariant scenario involves the mixing of a new singlet scalar with the Standard Model Higgs boson, and is tightly constrained. We construct two Higgs doublet model (2HDM) extensions of this scenario, where the singlet mixes with the 2nd Higgs doublet. Compared with the one doublet model, this provides greater freedom for the masses and mixing angle of the scalar mediators, and their coupling to SM fermions. We outline constraints on these models, and discuss Yukawa structures that allow enhanced couplings, yet keep potentially dangerous flavour violating processes under control. We examine the direct detection phenomenology of these models, accounting for interference of the scalar mediators, and interference of different quarks in the nucleus. Regions of parameter space consistent with direct detection measurements are determined.

  15. Consistent framework data for modeling and formation of scenarios in the Federal Environment Office; Konsistente Rahmendaten fuer Modellierungen und Szenariobildung im Umweltbundesamt

    Energy Technology Data Exchange (ETDEWEB)

    Weimer-Jehle, Wolfgang; Wassermann, Sandra; Kosow, Hannah [Internationales Zentrum fuer Kultur- und Technikforschung an der Univ. Stuttgart (Germany). ZIRN Interdisziplinaerer Forschungsschwerpunkt Risiko und Nachhaltige Technikentwicklung

    2011-04-15

    Model-based environmental scenarios normally require multiple framework assumptions regarding future social, political and economic developments (external developments). In most cases these framework assumptions are highly uncertain. Furthermore, different external developments are not isolated from each other and their interdependences can be described by qualitative judgments only. If the internal consistency of framework assumptions is not methodologically addressed, environmental models risk to be based on inconsistent combinations of framework assumptions which do not reflect existing relations between the respective factors in an appropriate way. This report aims at demonstrating how consistent context scenarios can be developed with the help of the cross-impact balance analysis (CIB). This method allows not only for the internal consistency of framework assumptions of a single model but also for the overall consistency of framework assumptions of modeling instruments, supporting the integrated interpretation of the results of different models. In order to demonstrate the method, in a first step, ten common framework assumptions were chosen and their possible future developments until 2030 were described. In a second step, a qualitative impact network was developed based on expert elicitation. The impact network provided the basis for a qualitative but systematic analysis of the internal consistency of combinations of framework assumptions. This analysis was carried out with the CIB-method and resulted in a set of consistent context scenarios. These scenarios can be used as an informative background for defining framework assumptions for environmental models at the UBA. (orig.)
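
The CIB consistency rule (each descriptor's chosen state must receive at least as much cross-impact support as any alternative state) can be brute-forced on small descriptor sets. The two-descriptor example and all judgment values below are entirely made up for illustration.

```python
from itertools import product

def consistent_scenarios(states, impact):
    """Brute-force cross-impact balance (CIB) check. A scenario (one
    state per descriptor) is consistent when, for every descriptor,
    the chosen state's total received impact is not beaten by any
    alternative state. impact[(d, s, e, t)] is the judgment-based
    influence of descriptor d in state s on state t of descriptor e."""
    names = sorted(states)
    out = []
    for combo in product(*(states[n] for n in names)):
        scen = dict(zip(names, combo))

        def score(e, t):
            return sum(impact.get((d, scen[d], e, t), 0)
                       for d in names if d != e)

        if all(score(e, scen[e]) >= max(score(e, t) for t in states[e])
               for e in names):
            out.append(scen)
    return out

# Two descriptors with two states each and mutually reinforcing judgments
states = {"economy": ["growth", "stagnation"], "energy": ["high", "low"]}
impact = {
    ("economy", "growth", "energy", "high"): 2,
    ("economy", "stagnation", "energy", "low"): 2,
    ("energy", "high", "economy", "growth"): 1,
    ("energy", "low", "economy", "stagnation"): 1,
}
for s in consistent_scenarios(states, impact):
    print(s)
```

The two self-reinforcing combinations survive the check, mirroring how CIB filters the full combinatorial space of framework assumptions down to internally consistent context scenarios.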

  16. A Theoretically Consistent Framework for Modelling Lagrangian Particle Deposition in Plant Canopies

    Science.gov (United States)

    Bailey, Brian N.; Stoll, Rob; Pardyjak, Eric R.

    2018-06-01

    We present a theoretically consistent framework for modelling Lagrangian particle deposition in plant canopies. The primary focus is on describing the probability of particles encountering canopy elements (i.e., potential deposition); the framework provides a consistent means for including the effects of imperfect deposition through any appropriate sub-model for deposition efficiency. Some aspects of the framework draw upon an analogy to radiation propagation through a turbid medium to develop the model theory. The present method is compared against one of the most commonly used heuristic Lagrangian frameworks, namely that originally developed by Legg and Powell (Agricultural Meteorology, 1979, Vol. 20, 47-67), which is shown to be theoretically inconsistent. A recommendation is made to discontinue the use of this heuristic approach in favour of the theoretically consistent framework developed herein, which is no more difficult to apply under equivalent assumptions. The proposed framework has the additional advantage that it can be applied to arbitrary canopy geometries given readily measurable parameters describing vegetation structure.
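
The turbid-medium analogy gives a Beer-Lambert form for the encounter probability over a path segment, P = 1 - exp(-G*a*l), with a the leaf area density and G a projected-area factor. The sketch below is a generic illustration of that form with invented parameter values, not the paper's calibrated model.

```python
import math
import random

def encounter_probability(path_len, leaf_area_density, G=0.5):
    """Probability that a particle travelling a distance path_len
    through foliage encounters a canopy element, via the turbid-medium
    (Beer-Lambert) analogy: P = 1 - exp(-G * a * l)."""
    return 1.0 - math.exp(-G * leaf_area_density * path_len)

def deposit(path_len, a, efficiency, rng, G=0.5):
    """One Lagrangian step: the particle deposits if it encounters an
    element AND sticks, with sticking handled by any deposition-
    efficiency sub-model (here a constant)."""
    return rng.random() < encounter_probability(path_len, a, G) * efficiency

rng = random.Random(0)
p = encounter_probability(path_len=0.1, leaf_area_density=2.0, G=0.5)
hits = sum(deposit(0.1, 2.0, 0.8, rng) for _ in range(100000))
print(round(p, 4), hits / 100000)
```

Separating the geometric encounter probability from the sticking efficiency is the structural point of the framework: the former follows from measurable vegetation structure, while the latter can be swapped for any sub-model.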

  17. Alfven-wave particle interaction in finite-dimensional self-consistent field model

    International Nuclear Information System (INIS)

    Padhye, N.; Horton, W.

    1998-01-01

    A low-dimensional Hamiltonian model is derived for the acceleration of ions in finite-amplitude Alfven waves in a finite-pressure plasma sheet. The reduced low-dimensional wave-particle Hamiltonian is useful for describing the reaction of the accelerated ions on the wave amplitudes and phases through the self-consistent fields within the envelope approximation. As an example, the authors show, for a single Alfven wave in the central plasma sheet of the Earth's geotail (modeled by the linear pinch geometry called the Harris sheet), the time variation of the wave amplitude during the acceleration of fast protons.

  18. Interstellar turbulence model : A self-consistent coupling of plasma and neutral fluids

    International Nuclear Information System (INIS)

    Shaikh, Dastgeer; Zank, Gary P.; Pogorelov, Nikolai

    2006-01-01

    We present results of a preliminary investigation of interstellar turbulence based on a self-consistent two-dimensional fluid simulation model. Our model describes a partially ionized magnetofluid interstellar medium (ISM) that couples a neutral hydrogen fluid to a plasma through charge exchange interactions and assumes that the ISM turbulent correlation scales are much bigger than the shock characteristic length-scales, but smaller than the charge exchange mean free path length-scales. The shocks have no influence on the ISM turbulent fluctuations. We find that nonlinear interactions in coupled plasma-neutral ISM turbulence are influenced substantially by charge exchange processes.

  19. Self-consistent nonlinearly polarizable shell-model dynamics for ferroelectric materials

    International Nuclear Information System (INIS)

    Mkam Tchouobiap, S.E.; Kofane, T.C.; Ngabireng, C.M.

    2002-11-01

    We investigate the dynamical properties of the polarizable shell model with a symmetric double Morse-type electron-ion interaction in one ionic species. A variational calculation based on the Self-Consistent Einstein Model (SCEM) shows that a theoretical ferroelectric (FE) transition temperature can be derived which demonstrates the presence of a first-order phase transition for the potassium selenate (K 2 SeO 4 ) crystal around Tc ≈ 91.5 K. Comparison of the model calculation with the experimental critical temperature yields satisfactory agreement. (author)

  20. A consistent modelling methodology for secondary settling tanks: a reliable numerical method.

    Science.gov (United States)

    Bürger, Raimund; Diehl, Stefan; Farås, Sebastian; Nopens, Ingmar; Torfs, Elena

    2013-01-01

    The consistent modelling methodology for secondary settling tanks (SSTs) leads to a partial differential equation (PDE) of nonlinear convection-diffusion type as a one-dimensional model for the solids concentration as a function of depth and time. This PDE includes a flux that depends discontinuously on spatial position modelling hindered settling and bulk flows, a singular source term describing the feed mechanism, a degenerating term accounting for sediment compressibility, and a dispersion term for turbulence. In addition, the solution itself is discontinuous. A consistent, reliable and robust numerical method that properly handles these difficulties is presented. Many constitutive relations for hindered settling, compression and dispersion can be used within the model, allowing the user to switch on and off effects of interest depending on the modelling goal as well as investigate the suitability of certain constitutive expressions. Simulations show the effect of the dispersion term on effluent suspended solids and total sludge mass in the SST. The focus is on correct implementation whereas calibration and validation are not pursued.
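    The hindered-settling backbone of such a one-dimensional model can be sketched with a minimal explicit finite-volume scheme. This is only an illustration: the Vesilind-type flux f(phi) = v0*phi*exp(-r*phi), the parameter values, and the omission of the compression and dispersion terms are simplifying assumptions for this sketch, not the paper's full methodology.

```python
import numpy as np

def vesilind_flux(phi, v0=2.0, r=0.5):
    # Hindered-settling flux f(phi) = v0 * phi * exp(-r * phi) (illustrative)
    return v0 * phi * np.exp(-r * phi)

def step_lax_friedrichs(phi, dx, dt, v0=2.0, r=0.5):
    """One explicit step for d(phi)/dt + d f(phi)/dx = 0 on a closed column
    (zero-flux top and bottom), using a Lax-Friedrichs interface flux."""
    f = vesilind_flux(phi, v0, r)
    a = v0  # bound on |f'(phi)| for phi >= 0, used as numerical diffusion speed
    F = 0.5 * (f[:-1] + f[1:]) - 0.5 * a * (phi[1:] - phi[:-1])
    F = np.concatenate(([0.0], F, [0.0]))  # closed boundaries
    return phi - dt / dx * (F[1:] - F[:-1])

n = 100
dx = 1.0 / n                      # x measured downwards from the surface
phi = np.full(n, 3.0)             # uniform initial concentration
dt = 0.4 * dx / 2.0               # CFL-limited step (a = 2.0)
for _ in range(200):
    phi = step_lax_friedrichs(phi, dx, dt)
```

With zero-flux boundaries the interface fluxes telescope, so the scheme conserves mass exactly and keeps the concentration non-negative under the CFL condition, two of the reliability properties a consistent SST scheme must provide.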

  1. Overlap function and Regge cut in a self-consistent multi-Regge model

    International Nuclear Information System (INIS)

    Banerjee, H.; Mallik, S.

    1977-01-01

    A self-consistent multi-Regge model with unit intercept for the input trajectory is presented. Violation of unitarity is avoided in the model by assuming the vanishing of the pomeron-pomeron-hadron vertex, as the mass of either pomeron tends to zero. The model yields an output Regge pole in the inelastic overlap function which for t>0 lies on the r.h.s. of the moving branch point in the complex J-plane, but for t<0 moves to unphysical sheets. The leading Regge-cut contribution to the forward diffraction amplitude can be negative, so that the total cross section predicted by the model attains a limiting value from below.

  2. Overlap function and Regge cut in a self-consistent multi-Regge model

    Energy Technology Data Exchange (ETDEWEB)

    Banerjee, H [Saha Inst. of Nuclear Physics, Calcutta (India); Mallik, S [Bern Univ. (Switzerland). Inst. fuer Theoretische Physik

    1977-04-21

    A self-consistent multi-Regge model with unit intercept for the input trajectory is presented. Violation of unitarity is avoided in the model by assuming the vanishing of the pomeron-pomeron-hadron vertex, as the mass of either pomeron tends to zero. The model yields an output Regge pole in the inelastic overlap function which for t>0 lies on the r.h.s. of the moving branch point in the complex J-plane, but for t<0 moves to unphysical sheets. The leading Regge-cut contribution to the forward diffraction amplitude can be negative, so that the total cross section predicted by the model attains a limiting value from below.

  3. Consistent empirical physical formula construction for recoil energy distribution in HPGe detectors by using artificial neural networks

    International Nuclear Information System (INIS)

    Akkoyun, Serkan; Yildiz, Nihat

    2012-01-01

    The gamma-ray tracking technique is a highly efficient detection method in experimental nuclear structure physics. On the basis of this method, two gamma-ray tracking arrays, AGATA in Europe and GRETA in the USA, are currently being tested. The interactions of neutrons in these detectors lead to an unwanted background in the gamma-ray spectra. Thus, the interaction points of neutrons in these detectors have to be determined in the gamma-ray tracking process in order to improve photo-peak efficiencies and peak-to-total ratios of the gamma-ray peaks. In this paper, the recoil energy distributions of germanium nuclei due to inelastic scatterings of 1–5 MeV neutrons were first obtained by simulation experiments. Secondly, as a novel approach, for these highly nonlinear detector responses of recoiling germanium nuclei, consistent empirical physical formulas (EPFs) were constructed by appropriate layered feedforward neural networks (LFNNs). The LFNN-EPFs are of explicit mathematical functional form. Therefore, the LFNN-EPFs can be used to derive further physical functions which could be potentially relevant for the determination of neutron interactions in the gamma-ray tracking process.

  4. An Ice Model That is Consistent with Composite Rheology in GIA Modelling

    Science.gov (United States)

    Huang, P.; Patrick, W.

    2017-12-01

    There are several popular approaches to constructing ice history models. One of them is mainly based on thermo-mechanical ice models with forcing or boundary conditions inferred from paleoclimate data. The second one is mainly based on the observed response of the Earth to glacial loading and unloading, a process called Glacial Isostatic Adjustment or GIA. The third approach is a hybrid of the first and second approaches. In this presentation, we will follow the second approach, which also uses geological data such as ice flow and terminal moraine data and simple ice dynamics for the ice-sheet reconstruction (Peltier & Andrews 1976). The global ice model ICE-6G (Peltier et al. 2015) and all its predecessors (Tushingham & Peltier 1991, Peltier 1994, 1996, 2004, Lambeck et al. 2014) are constructed this way under the assumption that mantle rheology is linear. However, high-temperature creep experiments on mantle rocks show that non-linear creep laws can also operate in the mantle. Since both linear (e.g. diffusion) and non-linear (e.g. dislocation) creep laws can operate simultaneously in the mantle, mantle rheology is likely composite, where the total creep is the sum of both linear and non-linear creep. Preliminary GIA studies found that composite rheology can fit regional RSL observations better than linear rheology (e.g. van der Wal et al. 2010). The aim of this paper is to construct ice models in Laurentia and Fennoscandia using this second approach, but with composite rheology, so that its predictions can fit GIA observations such as global RSL data, land uplift rates and g-dot simultaneously, in addition to geological data and simple ice dynamics. The g-dot or gravity-rate-of-change data are from the GRACE gravity mission but with the effects of hydrology removed. Our GIA model is based on the Coupled Laplace-Finite Element method as described in Wu (2004) and van der Wal et al. (2010).
It is found that composite rheology generally supports a thicker

  5. Possible world based consistency learning model for clustering and classifying uncertain data.

    Science.gov (United States)

    Liu, Han; Zhang, Xianchao; Zhang, Xiaotong

    2018-06-01

    The possible-world approach has been shown to be effective for handling various types of data uncertainty in uncertain data management. However, few uncertain data clustering and classification algorithms have been proposed based on possible worlds. Moreover, existing possible-world-based algorithms suffer from the following issues: (1) they deal with each possible world independently and ignore the consistency principle across different possible worlds; (2) they require an extra post-processing procedure to obtain the final result, which means that their effectiveness relies heavily on the post-processing method and their efficiency is also not very good. In this paper, we propose a novel possible-world-based consistency learning model for uncertain data, which can be extended to both clustering and classification of uncertain data. This model utilizes the consistency principle to learn a consensus affinity matrix for uncertain data, which can make full use of the information across different possible worlds and thus improve clustering and classification performance. Meanwhile, the model imposes a new rank constraint on the Laplacian matrix of the consensus affinity matrix, thereby ensuring that the number of connected components in the consensus affinity matrix is exactly equal to the number of classes. This also means that the clustering and classification results can be obtained directly without any post-processing procedure. Furthermore, for the clustering and classification tasks, we respectively derive efficient optimization methods to solve the proposed model. Experimental results on real benchmark datasets and real-world uncertain datasets show that the proposed model outperforms state-of-the-art uncertain data clustering and classification algorithms in effectiveness and performs competitively in efficiency. Copyright © 2018 Elsevier Ltd. All rights reserved.

  6. Modelling of virtual production networks

    Directory of Open Access Journals (Sweden)

    2011-03-01

    Full Text Available Nowadays many companies, especially small and medium-sized enterprises (SMEs), specialize in a limited field of production. This requires forming virtual production networks of cooperating enterprises to manufacture better, faster and cheaper. Apart from that, some production orders cannot be realized because no single company has sufficient production potential. In this case a virtual production network of cooperating companies can realize these production orders. Such networks have larger production capacity and many different resources, and can therefore realize many more production orders together than each member could separately. This organization allows high-quality products to be executed, while the maintenance costs of production capacity and of the resources used remain moderate. In this paper a methodology for rapid prototyping of virtual production networks is proposed. It allows production orders to be executed on time, taking existing logistic constraints into consideration.

  7. A Network Disruption Modeling Tool

    National Research Council Canada - National Science Library

    Leinart, James

    1998-01-01

    Given that network disruption has been identified as a military objective and C2-attack has been identified as the mechanism to accomplish this objective, a target set must be acquired and priorities...

  8. Modeling Epidemics Spreading on Social Contact Networks.

    Science.gov (United States)

    Zhang, Zhaoyang; Wang, Honggang; Wang, Chonggang; Fang, Hua

    2015-09-01

    Social contact networks and the way people interact with each other are the key factors that impact epidemic spreading. However, it is challenging to model the behavior of epidemics based on social contact networks due to their high dynamics. Traditional models such as the susceptible-infected-recovered (SIR) model ignore the crowding or protection effect and thus make some unrealistic assumptions. In this paper, we consider the crowding or protection effect and develop a novel model called the improved SIR model. Then, we use both deterministic and stochastic models to characterize the dynamics of epidemics on social contact networks. The results from both simulations and a real data set show that epidemics are more likely to break out on social contact networks with higher average degree. We also present some potential immunization strategies, such as random-set immunization, dominating-set immunization, and high-degree-set immunization, to further support this conclusion.
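    The abstract does not give the exact form of the improved model's crowding term, but a common way to encode a crowding/protection effect in a deterministic SIR model is a saturating incidence rate; the sketch below uses that assumption, with illustrative parameter values.

```python
def simulate_sir_crowding(beta=0.4, gamma=0.1, alpha=0.05,
                          s0=0.99, i0=0.01, dt=0.1, steps=2000):
    """Forward-Euler SIR with saturating incidence beta*S*I/(1+alpha*I),
    one common way to model a crowding/protection effect. State variables
    are population fractions, so S + I + R = 1 throughout."""
    s, i, r = s0, i0, 0.0
    for _ in range(steps):
        new_inf = beta * s * i / (1.0 + alpha * i)  # crowding-limited infections
        rec = gamma * i                             # recoveries
        s, i, r = s - dt * new_inf, i + dt * (new_inf - rec), r + dt * rec
    return s, i, r

s, i, r = simulate_sir_crowding()
```

With these values the basic reproduction number is about beta/gamma = 4, so the epidemic takes off and most of the population ends up recovered; increasing alpha strengthens the crowding effect and slows the outbreak.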

  9. Spatial Epidemic Modelling in Social Networks

    Science.gov (United States)

    Simoes, Joana Margarida

    2005-06-01

    The spread of infectious diseases is highly influenced by the structure of the underlying social network. The target of this study is not the network of acquaintances, but the social mobility network: the daily movement of people between locations in regions. It has already been shown that this kind of network exhibits small-world characteristics. The model developed is agent-based (ABM) and comprises a movement model and an infection model. In the movement model, some assumptions are made about its structure, and the daily movement is decomposed into four types: neighborhood, intra-region, inter-region and random. The model is Geographical Information Systems (GIS) based and uses real data to define its geometry. Because it is a vector model, some optimization techniques were used to increase its efficiency.

  10. Implementing network constraints in the EMPS model

    Energy Technology Data Exchange (ETDEWEB)

    Helseth, Arild; Warland, Geir; Mo, Birger; Fosso, Olav B.

    2010-02-15

    This report concerns the coupling of detailed market and network models for long-term hydro-thermal scheduling. Currently, the EPF model (Samlast) is the only tool available for this task for actors in the Nordic market. A new prototype for solving the coupled market and network problem has been developed. The prototype is based on the EMPS model (Samkjoeringsmodellen). Results from the market model are distributed to a detailed network model, where a DC load flow detects if there are overloads on monitored lines or intersections. In case of overloads, network constraints are generated and added to the market problem. Theoretical and implementation details for the new prototype are elaborated in this report. The performance of the prototype is tested against the EPF model on a 20-area Nordic dataset. (Author)
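    The overload check described here rests on a DC load flow, which solves B·θ = P for the bus voltage angles and derives each line flow from the angle difference across the line. A minimal sketch on a hypothetical 3-bus system (the line reactances and injections are made-up illustrative values, not data from the Nordic dataset):

```python
import numpy as np

# Hypothetical 3-bus system: (from_bus, to_bus, reactance) per line,
# and net power injections in per-unit (bus 0 is the slack bus).
lines = [(0, 1, 0.1), (1, 2, 0.2), (0, 2, 0.2)]
p = np.array([0.0, -0.6, -0.4])
p[0] = -p[1:].sum()               # slack generation balances the loads

n = 3
B = np.zeros((n, n))              # DC susceptance (B') matrix
for i, j, x in lines:
    b = 1.0 / x
    B[i, i] += b
    B[j, j] += b
    B[i, j] -= b
    B[j, i] -= b

theta = np.zeros(n)
theta[1:] = np.linalg.solve(B[1:, 1:], p[1:])   # slack angle fixed at 0
# Line flow from i to j is the angle difference divided by the reactance.
flows = {(i, j): (theta[i] - theta[j]) / x for i, j, x in lines}
```

Monitored lines whose |flow| exceeds their limit would then, as in the prototype, generate network constraints that are added back to the market problem.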

  11. Role models for complex networks

    Science.gov (United States)

    Reichardt, J.; White, D. R.

    2007-11-01

    We present a framework for automatically decomposing (“block-modeling”) the functional classes of agents within a complex network. These classes are represented by the nodes of an image graph (“block model”) depicting the main patterns of connectivity and thus functional roles in the network. Using a first principles approach, we derive a measure for the fit of a network to any given image graph allowing objective hypothesis testing. From the properties of an optimal fit, we derive how to find the best fitting image graph directly from the network and present a criterion to avoid overfitting. The method can handle both two-mode and one-mode data, directed and undirected as well as weighted networks and allows for different types of links to be dealt with simultaneously. It is non-parametric and computationally efficient. The concepts of structural equivalence and modularity are found as special cases of our approach. We apply our method to the world trade network and analyze the roles individual countries play in the global economy.

  12. Self-consistent field model for strong electrostatic correlations and inhomogeneous dielectric media.

    Science.gov (United States)

    Ma, Manman; Xu, Zhenli

    2014-12-28

    Electrostatic correlations and variable permittivity of electrolytes are essential for exploring many chemical and physical properties of interfaces in aqueous solutions. We propose a continuum electrostatic model for the treatment of these effects in the framework of the self-consistent field theory. The model incorporates a space- or field-dependent dielectric permittivity and an excluded ion-size effect for the correlation energy. This results in a self-energy modified Poisson-Nernst-Planck or Poisson-Boltzmann equation together with state equations for the self energy and the dielectric function. We show that the ionic size is of significant importance in predicting a finite self energy for an ion in an inhomogeneous medium. An asymptotic approximation is proposed for the solution of a generalized Debye-Hückel equation, which has been shown to capture the ionic correlation and dielectric self energy. Through simulating the ionic distribution surrounding a macroion, the modified self-consistent field model is shown to agree with particle-based Monte Carlo simulations. Numerical results for symmetric and asymmetric electrolytes demonstrate that the model is able to predict the charge inversion in the high-correlation regime in the presence of multivalent interfacial ions, which is beyond the mean-field theory, and also show a strong effect on the double-layer structure due to the space- or field-dependent dielectric permittivity.
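    As a baseline for these corrections, the classical mean-field Poisson-Boltzmann equation (which the proposed model generalises with self-energy and variable-permittivity terms) can be solved in dimensionless 1-D form by Newton iteration on a finite-difference grid. The surface potential, domain length and grid size below are illustrative choices, not values from the paper.

```python
import numpy as np

def solve_pb(phi0=2.0, L=10.0, n=200, iters=30):
    """Newton iteration for the dimensionless 1-D mean-field
    Poisson-Boltzmann equation phi'' = sinh(phi) (symmetric 1:1
    electrolyte, lengths in Debye units), phi(0)=phi0, phi(L)=0."""
    h = L / (n + 1)
    x = np.linspace(h, L - h, n)
    phi = phi0 * np.exp(-x)                  # Debye-Hueckel initial guess
    for _ in range(iters):
        left = np.concatenate(([phi0], phi[:-1]))
        right = np.concatenate((phi[1:], [0.0]))
        F = (left - 2.0 * phi + right) / h**2 - np.sinh(phi)  # residual
        # Tridiagonal Jacobian, assembled densely for clarity
        J = (np.diag(-2.0 / h**2 - np.cosh(phi))
             + np.diag(np.full(n - 1, 1.0 / h**2), 1)
             + np.diag(np.full(n - 1, 1.0 / h**2), -1))
        phi = phi - np.linalg.solve(J, F)
    return x, phi

x, phi = solve_pb()
```

The resulting profile decays monotonically away from the charged wall and lies below the linear Debye-Hückel solution, since sinh(phi) screens more strongly than the linearised term.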

  13. Self-consistent field model for strong electrostatic correlations and inhomogeneous dielectric media

    Energy Technology Data Exchange (ETDEWEB)

    Ma, Manman, E-mail: mmm@sjtu.edu.cn; Xu, Zhenli, E-mail: xuzl@sjtu.edu.cn [Department of Mathematics, Institute of Natural Sciences, and MoE Key Laboratory of Scientific and Engineering Computing, Shanghai Jiao Tong University, Shanghai 200240 (China)

    2014-12-28

    Electrostatic correlations and variable permittivity of electrolytes are essential for exploring many chemical and physical properties of interfaces in aqueous solutions. We propose a continuum electrostatic model for the treatment of these effects in the framework of the self-consistent field theory. The model incorporates a space- or field-dependent dielectric permittivity and an excluded ion-size effect for the correlation energy. This results in a self-energy modified Poisson-Nernst-Planck or Poisson-Boltzmann equation together with state equations for the self energy and the dielectric function. We show that the ionic size is of significant importance in predicting a finite self energy for an ion in an inhomogeneous medium. An asymptotic approximation is proposed for the solution of a generalized Debye-Hückel equation, which has been shown to capture the ionic correlation and dielectric self energy. Through simulating the ionic distribution surrounding a macroion, the modified self-consistent field model is shown to agree with particle-based Monte Carlo simulations. Numerical results for symmetric and asymmetric electrolytes demonstrate that the model is able to predict the charge inversion in the high-correlation regime in the presence of multivalent interfacial ions, which is beyond the mean-field theory, and also show a strong effect on the double-layer structure due to the space- or field-dependent dielectric permittivity.

  14. Modeling the interdependent network based on two-mode networks

    Science.gov (United States)

    An, Feng; Gao, Xiangyun; Guan, Jianhe; Huang, Shupei; Liu, Qian

    2017-10-01

    Among heterogeneous networks there exist obvious and close interdependent linkages. Unlike existing research, which focuses primarily on theoretical models of physical interdependent networks, we propose a two-layer interdependent network model based on two-mode networks to explore interdependent features in reality. Specifically, we construct a two-layer interdependent loan network and develop several dependence-feature indices. The model is shown to capture the loan dependence features of listed companies based on loan behaviors and shared shareholders. Taking the Chinese debit and credit market as a case study, the main conclusions are: (1) only a few listed companies shoulder the main capital transmission (20% of listed companies account for almost 70% of the dependence degree); (2) controlling these key listed companies is more effective in avoiding the spread of financial risks; (3) identifying the companies with high betweenness centrality and controlling them could help monitor financial-risk spreading; (4) the capital transmission channels between Chinese financial listed companies and Chinese non-financial listed companies are relatively strong; however, under greater pressure on capital transmission (70% of edges failed), the transmission channel constructed by debit and credit behavior will eventually collapse.
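    A one-mode dependence network can be projected from two-mode data such as shared shareholdings by weighting each firm-firm edge with the number of shareholders the two firms share. A standard-library-only sketch with hypothetical holdings data (the shareholder and firm names are invented for illustration):

```python
from itertools import combinations
from collections import Counter

# Hypothetical two-mode (shareholder-firm) data.
holdings = {
    "ShareholderA": ["Firm1", "Firm2"],
    "ShareholderB": ["Firm2", "Firm3"],
    "ShareholderC": ["Firm1", "Firm2", "Firm3"],
}

# One-mode projection onto firms: the weight of a firm-firm edge is
# the number of shareholders the two firms share.
weights = Counter()
for firms in holdings.values():
    for pair in combinations(sorted(firms), 2):
        weights[pair] += 1
```

Here Firm1 and Firm2 share two holders (A and C), so their tie is twice as strong as the Firm1-Firm3 tie; the same projection applied to loan relations yields the second layer of the model.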

  15. Latent variable models are network models.

    Science.gov (United States)

    Molenaar, Peter C M

    2010-06-01

    Cramer et al. present an original and interesting network perspective on comorbidity and contrast this perspective with a more traditional interpretation of comorbidity in terms of latent variable theory. My commentary focuses on the relationship between the two perspectives; that is, it aims to qualify the presumed contrast between interpretations in terms of networks and latent variables.

  16. Traffic Multiresolution Modeling and Consistency Analysis of Urban Expressway Based on Asynchronous Integration Strategy

    Directory of Open Access Journals (Sweden)

    Liyan Zhang

    2017-01-01

    Full Text Available The paper studies a multiresolution traffic flow simulation model of an urban expressway. Firstly, a three-level multiresolution hybrid model was chosen over a two-level hybrid model. Then, the multiresolution simulation framework and integration strategies are introduced. Thirdly, the paper proposes an urban expressway multiresolution traffic simulation model with an asynchronous integration strategy based on set theory, which includes three submodels: a macromodel, a mesomodel, and a micromodel. After that, the applicable conditions and derivation process of the three submodels are discussed in detail. In addition, in order to simulate and evaluate the multiresolution model, a "simple simulation scenario" of the North-South Elevated Expressway in Shanghai has been established. The simulation results showed the following. (1) The volume-density relationships of the three submodels agree with detector data. (2) When traffic density is high, the macromodel has high precision, smaller errors, and less dispersion in its results; compared with the macromodel, the simulation accuracies of the micromodel and mesomodel are lower and their errors are bigger. (3) The multiresolution model can simulate the characteristics of traffic flow, capture traffic waves, and keep the consistency of traffic state transitions. Finally, the results showed that the novel multiresolution model achieves higher simulation accuracy and is feasible and effective in a real traffic simulation scenario.

  17. Homophyly/Kinship Model: Naturally Evolving Networks

    Science.gov (United States)

    Li, Angsheng; Li, Jiankou; Pan, Yicheng; Yin, Xianchen; Yong, Xi

    2015-10-01

    It has been a challenge to understand the formation and roles of social groups or natural communities in the evolution of species, societies and real world networks. Here, we propose the hypothesis that homophyly/kinship is the intrinsic mechanism of natural communities, introduce the notion of the affinity exponent and propose the homophyly/kinship model of networks. We demonstrate that the networks of our model satisfy a number of topological, probabilistic and combinatorial properties and, in particular, that the robustness and stability of natural communities increase as the affinity exponent increases and that the reciprocity of the networks in our model decreases as the affinity exponent increases. We show that both homophyly/kinship and reciprocity are essential to the emergence of cooperation in evolutionary games and that the homophyly/kinship and reciprocity determined by the appropriate affinity exponent guarantee the emergence of cooperation in evolutionary games, verifying Darwin’s proposal that kinship and reciprocity are the means of individual fitness. We propose the new principle of structure entropy minimisation for detecting natural communities of networks and verify the functional module property and characteristic properties by a healthy tissue cell network, a citation network, some metabolic networks and a protein interaction network.
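    The homophyly mechanism itself (nodes attaching preferentially within their own group) can be illustrated with a toy planted-partition generator. This sketch is not the authors' affinity-exponent model, and all parameters are illustrative; it only shows how intra-group preference produces dense natural communities.

```python
import random

def homophilous_graph(n=200, groups=4, p_in=0.15, p_out=0.01, seed=1):
    """Toy homophily generator: each node gets a group label, and a pair
    of nodes is linked with probability p_in inside a group and p_out
    across groups."""
    rng = random.Random(seed)
    label = [v % groups for v in range(n)]
    edges = [(u, v) for u in range(n) for v in range(u + 1, n)
             if rng.random() < (p_in if label[u] == label[v] else p_out)]
    return label, edges

label, edges = homophilous_graph()
inside = sum(label[u] == label[v] for u, v in edges)  # within-group edges
```

With p_in much larger than p_out, within-group edges dominate even though cross-group pairs are far more numerous, which is the structural signature of homophilous communities.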

  18. A Consistent Methodology Based Parameter Estimation for a Lactic Acid Bacteria Fermentation Model

    DEFF Research Database (Denmark)

    Spann, Robert; Roca, Christophe; Kold, David

    2017-01-01

    Lactic acid bacteria are used in many industrial applications, e.g. as starter cultures in the dairy industry or as probiotics, and research on their cell production is highly required. A first-principles kinetic model was developed to describe and understand the biological, physical, and chemical mechanisms in a lactic acid bacteria fermentation. We present here a consistent approach for a methodology-based parameter estimation for a lactic acid fermentation. In the beginning, just an initial knowledge-based guess of parameters was available, and an initial estimation of the complete set of parameters was performed in order to get a good model fit to the data. However, not all parameters are identifiable with the given data set and model structure. Sensitivity, identifiability, and uncertainty analyses were completed and a relevant identifiable subset of parameters was determined for a new...

  19. Consistent phase-change modeling for CO2-based heat mining operation

    DEFF Research Database (Denmark)

    Singh, Ashok Kumar; Veje, Christian

    2017-01-01

    The accuracy of mathematical modeling of phase-change phenomena is limited if a simple, less accurate equation of state completes the governing partial differential equation. However, fluid properties (such as density, dynamic viscosity and compressibility) and saturation state are calculated using a highly accurate, complex equation of state. This leads to unstable and inaccurate simulation, as the equation of state and the governing partial differential equations are mutually inconsistent. In this study, the volume-translated Peng–Robinson equation of state was used with emphasis on modelling the liquid–gas phase transition with more accuracy and consistency. Calculation of fluid properties and saturation state was based on the volume-translated Peng–Robinson equation of state and the results were verified. The present model has been applied to a scenario to simulate a CO2-based heat mining process. In this paper

  20. Simulation of recrystallization textures in FCC materials based on a self consistent model

    International Nuclear Information System (INIS)

    Bolmaro, R.E; Roatta, A; Fourty, A.L; Signorelli, J.W; Bertinetti, M.A

    2004-01-01

    The development of recrystallization textures in FCC polycrystalline materials has been a long-standing scientific problem. The appearance of the so-called cube component in laminated high stacking-fault-energy FCC materials is not an entirely understood phenomenon. This work approaches the problem using a self-consistent simulation technique of homogenization. The information on first preferential neighbors is used in the model to consider grain-boundary energies and intragranular misorientations and to treat the growth of grains and the mobility of the grain boundary. The energies accumulated by deformation are taken as the driving energies of nucleation, and the subsequent growth is statistically governed by the grain-boundary energies. The model shows the correct trend for recrystallization textures obtained from previously simulated deformation textures for high and low stacking-fault-energy FCC materials. The model's topological representation is discussed (CW)

  1. Nonparametric test of consistency between cosmological models and multiband CMB measurements

    Energy Technology Data Exchange (ETDEWEB)

    Aghamousa, Amir [Asia Pacific Center for Theoretical Physics, Pohang, Gyeongbuk 790-784 (Korea, Republic of); Shafieloo, Arman, E-mail: amir@apctp.org, E-mail: shafieloo@kasi.re.kr [Korea Astronomy and Space Science Institute, Daejeon 305-348 (Korea, Republic of)

    2015-06-01

    We present a novel approach to test the consistency of cosmological models with multiband CMB data using a nonparametric approach. In our analysis we calibrate the REACT (Risk Estimation and Adaptation after Coordinate Transformation) confidence levels associated with distances in function space (confidence distances) based on Monte Carlo simulations in order to test the consistency of an assumed cosmological model with observation. To show the applicability of our algorithm, we confront Planck 2013 temperature data with the concordance model of cosmology, considering two different Planck spectra combinations. In order to have an accurate quantitative statistical measure to compare the data and the theoretical expectations, we calibrate REACT confidence distances and perform a bias control using many realizations of the data. Our results in this work using Planck 2013 temperature data put the best fit ΛCDM model at 95% (∼ 2σ) confidence distance from the center of the nonparametric confidence set, while repeating the analysis excluding the Planck 217 × 217 GHz spectrum data, the best fit ΛCDM model shifts to 70% (∼ 1σ) confidence distance. The most prominent features in the data deviating from the best fit ΛCDM model seem to be at low multipoles 18 < ℓ < 26 at greater than 2σ, ℓ ∼ 750 at ∼1 to 2σ and ℓ ∼ 1800 at greater than 2σ level. Excluding the 217×217 GHz spectrum, the feature at ℓ ∼ 1800 becomes substantially less significant, at the ∼1 to 2σ confidence level. Results of our analysis based on the new approach we propose in this work are in agreement with other analyses done using alternative methods.

  2. An endogenous model of the credit network

    Science.gov (United States)

    He, Jianmin; Sui, Xin; Li, Shouwei

    2016-01-01

    In this paper, an endogenous credit network model of firm-bank agents is constructed. The model describes the endogenous formation of firm-firm, firm-bank and bank-bank credit relationships. By means of simulations, the model is capable of showing some obvious similarities with empirical evidence found by other scholars: the upper tail of the firm size distribution can be well fitted with a power law; the bank size distribution can be lognormally distributed with a power-law tail; and the bank in-degrees of the interbank credit network as well as the firm-bank credit network follow two power-law distributions.

  3. Modelling and designing electric energy networks

    International Nuclear Information System (INIS)

    Retiere, N.

    2003-11-01

    The author gives an overview of his research works in the field of electric network modelling. After a brief overview of technological evolutions from the telegraph to the all-electric fly-by-wire aircraft, he reports and describes various works dealing with a simplified modelling of electric systems and with fractal simulation. Then, he outlines the challenges for the design of electric networks, proposes a design process, gives an overview of various design models, methods and tools, and reports an application in the design of electric networks for future jumbo jets

  4. Queueing Models for Mobile Ad Hoc Networks

    NARCIS (Netherlands)

    de Haan, Roland

    2009-01-01

    This thesis presents models for the performance analysis of a recent communication paradigm: \\emph{mobile ad hoc networking}. The objective of mobile ad hoc networking is to provide wireless connectivity between stations in a highly dynamic environment. These dynamics are driven by the mobility of
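
    The kind of queueing analysis referred to above can be illustrated with the textbook M/M/1 queue; a minimal sketch (the M/M/1 model is a generic stand-in, not necessarily one of the models developed in the thesis):

```python
def mm1_metrics(lam, mu):
    """Classic M/M/1 steady-state metrics: utilization, mean number in
    system, and mean sojourn time (stability requires lam < mu)."""
    assert lam < mu, "stability requires arrival rate < service rate"
    rho = lam / mu
    L = rho / (1 - rho)   # mean number in system
    W = 1 / (mu - lam)    # mean sojourn time; Little's law gives L = lam * W
    return rho, L, W

# Illustrative rates: 2 packets/s arriving, 5 packets/s service capacity.
rho, L, W = mm1_metrics(lam=2.0, mu=5.0)
print(f"utilization {rho:.1f}, mean in system {L:.2f}, mean delay {W:.2f} s")
```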

  5. Modeling GMPLS and Optical MPLS Networks

    DEFF Research Database (Denmark)

    Christiansen, Henrik Lehrmann; Wessing, Henrik

    2003-01-01

    . The MPLS concept is attractive because it can work as a unifying control structure covering all technologies. This paper describes how a novel scheme for optical MPLS and circuit switched GMPLS based networks can be incorporated in such multi-domain, MPLS-based scenarios and how it could be modeled. Network...

  6. Cyber threat model for tactical radio networks

    Science.gov (United States)

    Kurdziel, Michael T.

    2014-05-01

    The shift to a full information-centric paradigm in the battlefield has allowed ConOps to be developed that are only possible using modern network communications systems. Securing these Tactical Networks without impacting their capabilities has been a challenge. Tactical networks with fixed infrastructure have similar vulnerabilities to their commercial counterparts (although they need to be secure against adversaries with greater capabilities, resources and motivation). However, networks with mobile infrastructure components and Mobile Ad hoc Networks (MANets) have additional unique vulnerabilities that must be considered. It is useful to examine Tactical Network based ConOps and use them to construct a threat model and baseline cyber security requirements for Tactical Networks with fixed infrastructure, mobile infrastructure and/or ad hoc modes of operation. This paper will present an introduction to threat model assessment. A definition and detailed discussion of a Tactical Network threat model is also presented. Finally, the model is used to derive baseline requirements that can be used to design or evaluate a cyber security solution that can be scaled and adapted to the needs of specific deployments.

  7. Modeling documents with Generative Adversarial Networks

    OpenAIRE

    Glover, John

    2016-01-01

    This paper describes a method for using Generative Adversarial Networks to learn distributed representations of natural language documents. We propose a model that is based on the recently proposed Energy-Based GAN, but instead uses a Denoising Autoencoder as the discriminator network. Document representations are extracted from the hidden layer of the discriminator and evaluated both quantitatively and qualitatively.

  8. Designing Network-based Business Model Ontology

    DEFF Research Database (Denmark)

    Hashemi Nekoo, Ali Reza; Ashourizadeh, Shayegheh; Zarei, Behrouz

    2015-01-01

    Survival in a dynamic environment is not achieved without a map. Scanning and monitoring of the market show business models to be a fruitful tool. But scholars believe that old-fashioned business models are dead, as they do not include the effects of the internet and networks. This paper...... is going to propose an e-business model ontology from the network point of view and its application in the real world. The suggested ontology for network-based businesses is composed of individuals' characteristics and what kind of resources they own. Also, their connections and pre-conceptions of connections...... such as shared mental models and trust. However, it mostly covers previous business model elements. To confirm the applicability of this ontology, it has been implemented in a business angel network and showed how it works....

  9. Commensurate comparisons of models with energy budget observations reveal consistent climate sensitivities

    Science.gov (United States)

    Armour, K.

    2017-12-01

    Global energy budget observations have been widely used to constrain the effective, or instantaneous climate sensitivity (ICS), producing median estimates around 2°C (Otto et al. 2013; Lewis & Curry 2015). A key question is whether the comprehensive climate models used to project future warming are consistent with these energy budget estimates of ICS. Yet, performing such comparisons has proven challenging. Within models, values of ICS robustly vary over time, as surface temperature patterns evolve with transient warming, and are generally smaller than the values of equilibrium climate sensitivity (ECS). Naively comparing values of ECS in CMIP5 models (median of about 3.4°C) to observation-based values of ICS has led to the suggestion that models are overly sensitive. This apparent discrepancy can partially be resolved by (i) comparing observation-based values of ICS to model values of ICS relevant for historical warming (Armour 2017; Proistosescu & Huybers 2017); (ii) taking into account the "efficacies" of non-CO2 radiative forcing agents (Marvel et al. 2015); and (iii) accounting for the sparseness of historical temperature observations and differences in sea-surface temperature and near-surface air temperature over the oceans (Richardson et al. 2016). Another potential source of discrepancy is a mismatch between observed and simulated surface temperature patterns over recent decades, due to either natural variability or model deficiencies in simulating historical warming patterns. The nature of the mismatch is such that simulated patterns can lead to more positive radiative feedbacks (higher ICS) relative to those engendered by observed patterns. The magnitude of this effect has not yet been addressed. Here we outline an approach to perform fully commensurate comparisons of climate models with global energy budget observations that take all of the above effects into account. We find that when apples-to-apples comparisons are made, values of ICS in models are

  10. Are water simulation models consistent with steady-state and ultrafast vibrational spectroscopy experiments?

    International Nuclear Information System (INIS)

    Schmidt, J.R.; Roberts, S.T.; Loparo, J.J.; Tokmakoff, A.; Fayer, M.D.; Skinner, J.L.

    2007-01-01

    Vibrational spectroscopy can provide important information about structure and dynamics in liquids. In the case of liquid water, this is particularly true for isotopically dilute HOD/D2O and HOD/H2O systems. Infrared and Raman line shapes for these systems were measured some time ago. Very recently, ultrafast three-pulse vibrational echo experiments have been performed on these systems, which provide new, exciting, and important dynamical benchmarks for liquid water. There has been tremendous theoretical effort expended on the development of classical simulation models for liquid water. These models have been parameterized from experimental structural and thermodynamic measurements. The goal of this paper is to determine if representative simulation models are consistent with steady-state, and especially with these new ultrafast, experiments. Such a comparison provides information about the accuracy of the dynamics of these simulation models. We perform this comparison using theoretical methods developed in previous papers, and calculate the experimental observables directly, without making the Condon and cumulant approximations, and taking into account molecular rotation, vibrational relaxation, and finite excitation pulses. On the whole, the simulation models do remarkably well; perhaps the best overall agreement with experiment comes from the SPC/E model.

  11. Group Membership, Group Change, and Intergroup Attitudes: A Recategorization Model Based on Cognitive Consistency Principles

    Science.gov (United States)

    Roth, Jenny; Steffens, Melanie C.; Vignoles, Vivian L.

    2018-01-01

    The present article introduces a model based on cognitive consistency principles to predict how new identities become integrated into the self-concept, with consequences for intergroup attitudes. The model specifies four concepts (self-concept, stereotypes, identification, and group compatibility) as associative connections. The model builds on two cognitive principles, balance–congruity and imbalance–dissonance, to predict identification with social groups that people currently belong to, belonged to in the past, or newly belong to. More precisely, the model suggests that the relative strength of self-group associations (i.e., identification) depends in part on the (in)compatibility of the different social groups. Combining insights into cognitive representation of knowledge, intergroup bias, and explicit/implicit attitude change, we further derive predictions for intergroup attitudes. We suggest that intergroup attitudes alter depending on the relative associative strength between the social groups and the self, which in turn is determined by the (in)compatibility between social groups. This model unifies existing models on the integration of social identities into the self-concept by suggesting that basic cognitive mechanisms play an important role in facilitating or hindering identity integration and thus contribute to reducing or increasing intergroup bias. PMID:29681878

  12. Group Membership, Group Change, and Intergroup Attitudes: A Recategorization Model Based on Cognitive Consistency Principles

    Directory of Open Access Journals (Sweden)

    Jenny Roth

    2018-04-01

    Full Text Available The present article introduces a model based on cognitive consistency principles to predict how new identities become integrated into the self-concept, with consequences for intergroup attitudes. The model specifies four concepts (self-concept, stereotypes, identification, and group compatibility) as associative connections. The model builds on two cognitive principles, balance–congruity and imbalance–dissonance, to predict identification with social groups that people currently belong to, belonged to in the past, or newly belong to. More precisely, the model suggests that the relative strength of self-group associations (i.e., identification) depends in part on the (in)compatibility of the different social groups. Combining insights into cognitive representation of knowledge, intergroup bias, and explicit/implicit attitude change, we further derive predictions for intergroup attitudes. We suggest that intergroup attitudes alter depending on the relative associative strength between the social groups and the self, which in turn is determined by the (in)compatibility between social groups. This model unifies existing models on the integration of social identities into the self-concept by suggesting that basic cognitive mechanisms play an important role in facilitating or hindering identity integration and thus contribute to reducing or increasing intergroup bias.

  13. Group Membership, Group Change, and Intergroup Attitudes: A Recategorization Model Based on Cognitive Consistency Principles.

    Science.gov (United States)

    Roth, Jenny; Steffens, Melanie C; Vignoles, Vivian L

    2018-01-01

    The present article introduces a model based on cognitive consistency principles to predict how new identities become integrated into the self-concept, with consequences for intergroup attitudes. The model specifies four concepts (self-concept, stereotypes, identification, and group compatibility) as associative connections. The model builds on two cognitive principles, balance-congruity and imbalance-dissonance, to predict identification with social groups that people currently belong to, belonged to in the past, or newly belong to. More precisely, the model suggests that the relative strength of self-group associations (i.e., identification) depends in part on the (in)compatibility of the different social groups. Combining insights into cognitive representation of knowledge, intergroup bias, and explicit/implicit attitude change, we further derive predictions for intergroup attitudes. We suggest that intergroup attitudes alter depending on the relative associative strength between the social groups and the self, which in turn is determined by the (in)compatibility between social groups. This model unifies existing models on the integration of social identities into the self-concept by suggesting that basic cognitive mechanisms play an important role in facilitating or hindering identity integration and thus contribute to reducing or increasing intergroup bias.

  14. A Time consistent model for monetary value of man-sievert

    International Nuclear Information System (INIS)

    Na, S.H.; Kim, Sun G.

    2008-01-01

    Full text: Performing a cost-benefit analysis to establish optimum levels of radiation protection under the ALARA principle, we introduce a discrete stepwise model to evaluate the man-sievert monetary value for Korea. The model formula, which is unique and country-specific, is composed of GDP, the nominal risk coefficient for cancer and hereditary effects, the aversion factor against radiation exposure, and the average life expectancy. Unlike previous research on alpha-value assessment, we show different alpha values optimized with respect to various ranges of individual dose, which would be more realistic and applicable to the radiation protection area. Employing the economically constant (real) term of GDP, we show the real values of man-sievert by year, which should be consistent in time-series comparison even under price level fluctuation. The GDP deflators of an economy have to be applied to measure one's own consistent value of radiation protection by year. In addition, we recommend that the concept of purchasing power parity be adopted if international comparison of alpha values in real terms is needed. Finally, we explain the way this stepwise model can be generalized simply to other countries without normalizing any country-specific factors. (author)

  15. Self-consistent nonlinear transmission line model of standing wave effects in a capacitive discharge

    International Nuclear Information System (INIS)

    Chabert, P.; Raimbault, J.L.; Rax, J.M.; Lieberman, M.A.

    2004-01-01

    It has been shown previously [Lieberman et al., Plasma Sources Sci. Technol. 11, 283 (2002)], using a non-self-consistent model based on solutions of Maxwell's equations, that several electromagnetic effects may compromise capacitive discharge uniformity. Among these, the standing wave effect dominates at low and moderate electron densities when the driving frequency is significantly greater than the usual 13.56 MHz. In the present work, two different global discharge models have been coupled to a transmission line model and used to obtain the self-consistent characteristics of the standing wave effect. An analytical solution for the wavelength λ was derived for the lossless case and compared to the numerical results. For typical plasma etching conditions (pressure 10-100 mTorr), a good approximation of the wavelength is λ/λ0 ≅ 40 V0^(1/10) l^(-1/2) f^(-2/5), where λ0 is the wavelength in vacuum, V0 is the rf voltage magnitude in volts at the discharge center, l is the electrode spacing in meters, and f is the driving frequency in hertz.
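
    The quoted scaling can be evaluated directly; a short sketch with illustrative operating values (the chosen voltage, gap, and frequency are not taken from the paper):

```python
def wavelength_ratio(V0, l, f):
    """Approximate lambda/lambda0 ~= 40 * V0**(1/10) * l**(-1/2) * f**(-2/5)
    for the standing-wave wavelength in a capacitive discharge
    (V0 in volts, l in meters, f in hertz)."""
    return 40.0 * V0 ** 0.1 * l ** -0.5 * f ** -0.4

# Illustrative values: 100 V at the discharge center, 3 cm gap, 100 MHz.
ratio = wavelength_ratio(V0=100.0, l=0.03, f=100e6)
lambda0 = 3e8 / 100e6  # vacuum wavelength at 100 MHz, in meters
print(f"lambda/lambda0 = {ratio:.3f}, lambda = {ratio * lambda0:.2f} m")
```

    A ratio well below one shows the in-plasma wavelength shortening that makes standing-wave nonuniformity a concern at VHF drive frequencies.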

  16. Validity test and its consistency in the construction of patient loyalty model

    Science.gov (United States)

    Yanuar, Ferra

    2016-04-01

    The main objective of this present study is to demonstrate the estimation of validity values and their consistency based on a structural equation model. The estimation method was then applied to empirical data on the construction of a patient loyalty model. In the hypothesized model, service quality, patient satisfaction and patient loyalty were determined simultaneously, with each factor measured by several indicator variables. The respondents involved in this study were patients who had received healthcare at Puskesmas in Padang, West Sumatera. All 394 respondents who had complete information were included in the analysis. This study found that each construct (service quality, patient satisfaction and patient loyalty) was valid. This means that all hypothesized indicator variables were significant in measuring their corresponding latent variables. Service quality is most strongly measured by tangibles, patient satisfaction by satisfaction with service, and patient loyalty by good service quality. Meanwhile, in the structural equations, this study found that patient loyalty was affected positively and directly by patient satisfaction. Service quality affected patient loyalty indirectly, with patient satisfaction as a mediator variable between the two latent variables. Both structural equations were also valid. This study also proved that the validity values obtained here were consistent, based on a simulation study using a bootstrap approach.
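
    The bootstrap consistency check mentioned at the end can be sketched generically: resample respondents with replacement and examine the stability of an estimate. The synthetic data and the correlation-based "loading" proxy below are illustrative assumptions, not the authors' SEM analysis:

```python
import numpy as np

rng = np.random.default_rng(2)

# Illustrative stand-in for indicators of one latent construct:
# two correlated indicator scores for n = 394 respondents (as in the study).
n = 394
latent = rng.standard_normal(n)
x1 = 0.8 * latent + 0.6 * rng.standard_normal(n)
x2 = 0.7 * latent + 0.7 * rng.standard_normal(n)
data = np.column_stack([x1, x2])

def loading_proxy(d):
    """Correlation between the two indicators, a crude proxy for a loading."""
    return np.corrcoef(d[:, 0], d[:, 1])[0, 1]

# Bootstrap: resample respondents with replacement, re-estimate each time.
boot = np.array([
    loading_proxy(data[rng.integers(0, n, n)]) for _ in range(2000)
])
lo, hi = np.percentile(boot, [2.5, 97.5])
print(f"estimate {loading_proxy(data):.2f}, 95% CI [{lo:.2f}, {hi:.2f}]")
```

    A narrow bootstrap interval around the point estimate is the kind of evidence of consistency the abstract refers to.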

  17. A Network-Individual-Resource Model for HIV Prevention

    Science.gov (United States)

    Johnson, Blair T.; Redding, Colleen A.; DiClemente, Ralph J.; Mustanski, Brian S.; Dodge, Brian M.; Sheeran, Paschal; Warren, Michelle R.; Zimmerman, Rick S.; Fisher, William A.; Conner, Mark T.; Carey, Michael P.; Fisher, Jeffrey D.; Stall, Ronald D.; Fishbein, Martin

    2014-01-01

    HIV is transmitted through dyadic exchanges of individuals linked in transitory or permanent networks of varying sizes. To optimize prevention efficacy, a complementary theoretical perspective that bridges key individual level elements with important network elements can be a foundation for developing and implementing HIV interventions with outcomes that are more sustainable over time and have greater dissemination potential. Toward that end, we introduce a Network-Individual-Resource (NIR) model for HIV prevention that recognizes how exchanges of resources between individuals and their networks underlies and sustains HIV-risk behaviors. Individual behavior change for HIV prevention, then, may be dependent on increasing the supportiveness of that individual's relevant networks for such change. Among other implications, an NIR model predicts that the success of prevention efforts depends on whether the prevention efforts (1) prompt behavior changes that can be sustained by the resources the individual or their networks possess; (2) meet individual and network needs and are consistent with the individual's current situation/developmental stage; (3) are trusted and valued; and (4) target high HIV-prevalence networks. PMID:20862606

  18. Dynamic consistency of leader/fringe models of exhaustible resource markets

    International Nuclear Information System (INIS)

    Pelot, R.P.

    1990-01-01

    A dynamic feedback pricing model is developed for a leader/fringe supply market of exhaustible resources. The discrete game optimization model includes marginal costs which may be quadratic functions of cumulative production, a linear demand curve and variable length periods. The multiperiod formulation is based on the nesting of later periods' Kuhn-Tucker conditions into earlier periods' optimizations. This procedure leads to dynamically consistent solutions where the leader's strategy is credible as he has no incentive to alter his original plan at some later stage. A static leader-fringe model may yield multiple local optima. This can result in the leader forcing the fringe to produce at their capacity constraint, which would otherwise be non-binding if it is greater than the fringe's unconstrained optimal production rate. Conditions are developed where the optimal solution occurs at a corner where constraints meet, of which limit pricing is a special case. The 2-period leader/fringe feedback model is compared to the computationally simpler open-loop model. Under certain conditions, the open-loop model yields the same result as the feedback model. A multiperiod feedback model of the world oil market with OPEC as price-leader and the remaining world oil suppliers comprising the fringe is compared with the open-loop solution. The optimal profits and prices are very similar, but large differences in production rates may occur. The exhaustion date predicted by the open-loop model may also differ from the feedback outcome. Some numerical tests result in non-contiguous production periods for a player or limit pricing phases. 85 refs., 60 figs., 30 tabs

  19. Self-consistent Bulge/Disk/Halo Galaxy Dynamical Modeling Using Integral Field Kinematics

    Science.gov (United States)

    Taranu, D. S.; Obreschkow, D.; Dubinski, J. J.; Fogarty, L. M. R.; van de Sande, J.; Catinella, B.; Cortese, L.; Moffett, A.; Robotham, A. S. G.; Allen, J. T.; Bland-Hawthorn, J.; Bryant, J. J.; Colless, M.; Croom, S. M.; D'Eugenio, F.; Davies, R. L.; Drinkwater, M. J.; Driver, S. P.; Goodwin, M.; Konstantopoulos, I. S.; Lawrence, J. S.; López-Sánchez, Á. R.; Lorente, N. P. F.; Medling, A. M.; Mould, J. R.; Owers, M. S.; Power, C.; Richards, S. N.; Tonini, C.

    2017-11-01

    We introduce a method for modeling disk galaxies designed to take full advantage of data from integral field spectroscopy (IFS). The method fits equilibrium models to simultaneously reproduce the surface brightness, rotation, and velocity dispersion profiles of a galaxy. The models are fully self-consistent 6D distribution functions for a galaxy with a Sérsic profile stellar bulge, exponential disk, and parametric dark-matter halo, generated by an updated version of GalactICS. By creating realistic flux-weighted maps of the kinematic moments (flux, mean velocity, and dispersion), we simultaneously fit photometric and spectroscopic data using both maximum-likelihood and Bayesian (MCMC) techniques. We apply the method to a GAMA spiral galaxy (G79635) with kinematics from the SAMI Galaxy Survey and deep g- and r-band photometry from the VST-KiDS survey, comparing parameter constraints with those from traditional 2D bulge-disk decomposition. Our method returns broadly consistent results for shared parameters while constraining the mass-to-light ratios of stellar components and reproducing the H I-inferred circular velocity well beyond the limits of the SAMI data. Although the method is tailored for fitting integral field kinematic data, it can use other dynamical constraints like central fiber dispersions and H I circular velocities, and is well-suited for modeling galaxies with a combination of deep imaging and H I and/or optical spectra (resolved or otherwise). Our implementation (MagRite) is computationally efficient and can generate well-resolved models and kinematic maps in under a minute on modern processors.

  20. A self-consistent model of the three-phase interstellar medium in disk galaxies

    International Nuclear Information System (INIS)

    Wang, Z.

    1989-01-01

    In the present study the author analyzes a number of physical processes concerning velocity and spatial distributions, ionization structure, pressure variation, mass and energy balance, and equation of state of the diffuse interstellar gas in a three phase model. He also considers the effects of this model on the formation of molecular clouds and the evolution of disk galaxies. The primary purpose is to incorporate self-consistently the interstellar conditions in a typical late-type galaxy, and to relate these to various observed large-scale phenomena. He models idealized situations both analytically and numerically, and compares the results with observational data of the Milky Way Galaxy and other nearby disk galaxies. Several main conclusions of this study are: (1) the highly ionized gas found in the lower Galactic halo is shown to be consistent with a model in which the gas is photoionized by the diffuse ultraviolet radiation; (2) in a quasi-static and self-regulatory configuration, the photoelectric effects of interstellar grains are primarily responsible for heating the cold (T ≅ 100K) gas; the warm (T ≅ 8,000K) gas may be heated by supernova remnants and other mechanisms; (3) the large-scale atomic and molecular gas distributions in a sample of 15 disk galaxies can be well explained if molecular cloud formation and star formation follow a modified Schmidt Law; a scaling law for the radial gas profiles is proposed based on this model, and it is shown to be applicable to the nearby late-type galaxies where radio mapping data is available; for disk galaxies of earlier type, the effect of their massive central bulges may have to be taken into account

  1. RPA method based on the self-consistent cranking model for 168Er and 158Dy

    International Nuclear Information System (INIS)

    Kvasil, J.; Cwiok, S.; Chariev, M.M.; Choriev, B.

    1983-01-01

    The low-lying nuclear states in 168Er and 158Dy are analysed within the random phase approximation (RPA) method based on the self-consistent cranking model (SCCM). The moment of inertia, the value of the chemical potential, and the strength constant k1 have been obtained from the symmetry condition. The pairing strength constants Gτ have been determined from the experimental values of neutron and proton pairing energies for nonrotating nuclei. Quite good agreement with the experimental energies of positive-parity states was obtained without introducing the two-phonon vibrational states

  2. Quest for consistent modelling of statistical decay of the compound nucleus

    Science.gov (United States)

    Banerjee, Tathagata; Nath, S.; Pal, Santanu

    2018-01-01

    A statistical model description of heavy ion induced fusion-fission reactions is presented where shell effects, collective enhancement of level density, tilting away effect of compound nuclear spin and dissipation are included. It is shown that the inclusion of all these effects provides a consistent picture of fission where fission hindrance is required to explain the experimental values of both pre-scission neutron multiplicities and evaporation residue cross-sections in contrast to some of the earlier works where a fission hindrance is required for pre-scission neutrons but a fission enhancement for evaporation residue cross-sections.

  3. A self-consistent model for thermodynamics of multicomponent solid solutions

    International Nuclear Information System (INIS)

    Svoboda, J.; Fischer, F.D.

    2016-01-01

    The self-consistent concept recently published in this journal (108, 27–30, 2015) is extended from a binary to a multicomponent system. This is possible by exploiting the trapping concept as basis for including the interaction of atoms in terms of pairs (e.g. A–A, B–B, C–C…) and couples (e.g. A–B, B–C, …) in a multicomponent system with A as solvent and B, C, … as dilute solutes. The model results in a formulation of Gibbs-energy, which can be minimized. Examples show that the couple and pair formation may influence the equilibrium Gibbs energy markedly.

  4. Modeling trust context in networks

    CERN Document Server

    Adali, Sibel

    2013-01-01

    We make complex decisions every day, requiring trust in many different entities for different reasons. These decisions are not made by combining many isolated trust evaluations. Many interlocking factors play a role, each dynamically impacting the others. In this brief, 'trust context' is defined as the system level description of how the trust evaluation process unfolds. Networks today are part of almost all human activity, supporting and shaping it. Applications increasingly incorporate new interdependencies and new trust contexts. Social networks connect people and organizations throughout

  5. Mathematical model of highways network optimization

    Science.gov (United States)

    Sakhapov, R. L.; Nikolaeva, R. V.; Gatiyatullin, M. H.; Makhmutov, M. M.

    2017-12-01

    The article deals with the issue of highway network design. Studies show that the main requirement of road transport for the road network is to ensure the realization of all the transport links served by it, at the least possible cost. The goal of optimizing the network of highways is to increase the efficiency of transport. It is necessary to take into account a large number of factors whose impact on the road network is difficult to quantify and qualify. In this paper, we propose building an optimal variant for locating the road network on the basis of a mathematical model. The article defines the criteria of optimality and objective functions that reflect the requirements for the road network. The condition most fully satisfying optimality is the minimization of road and transport costs. We adopted this indicator as the criterion of optimality in the economic-mathematical model of a network of highways. Studies have shown that each settlement point in the optimal road network is connected with all other corresponding points in the directions providing the least financial costs necessary to move passengers and cargo from this point to the other corresponding points. The article presents general principles for constructing an optimal network of roads.
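
    The optimality criterion described above, minimizing road construction plus transport costs, can be illustrated on a toy network: the objective sums a build cost over the chosen links and a travel cost over shortest paths for each origin-destination demand. All names, rates, and the tiny network are illustrative, not the article's model:

```python
import heapq

def shortest_paths(graph, src):
    """Dijkstra from src over an adjacency dict {node: [(nbr, length), ...]}."""
    dist = {src: 0.0}
    pq = [(0.0, src)]
    while pq:
        d, u = heapq.heappop(pq)
        if d > dist.get(u, float("inf")):
            continue
        for v, w in graph.get(u, []):
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v] = nd
                heapq.heappush(pq, (nd, v))
    return dist

def total_cost(edges, demands, build_rate=2.0):
    """Objective: construction cost plus transport cost over all O-D demands."""
    graph = {}
    for u, v, length in edges:
        graph.setdefault(u, []).append((v, length))
        graph.setdefault(v, []).append((u, length))
    build = build_rate * sum(length for _, _, length in edges)
    haul = sum(q * shortest_paths(graph, a)[b] for a, b, q in demands)
    return build + haul

# Toy example: three towns; demand of 10 units between A and C.
demands = [("A", "C", 10.0)]
sparse = [("A", "B", 1.0), ("B", "C", 1.0)]   # no direct A-C road
dense = sparse + [("A", "C", 1.5)]            # add a direct link
print(total_cost(sparse, demands), total_cost(dense, demands))
```

    Here the direct A-C link costs more to build but lowers total cost, which is exactly the trade-off the optimality criterion formalizes.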

  6. A Self-Consistent Fault Slip Model for the 2011 Tohoku Earthquake and Tsunami

    Science.gov (United States)

    Yamazaki, Yoshiki; Cheung, Kwok Fai; Lay, Thorne

    2018-02-01

    The unprecedented geophysical and hydrographic data sets from the 2011 Tohoku earthquake and tsunami have facilitated numerous modeling and inversion analyses for a wide range of dislocation models. Significant uncertainties remain in the slip distribution as well as the possible contribution of tsunami excitation from submarine slumping or anelastic wedge deformation. We seek a self-consistent model for the primary teleseismic and tsunami observations through an iterative approach that begins with downsampling of a finite fault model inverted from global seismic records. Direct adjustment of the fault displacement guided by high-resolution forward modeling of near-field tsunami waveform and runup measurements improves the features that are not satisfactorily accounted for by the seismic wave inversion. The results show acute sensitivity of the runup to impulsive tsunami waves generated by near-trench slip. The adjusted finite fault model is able to reproduce the DART records across the Pacific Ocean in forward modeling of the far-field tsunami as well as the global seismic records through a finer-scale subfault moment- and rake-constrained inversion, thereby validating its ability to account for the tsunami and teleseismic observations without requiring an exotic source. The upsampled final model gives reasonably good fits to onshore and offshore geodetic observations albeit early after-slip effects and wedge faulting that cannot be reliably accounted for. The large predicted slip of over 20 m at shallow depth extending northward to 39.7°N indicates extensive rerupture and reduced seismic hazard of the 1896 tsunami earthquake zone, as inferred to varying extents by several recent joint and tsunami-only inversions.

  7. Comparison of squashing and self-consistent input-output models of quantum feedback

    Science.gov (United States)

    Peřinová, V.; Lukš, A.; Křepelka, J.

    2018-03-01

    The paper (Yanagisawa and Hope, 2010) opens with two ways of analysing a measurement-based quantum feedback. The scheme of the feedback includes, along with the homodyne detector, a modulator and a beamsplitter, which does not enable one to extract the nonclassical field. In the present scheme, the beamsplitter is replaced by the quantum noise evader, which makes it possible to extract the nonclassical field. We re-approach the comparison of two models related to the same scheme. The first one admits that in the feedback loop, unusual commutation relations hold between the photon annihilation and creation operators. As a consequence, in the feedback loop, squashing of the light occurs. In the second one, the description arrives at the feedback loop via unitary transformations. But it is obvious that the unitary transformation which describes the modulator changes even the annihilation operator of the mode which passes by the modulator, which is not natural. The first model could be called the "squashing model" and the second one the "self-consistent model". Although the predictions of the two models differ only a little and both ways of analysis have their advantages, they also have their drawbacks, and further investigation is possible.

  8. A comprehensive, consistent and systematic mathematical model of PEM fuel cells

    International Nuclear Information System (INIS)

    Baschuk, J.J.; Li Xianguo

    2009-01-01

    This paper presents a comprehensive, consistent and systematic mathematical model for PEM fuel cells that can be used as the general formulation for the simulation and analysis of PEM fuel cells. As an illustration, the model is applied to an isothermal, steady-state, two-dimensional PEM fuel cell. Water is assumed to be present either in the gas phase or as a liquid phase in the pores of the polymer electrolyte. The model includes the transport of gas in the gas flow channels, electrode backing and catalyst layers; the transport of water and hydronium in the polymer electrolyte of the catalyst and polymer electrolyte layers; and the transport of electrical current in the solid phase. Water and ion transport in the polymer electrolyte was modeled using the generalized Stefan-Maxwell equations, based on non-equilibrium thermodynamics. Model simulations show that the bulk, convective gas velocity facilitates hydrogen transport from the gas flow channels to the anode catalyst layers, but inhibits oxygen transport. While some of the water required by the anode is supplied by the water produced in the cathode, the majority of the water must be supplied by the anode gas phase, making operation with fully humidified reactants necessary. The length of the gas flow channel has a significant effect on the current production of the PEM fuel cell, with a longer channel having lower performance relative to a shorter one. This lower performance is caused by a greater variation in water content along the longer channel.

  9. Synaptic model for spontaneous activity in developing networks

    DEFF Research Database (Denmark)

    Lerchner, Alexander; Rinzel, J.

    2005-01-01

    Spontaneous rhythmic activity occurs in many developing neural networks. The activity in these hyperexcitable networks is comprised of recurring "episodes" consisting of "cycles" of high activity that alternate with "silent phases" with little or no activity. We introduce a new model of synaptic dynamics that takes into account that only a fraction of the vesicles stored in a synaptic terminal is readily available for release. We show that our model can reproduce spontaneous rhythmic activity with the same general features as observed in experiments, including a positive correlation between...

  10. Consistent modelling of wind turbine noise propagation from source to receiver

    DEFF Research Database (Denmark)

    Barlas, Emre; Zhu, Wei Jun; Shen, Wen Zhong

    2017-01-01

    The unsteady nature of wind turbine noise is a major reason for annoyance. The variation of far-field sound pressure levels is not only caused by the continuous change in wind turbine noise source levels but also by the unsteady flow field and the ground characteristics between the turbine and receiver. To take these phenomena into account, a consistent numerical technique that models the sound propagation from the source to receiver is developed. Large eddy simulation with an actuator line technique is employed for the flow modelling and the corresponding flow fields are used to simulate sound generation and propagation. Long-range noise propagation of a 5 MW wind turbine is investigated. Sound pressure level time series evaluated at the source time are studied for varying wind speeds, surface roughness, and ground impedances within a 2000 m radius from the turbine.

  11. Self-consistent model for pulsed direct-current N2 glow discharge

    International Nuclear Information System (INIS)

    Liu Chengsen

    2005-01-01

    A self-consistent analysis of a pulsed direct-current (DC) N2 glow discharge is presented. The model is based on a numerical solution of the continuity equations for electrons and ions coupled with Poisson's equation. The spatial-temporal variations of the ionic and electronic densities and the electric field are obtained. The electric field structure exhibits all the characteristic regions of a typical glow discharge (the cathode fall, the negative glow, and the positive column). Current-voltage characteristics of the discharge can be obtained from the model. The calculated current-voltage results, using a constant secondary electron emission coefficient at a gas pressure of 133.32 Pa, are in reasonable agreement with experiment. (authors)
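
    The field-solver half of such a model reduces, in one dimension, to a Poisson solve for the electric potential given the net charge density. A minimal sketch on an illustrative grid with invented boundary values and zero net charge (the `solve_poisson_1d` helper is hypothetical, not from the paper):

```python
import numpy as np

def solve_poisson_1d(rho_over_eps, dx, v_left, v_right):
    """Solve d^2 V / dx^2 = -rho/eps0 on a uniform grid with
    Dirichlet boundary potentials v_left and v_right."""
    n = rho_over_eps.size
    # Standard second-order central-difference Laplacian
    A = (np.diag(-2.0 * np.ones(n)) + np.diag(np.ones(n - 1), 1)
         + np.diag(np.ones(n - 1), -1)) / dx**2
    b = -rho_over_eps.copy()
    b[0] -= v_left / dx**2      # fold boundary values into the RHS
    b[-1] -= v_right / dx**2
    V = np.linalg.solve(A, b)
    # Electric field E = -dV/dx, including the boundary points
    E = -np.gradient(np.concatenate(([v_left], V, [v_right])), dx)
    return V, E

# Cathode at -300 V, anode grounded; zero charge gives a linear potential
V, E = solve_poisson_1d(np.zeros(99), 0.01, -300.0, 0.0)
```

In the self-consistent model this solve would alternate with updates of the electron and ion continuity equations, which supply the charge density at each time step.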

  12. A self-consistent model for polycrystal deformation. Description and implementation

    International Nuclear Information System (INIS)

    Clausen, B.; Lorentzen, T.

    1997-04-01

    This report is a manual for the ANSI C implementation of an incremental elastic-plastic rate-insensitive self-consistent polycrystal deformation model based on (Hutchinson 1970). The model is furthermore described in the Ph.D. thesis by Clausen (Clausen 1997). The structure of the main program, sc_model.c, and its subroutines are described with flow charts. Likewise, the pre-processor, sc_ini.c, is described with a flow chart. Default values of all the input parameters are given in the pre-processor, but the user is able to select from other pre-defined values or enter new values. A sample calculation is made, the results are presented as plots, and examples of the output files are shown. (au) 4 tabs., 28 ills., 17 refs.

  13. A self-consistent model for polycrystal deformation. Description and implementation

    Energy Technology Data Exchange (ETDEWEB)

    Clausen, B.; Lorentzen, T.

    1997-04-01

    This report is a manual for the ANSI C implementation of an incremental elastic-plastic rate-insensitive self-consistent polycrystal deformation model based on (Hutchinson 1970). The model is furthermore described in the Ph.D. thesis by Clausen (Clausen 1997). The structure of the main program, sc_model.c, and its subroutines are described with flow charts. Likewise, the pre-processor, sc_ini.c, is described with a flow chart. Default values of all the input parameters are given in the pre-processor, but the user is able to select from other pre-defined values or enter new values. A sample calculation is made, the results are presented as plots, and examples of the output files are shown. (au) 4 tabs., 28 ills., 17 refs.

  14. Graphical Model Theory for Wireless Sensor Networks

    International Nuclear Information System (INIS)

    Davis, William B.

    2002-01-01

    Information processing in sensor networks, with many small processors, demands a theory of computation that allows the minimization of processing effort and the distribution of this effort throughout the network. Graphical model theory provides a probabilistic theory of computation that explicitly addresses complexity and decentralization for optimizing network computation. The junction tree algorithm, for decentralized inference on graphical probability models, can be instantiated in a variety of applications useful for wireless sensor networks, including: sensor validation and fusion; data compression and channel coding; expert systems with decentralized data structures and efficient local queries; and pattern classification and machine learning. Graphical models for these applications are sketched, and a model of dynamic sensor validation and fusion is presented in more depth to illustrate the junction tree algorithm.
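
    As the record suggests, sensor validation and fusion reduce to local message passing. A minimal two-clique instance in the junction-tree spirit, where a hidden state and one noisy sensor form the smallest possible tree (the state space, probabilities, and `posterior_T` helper are invented for illustration):

```python
import numpy as np

# Hidden temperature state T in {low, high} and a noisy binary sensor S
prior_T = np.array([0.5, 0.5])          # P(T)
p_S_given_T = np.array([[0.9, 0.1],     # P(S | T = low)
                        [0.2, 0.8]])    # P(S | T = high)

def posterior_T(s_index):
    """Absorb the sensor's likelihood message into the prior and
    normalize: a single message pass on a two-clique tree."""
    post = prior_T * p_S_given_T[:, s_index]
    return post / post.sum()

post = posterior_T(1)   # sensor reports "high"
```

On a larger sensor network the same local multiply-and-normalize step is repeated along the edges of the junction tree, which is what makes the computation decentralizable.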

  15. Modeling Network Traffic in Wavelet Domain

    Directory of Open Access Journals (Sweden)

    Sheng Ma

    2004-12-01

    Full Text Available This work discovers that although network traffic has complicated short- and long-range temporal dependence, the corresponding wavelet coefficients are no longer long-range dependent. Therefore, a "short-range" dependent process can be used to model network traffic in the wavelet domain. Both independent and Markov models are investigated. Theoretical analysis shows that the independent wavelet model is sufficiently accurate in terms of the buffer overflow probability for Fractional Gaussian Noise traffic. Any model which captures additional correlations in the wavelet domain only improves the performance marginally. The independent wavelet model is then used as a unified approach to model network traffic including VBR MPEG video and Ethernet data. The computational complexity is O(N) for developing such wavelet models and generating synthesized traffic of length N, which is among the lowest attained.
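
    The independence of the wavelet coefficients makes synthesis straightforward: draw each detail coefficient independently and invert the transform level by level, which costs O(N) in total. A sketch using the Haar inverse transform, with an assumed fGn-like per-octave variance scaling (the `synthesize_traffic` helper and its parameters are illustrative, not the paper's calibration):

```python
import numpy as np

def synthesize_traffic(n_levels, hurst=0.8, seed=0):
    """Generate a trace of length 2**n_levels by sampling independent
    Haar wavelet coefficients and inverting the transform.
    The per-octave variance scaling is an assumed fGn-like choice."""
    rng = np.random.default_rng(seed)
    signal = np.array([0.0])                   # coarsest approximation
    for j in range(n_levels):                  # coarse -> fine
        sigma = 2.0 ** ((n_levels - j) * (2.0 * hurst - 1.0) / 2.0)
        d = rng.normal(0.0, sigma, size=signal.size)
        x = np.empty(2 * signal.size)
        x[0::2] = (signal + d) / np.sqrt(2.0)  # inverse Haar butterfly
        x[1::2] = (signal - d) / np.sqrt(2.0)
        signal = x
    return signal

trace = synthesize_traffic(10)   # total work sums to ~2N, i.e. O(N)
```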

  16. Sparsity in Model Gene Regulatory Networks

    International Nuclear Information System (INIS)

    Zagorski, M.

    2011-01-01

    We propose a gene regulatory network model which incorporates the microscopic interactions between genes and transcription factors. In particular, the gene's expression level is determined by deterministic synchronous dynamics with contributions from excitatory interactions. We study the structure of networks that have a particular "function" and are subject to natural selection pressure. The question of network robustness against point mutations is addressed, and we conclude that only a small part of the connections, defined as "essential" for the cell's existence, is fragile. Additionally, the obtained networks are sparse with narrow in-degree and broad out-degree distributions, properties well known from experimental studies of biological regulatory networks. Furthermore, during the sampling procedure we observe that significantly different genotypes can emerge under mutation-selection balance. All the preceding features hold for model parameters which lie in the experimentally relevant range. (author)
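
    A minimal sketch of deterministic synchronous dynamics of the kind described, assuming a simple positive-input threshold rule (the interaction matrix and the update rule's details are hypothetical, not the paper's exact model):

```python
import numpy as np

def synchronous_update(W, state, steps=1):
    """Deterministic synchronous dynamics: gene i is expressed at the
    next step iff its summed regulatory input is positive. This
    threshold rule is a common simplification; the paper's exact
    update may differ."""
    s = np.asarray(state)
    for _ in range(steps):
        s = (W @ s > 0).astype(int)
    return s

# Hypothetical 3-gene cycle: W[i, j] is the influence of gene j on gene i
W = np.array([[0, 1, 0],
              [0, 0, 1],
              [1, 0, 0]])
state = np.array([1, 0, 0])
after_one = synchronous_update(W, state)        # activation moves along the cycle
after_three = synchronous_update(W, state, 3)   # back to the initial pattern
```

Point mutations can then be modelled as flipping single entries of `W` and checking whether the trajectory, i.e. the network's "function", is preserved.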

  17. Self-Consistent Generation of Primordial Continental Crust in Global Mantle Convection Models

    Science.gov (United States)

    Jain, C.; Rozel, A.; Tackley, P. J.

    2017-12-01

    We present the generation of primordial continental crust (TTG rocks) using self-consistent and evolutionary thermochemical mantle convection models (Tackley, PEPI 2008). Numerical modelling commonly shows that mantle convection and continents have strong feedbacks on each other. However in most studies, continents are inserted a priori while basaltic (oceanic) crust is generated self-consistently in some models (Lourenco et al., EPSL 2016). Formation of primordial continental crust happened by fractional melting and crystallisation in episodes of relatively rapid growth from late Archean to late Proterozoic eras (3-1 Ga) (Hawkesworth & Kemp, Nature 2006) and it has also been linked to the onset of plate tectonics around 3 Ga. It takes several stages of differentiation to generate Tonalite-Trondhjemite-Granodiorite (TTG) rocks or proto-continents. First, the basaltic magma is extracted from the pyrolitic mantle which is both erupted at the surface and intruded at the base of the crust. Second, it goes through eclogitic transformation and then partially melts to form TTGs (Rudnick, Nature 1995; Herzberg & Rudnick, Lithos 2012). TTGs account for the majority of the Archean continental crust. Based on the melting conditions proposed by Moyen (Lithos 2011), the feasibility of generating TTG rocks in numerical simulations has already been demonstrated by Rozel et al. (Nature, 2017). Here, we have developed the code further by parameterising TTG formation. We vary the ratio of intrusive (plutonic) and extrusive (volcanic) magmatism (Crisp, Volcanol. Geotherm. 1984) to study the relative volumes of three petrological TTG compositions as reported from field data (Moyen, Lithos 2011). Furthermore, we systematically vary parameters such as friction coefficient, initial core temperature and composition-dependent viscosity to investigate the global tectonic regime of early Earth. Continental crust can also be destroyed by subduction or delamination. We will investigate

  18. Self-consistent modeling of plasma response to impurity spreading from intense localized source

    International Nuclear Information System (INIS)

    Koltunov, Mikhail

    2012-07-01

    Non-hydrogen impurities unavoidably exist in the hot plasmas of present fusion devices. They enter the plasma intrinsically, through its interaction with the wall of the vacuum vessel, and are also seeded deliberately for various purposes. Normally, the spots where injected particles enter the plasma are much smaller than its total surface. Under such conditions one has to expect a significant modification of local plasma parameters through various physical mechanisms, which, in turn, affect the impurity spreading. Self-consistent modeling of the interaction between impurity and plasma is, therefore, not possible with linear approaches. A model based on the fluid description of electrons, main and impurity ions, and taking into account plasma quasi-neutrality, Coulomb collisions of background and impurity charged particles, radiation losses, and particle transport to bounding surfaces, is elaborated in this work. To describe the impurity spreading and the plasma response self-consistently, fluid equations for the particle, momentum and energy balances of the various plasma components are solved by reducing them to ordinary differential equations for the time evolution of several parameters characterizing the solution in its principal details: the magnitudes of plasma density and plasma temperatures in the regions of impurity localization and the spatial scales of these regions. The results of calculations for plasma conditions typical of tokamak experiments with impurity injection are presented. A new mechanism for the condensation phenomenon and the formation of cold dense plasma structures is proposed.

  19. Towards a consistent geochemical model for prediction of uranium(VI) removal from groundwater by ferrihydrite

    International Nuclear Information System (INIS)

    Gustafsson, Jon Petter; Daessman, Ellinor; Baeckstroem, Mattias

    2009-01-01

    Uranium(VI), which is often elevated in granitoidic groundwaters, is known to adsorb strongly to Fe (hydr)oxides under certain conditions. This process can be used in water treatment to remove U(VI). To develop a consistent geochemical model for U(VI) adsorption to ferrihydrite, batch experiments were performed and previous data sets reviewed to optimize a set of surface complexation constants using the 3-plane CD-MUSIC model. To consider the effect of dissolved organic matter (DOM) on U(VI) speciation, new parameters for the Stockholm Humic Model (SHM) were optimized using previously published data. The model, which was constrained from available extended X-ray absorption fine structure (EXAFS) spectroscopy evidence, fitted the data well when the surface sites were divided into low- and high-affinity binding sites. Application of the model concept to other published data sets revealed differences in the reactivity of different ferrihydrites towards U(VI). Use of the optimized SHM parameters for U(VI)-DOM complexation showed that this process is important for U(VI) speciation at low pH. However, in neutral to alkaline waters with substantial carbonate present, Ca-U-CO3 complexes predominate. The calibrated geochemical model was used to simulate U(VI) adsorption to ferrihydrite for a hypothetical groundwater in the presence of several competitive ions. The results showed that U(VI) adsorption was strong between pH 5 and 8. Even near the calcite saturation limit, where U(VI) adsorption was weakest according to the model, the adsorption percentage was predicted to be >80%. Hence U(VI) adsorption to ferrihydrite-containing sorbents may be used as a method to bring U(VI) concentrations down to acceptable levels in groundwater.

  20. Thermodynamically Consistent Algorithms for the Solution of Phase-Field Models

    KAUST Repository

    Vignal, Philippe

    2016-02-11

    Phase-field models are emerging as a promising strategy to simulate interfacial phenomena. Rather than tracking interfaces explicitly as done in sharp interface descriptions, these models use a diffuse order parameter to monitor interfaces implicitly. This implicit description, as well as solid physical and mathematical footings, allow phase-field models to overcome problems found by predecessors. Nonetheless, the method has significant drawbacks. The phase-field framework relies on the solution of high-order, nonlinear partial differential equations. Solving these equations entails a considerable computational cost, so finding efficient strategies to handle them is important. Also, standard discretization strategies can many times lead to incorrect solutions. This happens because, for numerical solutions to phase-field equations to be valid, physical conditions such as mass conservation and free energy monotonicity need to be guaranteed. In this work, we focus on the development of thermodynamically consistent algorithms for time integration of phase-field models. The first part of this thesis focuses on an energy-stable numerical strategy developed for the phase-field crystal equation. This model was put forward to model microstructure evolution. The algorithm developed conserves mass, guarantees energy stability and is second order accurate in time. The second part of the thesis presents two numerical schemes that generalize literature regarding energy-stable methods for conserved and non-conserved phase-field models. The time discretization strategies can conserve mass if needed, are energy-stable, and second order accurate in time. We also develop an adaptive time-stepping strategy, which can be applied to any second-order accurate scheme. This time-adaptive strategy relies on a backward approximation to give an accurate error estimator. The spatial discretization, in both parts, relies on a mixed finite element formulation and isogeometric analysis. The codes are

  1. A complex network based model for detecting isolated communities in water distribution networks

    Science.gov (United States)

    Sheng, Nan; Jia, Youwei; Xu, Zhao; Ho, Siu-Lau; Wai Kan, Chi

    2013-12-01

    A water distribution network (WDN) is a typical real-world complex network of major infrastructure that plays an important role in people's daily lives. In this paper, we explore the formation of isolated communities in WDNs based on complex network theory. A graph-algebraic model is proposed to effectively detect potential communities arising from pipeline failures. This model can properly illustrate the connectivity and evolution of a WDN during different stages of contingency events, and identify emerging isolated communities through spectral analysis of the Laplacian matrix. A case study on a practical urban WDN in China is conducted, and the consistency between the simulation results and the historical data is reported to showcase the feasibility and effectiveness of the proposed model.
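
    The spectral test described above follows from a standard fact: the number of connected components of a graph equals the multiplicity of the zero eigenvalue of its Laplacian L = D - A. A sketch on a hypothetical toy network (not the case-study WDN):

```python
import numpy as np

def n_communities(edges, n_nodes):
    """Count connected components as the multiplicity of the zero
    eigenvalue of the graph Laplacian L = D - A."""
    A = np.zeros((n_nodes, n_nodes))
    for i, j in edges:
        A[i, j] = A[j, i] = 1.0          # undirected pipe
    L = np.diag(A.sum(axis=1)) - A
    eigvals = np.linalg.eigvalsh(L)      # symmetric, ascending order
    return int(np.sum(eigvals < 1e-9))

# 5-node toy network: a ring 0-1-2-3-0 with a spur pipe 3-4
pipes = [(0, 1), (1, 2), (2, 3), (3, 0), (3, 4)]
intact = n_communities(pipes, 5)                              # one component
failed = n_communities([p for p in pipes if p != (3, 4)], 5)  # node 4 cut off
```

Re-running the count after removing failed pipes, as in the paper's contingency stages, immediately reveals any newly isolated community.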

  2. Super capacitor modeling with artificial neural network (ANN)

    Energy Technology Data Exchange (ETDEWEB)

    Marie-Francoise, J.N.; Gualous, H.; Berthon, A. [Universite de Franche-Comte, Lab. en Electronique, Electrotechnique et Systemes (L2ES), UTBM, INRETS (LRE T31) 90 - Belfort (France)

    2004-07-01

    This paper presents super-capacitor modeling using an Artificial Neural Network (ANN). The principle consists of a black-box nonlinear multiple-input single-output (MISO) model. The system inputs are temperature and current; the output is the super-capacitor voltage. The learning and validation of the ANN model from experimental charge and discharge of super-capacitors establish the relationship between inputs and output, using experimental results from 2700 F and 3700 F super-capacitors and a super-capacitor pack. Once the network is trained, the ANN model can predict the super-capacitor behaviour under temperature variations. The parameters of the ANN model are updated using the Levenberg-Marquardt method in order to minimize the error between the output of the system and the predicted output. The results obtained with the ANN model of the super-capacitor and the experimental ones are in good agreement. (authors)
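
    A minimal sketch of the black-box MISO idea on synthetic data: a one-hidden-layer network mapping (temperature, current) to a stand-in voltage surface. Plain full-batch gradient descent is used here for brevity where the paper uses Levenberg-Marquardt, and all data, sizes, and rates are invented:

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.uniform(-1, 1, size=(200, 2))        # scaled [temperature, current]
y = 0.5 * X[:, 0] - 0.8 * X[:, 1] + 0.1      # synthetic "voltage" target

# One hidden layer of 8 tanh units, linear output
W1 = rng.normal(0, 0.5, (2, 8)); b1 = np.zeros(8)
W2 = rng.normal(0, 0.5, (8, 1)); b2 = np.zeros(1)
lr = 0.1
for _ in range(3000):
    h = np.tanh(X @ W1 + b1)
    pred = (h @ W2 + b2).ravel()
    err = pred - y
    gh = (err[:, None] * W2.T) * (1 - h**2)  # backprop through tanh
    W2 -= lr * h.T @ err[:, None] / len(y)
    b2 -= lr * err.mean()
    W1 -= lr * X.T @ gh / len(y)
    b1 -= lr * gh.mean(axis=0)

mse = float(np.mean((pred - y) ** 2))
```

The paper's Levenberg-Marquardt update replaces the plain gradient step with a damped Gauss-Newton step on the same residuals, which converges much faster for small networks.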

  3. Bayesian network models for error detection in radiotherapy plans

    International Nuclear Information System (INIS)

    Kalet, Alan M; Ford, Eric C; Phillips, Mark H; Gennari, John H

    2015-01-01

    The purpose of this study is to design and develop a probabilistic network for detecting errors in radiotherapy plans for use at the time of initial plan verification. Our group has initiated a multi-pronged approach to reduce these errors. We report on our development of Bayesian models of radiotherapy plans. Bayesian networks consist of joint probability distributions that define the probability of one event, given some set of other known information. Using the networks, we find the probability of obtaining certain radiotherapy parameters, given a set of initial clinical information. A low probability in a propagated network then corresponds to potential errors to be flagged for investigation. To build our networks we first interviewed medical physicists and other domain experts to identify the relevant radiotherapy concepts and their associated interdependencies and to construct a network topology. Next, to populate the network’s conditional probability tables, we used the Hugin Expert software to learn parameter distributions from a subset of de-identified data derived from a radiation oncology based clinical information database system. These data represent 4990 unique prescription cases over a 5 year period. Under test case scenarios with approximately 1.5% introduced error rates, network performance produced areas under the ROC curve of 0.88, 0.98, and 0.89 for the lung, brain and female breast cancer error detection networks, respectively. Comparison of the brain network to human experts performance (AUC of 0.90 ± 0.01) shows the Bayes network model performs better than domain experts under the same test conditions. Our results demonstrate the feasibility and effectiveness of comprehensive probabilistic models as part of decision support systems for improved detection of errors in initial radiotherapy plan verification procedures. (paper)
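
    The flagging logic can be illustrated with a two-node toy network: compute the joint probability of a plan's parameters under the network and flag the plan when that probability is low. All probabilities and the threshold below are invented for illustration, not learned from the clinical database described in the paper:

```python
# Hypothetical two-node fragment: a tumour-site node and a
# prescription-dose node conditioned on it.
p_site = {"lung": 0.6, "brain": 0.4}
p_dose_given_site = {
    ("lung", "60Gy"): 0.9, ("lung", "20Gy"): 0.1,
    ("brain", "60Gy"): 0.2, ("brain", "20Gy"): 0.8,
}

def plan_probability(site, dose):
    """Joint probability of the plan parameters under the toy network."""
    return p_site[site] * p_dose_given_site[(site, dose)]

def flag_for_review(site, dose, threshold=0.1):
    """A low joint probability marks the plan for investigation."""
    return plan_probability(site, dose) < threshold

common = flag_for_review("lung", "60Gy")    # 0.54 -> not flagged
unusual = flag_for_review("brain", "60Gy")  # 0.08 -> flagged
```

A full plan-verification network would chain many such conditional tables and propagate evidence through them, but the flagging criterion is the same.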

  4. Consistent post-reaction vibrational energy redistribution in DSMC simulations using TCE model

    Science.gov (United States)

    Borges Sebastião, Israel; Alexeenko, Alina

    2016-10-01

    The direct simulation Monte Carlo (DSMC) method has been widely applied to study shockwaves, hypersonic reentry flows, and other nonequilibrium flow phenomena. Although there is currently active research on high-fidelity models based on ab initio data, the total collision energy (TCE) and Larsen-Borgnakke (LB) models remain the most often used chemistry and relaxation models in DSMC simulations, respectively. The conventional implementation of the discrete LB model, however, may not satisfy detailed balance when recombination and exchange reactions play an important role in the flow energy balance. This issue can become even more critical in reacting mixtures involving polyatomic molecules, such as in combustion. In this work, this important shortcoming is addressed and an empirical approach to consistently specify the post-reaction vibrational states close to thermochemical equilibrium conditions is proposed within the TCE framework. Following Bird's quantum-kinetic (QK) methodology for populating post-reaction states, the new TCE-based approach involves two main steps. The state-specific TCE reaction probabilities for a forward reaction are first pre-computed from equilibrium 0-D simulations. These probabilities are then employed to populate the post-reaction vibrational states of the corresponding reverse reaction. The new approach is illustrated by application to exchange and recombination reactions relevant to H2-O2 combustion processes.

  5. Consistency of climate change projections from multiple global and regional model intercomparison projects

    Science.gov (United States)

    Fernández, J.; Frías, M. D.; Cabos, W. D.; Cofiño, A. S.; Domínguez, M.; Fita, L.; Gaertner, M. A.; García-Díez, M.; Gutiérrez, J. M.; Jiménez-Guerrero, P.; Liguori, G.; Montávez, J. P.; Romera, R.; Sánchez, E.

    2018-03-01

    We present an unprecedented ensemble of 196 future climate projections arising from different global and regional model intercomparison projects (MIPs): CMIP3, CMIP5, ENSEMBLES, ESCENA, EURO- and Med-CORDEX. This multi-MIP ensemble includes all regional climate model (RCM) projections publicly available to date, along with their driving global climate models (GCMs). We illustrate consistent and conflicting messages using continental Spain and the Balearic Islands as the target region. The study considers near-future (2021-2050) changes and their dependence on several uncertainty sources sampled in the multi-MIP ensemble: GCM, future scenario, internal variability, RCM, and spatial resolution. This initial work focuses on mean seasonal precipitation and temperature changes. The results show that the potential GCM-RCM combinations have been explored very unevenly, with favoured GCMs and large ensembles of a few RCMs that do not respond to any ensemble design. Therefore, the grand ensemble is weighted towards a few models. The selection of a balanced, credible sub-ensemble is challenged in this study by illustrating several conflicting responses between the RCM and its driving GCM and among different RCMs. Sub-ensembles from different initiatives are dominated by different uncertainty sources, with the driving GCM being the main contributor to uncertainty in the grand ensemble. For this analysis of near-future changes, the emission scenario does not contribute strong uncertainty. Despite the extra computational effort, for mean seasonal changes, the increase in resolution does not lead to important changes.

  6. Physically-consistent wall boundary conditions for the k-ω turbulence model

    DEFF Research Database (Denmark)

    Fuhrman, David R.; Dixen, Martin; Jacobsen, Niels Gjøl

    2010-01-01

    A model solving the Reynolds-averaged Navier–Stokes equations, coupled with k-ω turbulence closure, is used to simulate steady channel flow on both hydraulically smooth and rough beds. Novel experimental data are used as model validation, with k measured directly from all three components of the fluctuating velocity signal. Both conventional k = 0 and dk/dy = 0 wall boundary conditions are considered. Results indicate that either condition can provide accurate solutions, for the bulk of the flow, over both smooth and rough beds. It is argued that the zero-gradient condition is more consistent with the near-wall physics, however, as it allows direct integration through a viscous sublayer near smooth walls, while avoiding a viscous sublayer near rough walls. This is in contrast to the conventional k = 0 wall boundary condition, which forces resolution of a viscous sublayer in all circumstances.

  7. Consistency and discrepancy in the atmospheric response to Arctic sea-ice loss across climate models

    Science.gov (United States)

    Screen, James A.; Deser, Clara; Smith, Doug M.; Zhang, Xiangdong; Blackport, Russell; Kushner, Paul J.; Oudar, Thomas; McCusker, Kelly E.; Sun, Lantao

    2018-03-01

    The decline of Arctic sea ice is an integral part of anthropogenic climate change. Sea-ice loss is already having a significant impact on Arctic communities and ecosystems. Its role as a cause of climate changes outside of the Arctic has also attracted much scientific interest. Evidence is mounting that Arctic sea-ice loss can affect weather and climate throughout the Northern Hemisphere. The remote impacts of Arctic sea-ice loss can only be properly represented using models that simulate interactions among the ocean, sea ice, land and atmosphere. A synthesis of six such experiments with different models shows consistent hemispheric-wide atmospheric warming, strongest in the mid-to-high-latitude lower troposphere; an intensification of the wintertime Aleutian Low and, in most cases, the Siberian High; a weakening of the Icelandic Low; and a reduction in strength and southward shift of the mid-latitude westerly winds in winter. The atmospheric circulation response seems to be sensitive to the magnitude and geographic pattern of sea-ice loss and, in some cases, to the background climate state. However, it is unclear whether current-generation climate models respond too weakly to sea-ice change. We advocate for coordinated experiments that use different models and observational constraints to quantify the climate response to Arctic sea-ice loss.

  8. Consistent modelling of wind turbine noise propagation from source to receiver.

    Science.gov (United States)

    Barlas, Emre; Zhu, Wei Jun; Shen, Wen Zhong; Dag, Kaya O; Moriarty, Patrick

    2017-11-01

    The unsteady nature of wind turbine noise is a major reason for annoyance. The variation of far-field sound pressure levels is not only caused by the continuous change in wind turbine noise source levels but also by the unsteady flow field and the ground characteristics between the turbine and receiver. To take these phenomena into account, a consistent numerical technique that models the sound propagation from the source to receiver is developed. Large eddy simulation with an actuator line technique is employed for the flow modelling and the corresponding flow fields are used to simulate sound generation and propagation. The local blade relative velocity, angle of attack, and turbulence characteristics are input to the sound generation model. Time-dependent blade locations and the velocity between the noise source and receiver are considered within a quasi-3D propagation model. Long-range noise propagation of a 5 MW wind turbine is investigated. Sound pressure level time series evaluated at the source time are studied for varying wind speeds, surface roughness, and ground impedances within a 2000 m radius from the turbine.

  9. Model for ICRF fast wave current drive in self-consistent MHD equilibria

    International Nuclear Information System (INIS)

    Bonoli, P.T.; Englade, R.C.; Porkolab, M.; Fenstermacher, M.E.

    1993-01-01

    Recently, a model for fast wave current drive in the ion cyclotron radio frequency (ICRF) range was incorporated into the current drive and MHD equilibrium code ACCOME. The ACCOME model combines a free-boundary solution of the Grad-Shafranov equation with the calculation of driven currents due to neutral beam injection, lower hybrid (LH) waves, bootstrap effects, and ICRF fast waves. The equilibrium and current drive packages iterate between each other to obtain an MHD equilibrium which is consistent with the profiles of driven current density. The ICRF current drive package combines a toroidal full-wave code (FISIC) with a parameterization of the current drive efficiency obtained from an adjoint solution of the Fokker-Planck equation. The electron absorption calculation in the full-wave code properly accounts for the combined effects of electron Landau damping (ELD) and transit-time magnetic pumping (TTMP), assuming a Maxwellian (or bi-Maxwellian) electron distribution function. Furthermore, the current drive efficiency includes the effects of particle trapping, momentum-conserving corrections to the background Fokker-Planck collision operator, and toroidally induced variations in the parallel wavenumbers of the injected ICRF waves. This model has been used to carry out detailed studies of advanced physics scenarios in the proposed Tokamak Physics Experiment (TPX). Results are shown, for example, which demonstrate the possibility of achieving stable equilibria at high beta and high bootstrap current fraction in TPX. Model results are also shown for the proposed ITER device.

  10. Development of a 3D consistent 1D neutronics model for reactor core simulation

    International Nuclear Information System (INIS)

    Lee, Ki Bog; Joo, Han Gyu; Cho, Byung Oh; Zee, Sung Quun

    2001-02-01

    In this report a 3D consistent 1D model based on the nonlinear analytic nodal method is developed to reproduce 3D results. During the derivation, a current conservation factor (CCF) is introduced which guarantees that the axial neutron currents obtained from the 1D equation match the 3D reference values. Furthermore, in order to use 1D group constants properly, a new 1D group constant representation scheme employing tables for fuel temperature, moderator density, and boron concentration is developed and functionalized for the control rod tip position. To test the 1D kinetics model with CCF, several steady-state and transient calculations were performed and compared with 3D reference values. The errors in K-eff were reduced to about one tenth when using the CCF, without significant computational overhead, and the errors in the power distribution decreased to between one fifth and one tenth in the steady-state calculations. The 1D kinetics model with CCF and the 1D group constant functionalization employing tables as a function of control rod tip position provide more accurate results in both steady-state and transient calculations. It is therefore expected that the 1D kinetics model derived in this report can be used in safety analysis, real-time reactor simulation coupled with a system analysis code, operator support systems, etc.
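
    The current-conservation idea can be illustrated with a small hypothetical sketch: per-interface factors are taken as the ratio of the 3D reference axial currents to the 1D-computed ones, so that applying them reproduces the reference currents exactly. The per-interface ratio form and all names below are assumptions for illustration, not the report's actual derivation.

    ```python
    import numpy as np

    def current_conservation_factors(j_3d_ref, j_1d):
        """Per-interface multiplicative factors (a hypothetical form of the
        CCF idea) forcing the 1D axial currents to match the 3D reference."""
        j_3d_ref = np.asarray(j_3d_ref, dtype=float)
        j_1d = np.asarray(j_1d, dtype=float)
        # Where the 1D current vanishes there is nothing to rescale: use 1.
        return np.divide(j_3d_ref, j_1d, out=np.ones_like(j_1d), where=j_1d != 0.0)

    # Axial net currents at the plane interfaces (arbitrary units).
    j_3d = [0.0, 1.20, 2.10, 1.15, 0.0]
    j_1d = [0.0, 1.00, 2.00, 1.00, 0.0]
    ccf = current_conservation_factors(j_3d, j_1d)
    corrected = np.asarray(j_1d) * ccf  # reproduces the 3D reference exactly
    ```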

  11. A Time-Dependent Λ and G Cosmological Model Consistent with Cosmological Constraints

    Directory of Open Access Journals (Sweden)

    L. Kantha

    2016-01-01

    Full Text Available The prevailing constant Λ-G cosmological model agrees with observational evidence, including the observed redshift, Big Bang Nucleosynthesis (BBN), and the current rate of acceleration. It assumes that matter contributes 27% to the current density of the universe, with the rest (73%) coming from dark energy, represented by the Einstein cosmological parameter Λ in the governing Friedmann-Robertson-Walker equations derived from Einstein's equations of general relativity. However, the principal problem is the extremely small value of the cosmological parameter (~10^−52 m^−2). Moreover, the dark energy density represented by Λ is presumed to have remained unchanged as the universe expanded by 26 orders of magnitude. Attempts to overcome this deficiency often invoke a variable Λ-G model. Cosmic constraints from action principles require that either both G and Λ remain time-invariant or both vary in time. Here, we propose a variable Λ-G cosmological model consistent with the latest redshift data, the current acceleration rate, and BBN, provided the split between matter and dark energy is 18% and 82%. Λ decreases (Λ ~ τ^−2, where τ is the normalized cosmic time) and G increases (G ~ τ^n) with cosmic time. The model results depend only on the chosen value of Λ at present and in the far future, and not directly on G.
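
    A toy integration gives a feel for such a model. The sketch below assumes a normalized Friedmann form H² = Ωm/a³ + ΩΛ/τ² with the abstract's 18%/82% split and Λ ∝ τ⁻²; the normalization, initial condition, and Euler scheme are illustrative assumptions, not the paper's equations.

    ```python
    import numpy as np

    # Illustrative normalized model (an assumption, not the paper's equations):
    # H(a, tau)^2 = Om/a^3 + OL/tau^2, with tau in units of the current cosmic
    # age (tau = 1, a = 1 today) and an 18%/82% matter/dark-energy split.
    Om, OL = 0.18, 0.82

    def integrate_scale_factor(tau_end=3.0, n=20000):
        """Euler-integrate da/dtau = a * H(a, tau) forward from the present."""
        tau = np.linspace(1.0, tau_end, n)
        dt = tau[1] - tau[0]
        a = np.empty(n)
        a[0] = 1.0
        for i in range(1, n):
            H = np.sqrt(Om * a[i - 1] ** -3 + OL / tau[i - 1] ** 2)
            a[i] = a[i - 1] + dt * a[i - 1] * H
        return tau, a

    tau, a = integrate_scale_factor()  # a grows monotonically into the future
    ```

    Because the dark-energy term decays as τ⁻², the expansion in this toy model keeps decelerating relative to a constant-Λ universe.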

  12. Model and simulation of Krause model in dynamic open network

    Science.gov (United States)

    Zhu, Meixia; Xie, Guangqiang

    2017-08-01

    Modeling the evolution of opinions is an effective way to reveal how group consensus forms. This study builds on the modeling paradigm of the HK (Hegselmann-Krause) model and analyzes the evolution of multi-agent opinions in dynamic open networks with member mobility. The simulation results show that, for a fixed number of agents, the interval over which initial opinions are distributed affects the number of final opinions: the wider the initial spread of opinions, the more opinion clusters eventually form. The trust threshold has a decisive effect on the number of opinion clusters, with a negative correlation between the two. The higher the connectivity of the initial group, the more easily subjective opinions converge rapidly during the evolution process. A more open network is more conducive to unity of opinion; adding or removing agents does not affect the consistency of the group opinion, but it does harm stability.
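
    The underlying Hegselmann-Krause bounded-confidence update, on which the paper builds, can be sketched as follows; the open-network and member-mobility extensions are omitted, so this is a minimal static-population version only.

    ```python
    import numpy as np

    def hk_step(x, eps):
        """One synchronous Hegselmann-Krause update: each agent adopts the
        mean opinion of all agents within its confidence (trust) bound eps."""
        new = np.empty_like(x)
        for i, xi in enumerate(x):
            new[i] = x[np.abs(x - xi) <= eps].mean()
        return new

    def simulate(x0, eps, max_iter=200, tol=1e-9):
        """Iterate until opinions stop moving (fixed clusters)."""
        x = np.asarray(x0, dtype=float)
        for _ in range(max_iter):
            nxt = hk_step(x, eps)
            if np.max(np.abs(nxt - x)) < tol:
                break
            x = nxt
        return x

    def n_clusters(x):
        return len(np.unique(np.round(x, 6)))

    rng = np.random.default_rng(0)
    x0 = rng.uniform(0.0, 1.0, 50)
    final_narrow = simulate(x0, eps=0.05)  # small trust threshold: many clusters
    final_wide = simulate(x0, eps=0.5)     # large trust threshold: consensus
    ```

    Running this shows the negative correlation reported in the abstract: a larger trust threshold yields fewer opinion clusters.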

  13. Consistency of different tropospheric models and mapping functions for precise GNSS processing

    Science.gov (United States)

    Graffigna, Victoria; Hernández-Pajares, Manuel; García-Rigo, Alberto; Gende, Mauricio

    2017-04-01

    The TOmographic Model of the IONospheric electron content (TOMION) software implements simultaneous precise geodetic and ionospheric modeling, which can be used to test new approaches for real-time precise GNSS modeling (positioning, ionospheric and tropospheric delays, clock errors, among others). In this work, the software is used to estimate the Zenith Tropospheric Delay (ZTD) emulating real time, and its performance is evaluated through a comparative analysis with a built-in GIPSY estimation and the IGS final troposphere product, exemplified in a two-day experiment performed in East Australia. Furthermore, the troposphere mapping function was upgraded from the Niell to the Vienna approach. In a first scenario, only forward processing was activated and the coordinates of the wide-area GNSS network were loosely constrained, without fixing the carrier phase ambiguities, for both reference and rover receivers. In a second one, precise point positioning (PPP) was implemented, iterating over a fixed coordinate set for the second day. Comparisons between TOMION, IGS and GIPSY estimates have been performed; for the first, IGS clocks and orbits were considered. The agreement with GIPSY results is roughly 10 times better than with the IGS final ZTD product, despite IGS products having been used in the computations. Hence, the subsequent analysis was carried out with respect to the GIPSY computations. The estimates show a typical bias of 2 cm for the first strategy and of 7 mm for PPP, in the worst cases. Moreover, the Vienna mapping function showed in general somewhat better agreement than the Niell one for both strategies. The RMS values were found to be around 1 cm for all studied situations, with slightly better performance for the Niell one. Further improvement could be achieved with Vienna mapping function coefficients calculated from raytracing, as well as by integrating comparative meteorological parameters.
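
    Both the Niell and Vienna mapping functions share the normalized Marini/Herring continued-fraction form and differ mainly in where the coefficients a, b, c come from (tabulated empirical fits for Niell, raytracing of numerical weather models for Vienna). The sketch below uses placeholder coefficients, NOT real NMF/VMF1 values.

    ```python
    import math

    def continued_fraction_mf(elev_rad, a, b, c):
        """Normalized Marini/Herring continued-fraction form shared (with
        different coefficient sources) by the Niell and Vienna mapping
        functions; equal to 1 at zenith by construction."""
        s = math.sin(elev_rad)
        top = 1 + a / (1 + b / (1 + c))
        bot = s + a / (s + b / (s + c))
        return top / bot

    # Placeholder hydrostatic-like coefficients -- illustrative only.
    a, b, c = 1.2e-3, 2.9e-3, 62.6e-3
    ztd = 2.4  # a typical zenith tropospheric delay, in metres
    slant_10deg = ztd * continued_fraction_mf(math.radians(10.0), a, b, c)
    ```

    The slant delay at 10° elevation is several times the zenith delay, which is why the choice of mapping function matters most for low-elevation observations.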

  14. A paradigm shift toward a consistent modeling framework to assess climate impacts

    Science.gov (United States)

    Monier, E.; Paltsev, S.; Sokolov, A. P.; Fant, C.; Chen, H.; Gao, X.; Schlosser, C. A.; Scott, J. R.; Dutkiewicz, S.; Ejaz, Q.; Couzo, E. A.; Prinn, R. G.; Haigh, M.

    2017-12-01

    Estimates of physical and economic impacts of future climate change are subject to substantial challenges. To enrich the currently popular approaches of assessing climate impacts by evaluating a damage function or by multi-model comparisons based on the Representative Concentration Pathways (RCPs), we focus here on integrating impacts into a self-consistent coupled human and Earth system modeling framework that includes modules that represent multiple physical impacts. In a sample application we show that this framework is capable of investigating the physical impacts of climate change and socio-economic stressors. The projected climate impacts vary dramatically across the globe in a set of scenarios with global mean warming ranging between 2.4°C and 3.6°C above pre-industrial by 2100. Unabated emissions lead to substantial sea level rise, acidification that impacts the base of the oceanic food chain, air pollution that exceeds health standards by tenfold, water stress that impacts an additional 1 to 2 billion people globally and agricultural productivity that decreases substantially in many parts of the world. We compare the outcomes from these forward-looking scenarios against the common goal described by the target-driven scenario of 2°C, which results in much smaller impacts. It is challenging for large internationally coordinated exercises to respond quickly to new policy targets. We propose that a paradigm shift toward a self-consistent modeling framework to assess climate impacts is needed to produce information relevant to evolving global climate policy and mitigation strategies in a timely way.

  15. A Model of Network Porosity

    Science.gov (United States)

    2016-11-09

    Figure 1. We generally express such networks in terms of the services running in each enclave as well as the routing and firewall rules between the...compromise a server, they can compromise other devices in the same subnet or protected enclave. They probe attached firewalls and routers for open ports and...spam and malware filter would prevent this content from reaching its destination. Content filtering provides another layer of defense to other controls

  16. Thermal conductivity model for nanofiber networks

    Science.gov (United States)

    Zhao, Xinpeng; Huang, Congliang; Liu, Qingkun; Smalyukh, Ivan I.; Yang, Ronggui

    2018-02-01

    Understanding thermal transport in nanofiber networks is essential for their applications in thermal management, where they are used extensively as mechanically sturdy thermal insulation or as high-thermal-conductivity materials. In this study, using statistical theory and Fourier's law of heat conduction while accounting for both the inter-fiber contact thermal resistance and the intrinsic thermal resistance of the nanofibers, an analytical model is developed to predict the thermal conductivity of nanofiber networks as a function of their geometric and thermal properties. A scaling relation between the thermal conductivity and the geometric properties of the network, including volume fraction and nanofiber length, is revealed. The model agrees well with both numerical simulations and experimental measurements found in the literature. It may prove useful in analyzing experimental results and in designing nanofiber networks for both high and low thermal conductivity applications.
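
    The competition between intrinsic fiber resistance and inter-fiber contact resistance can be illustrated with a hypothetical series-resistance sketch. The formula, orientation factor, and all numbers below are illustrative assumptions, not the paper's actual model.

    ```python
    import numpy as np

    def k_eff(k_f, d, L, R_c, phi, orient=1.0 / 3.0):
        """Hypothetical series-resistance sketch: heat flows along chains of
        fibers, each of intrinsic resistance R_f, joined to the next through
        a contact of resistance R_c; the network provides phi*orient such
        chains per unit cross-sectional area."""
        A = np.pi * d ** 2 / 4.0   # fiber cross-sectional area
        R_f = L / (k_f * A)        # intrinsic resistance of one fiber
        return phi * orient * L / (A * (R_f + R_c))

    # Illustrative numbers for silica-like nanofibers.
    k_short = k_eff(k_f=1.4, d=20e-9, L=2e-6, R_c=5e9, phi=0.05)
    k_long = k_eff(k_f=1.4, d=20e-9, L=10e-6, R_c=5e9, phi=0.05)
    # Longer fibers mean fewer contacts per unit length, so k_eff rises.
    ```

    Even this crude sketch reproduces the qualitative scaling in the abstract: the effective conductivity grows with volume fraction and with fiber length whenever contact resistance is non-negligible.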

  17. Thermal conductivity model for nanofiber networks

    Energy Technology Data Exchange (ETDEWEB)

    Zhao, Xinpeng [Department of Mechanical Engineering, University of Colorado, Boulder, Colorado 80309, USA; Huang, Congliang [Department of Mechanical Engineering, University of Colorado, Boulder, Colorado 80309, USA; School of Electrical and Power Engineering, China University of Mining and Technology, Xuzhou 221116, China; Liu, Qingkun [Department of Physics, University of Colorado, Boulder, Colorado 80309, USA; Smalyukh, Ivan I. [Department of Physics, University of Colorado, Boulder, Colorado 80309, USA; Materials Science and Engineering Program, University of Colorado, Boulder, Colorado 80309, USA; Yang, Ronggui [Department of Mechanical Engineering, University of Colorado, Boulder, Colorado 80309, USA; Materials Science and Engineering Program, University of Colorado, Boulder, Colorado 80309, USA; Buildings and Thermal Systems Center, National Renewable Energy Laboratory, Golden, Colorado 80401, USA

    2018-02-28

    Understanding thermal transport in nanofiber networks is essential for their applications in thermal management, where they are used extensively as mechanically sturdy thermal insulation or as high-thermal-conductivity materials. In this study, using statistical theory and Fourier's law of heat conduction while accounting for both the inter-fiber contact thermal resistance and the intrinsic thermal resistance of the nanofibers, an analytical model is developed to predict the thermal conductivity of nanofiber networks as a function of their geometric and thermal properties. A scaling relation between the thermal conductivity and the geometric properties of the network, including volume fraction and nanofiber length, is revealed. The model agrees well with both numerical simulations and experimental measurements found in the literature. It may prove useful in analyzing experimental results and in designing nanofiber networks for both high and low thermal conductivity applications.

  18. A quantum-implementable neural network model

    Science.gov (United States)

    Chen, Jialin; Wang, Lingli; Charbon, Edoardo

    2017-10-01

    A quantum-implementable neural network, namely the quantum probability neural network (QPNN) model, is proposed in this paper. QPNN can use quantum parallelism to trace all possible network states to improve the result. Due to its unique quantum nature, this model is robust to several quantum noises under certain conditions and can be efficiently implemented by the qubus quantum computer. Another advantage is that QPNN can be used as memory to retrieve the most relevant data and even to generate new data. MATLAB experimental results on Iris data classification and MNIST handwriting recognition show that QPNN requires far fewer neuron resources than a classical feedforward neural network to obtain a good result. The proposed QPNN model indicates that quantum effects are useful for real-life classification tasks.

  19. Combinatorial explosion in model gene networks

    Science.gov (United States)

    Edwards, R.; Glass, L.

    2000-09-01

    The explosive growth in knowledge of the genome of humans and other organisms leaves open the question of how the functioning of genes in interacting networks is coordinated for orderly activity. One approach to this problem is to study mathematical properties of abstract network models that capture the logical structures of gene networks. The principal issue is to understand how particular patterns of activity can result from particular network structures, and what types of behavior are possible. We study idealized models in which the logical structure of the network is explicitly represented by Boolean functions that can be depicted as directed graphs on n-cubes, but which are continuous in time and described by differential equations, rather than being updated synchronously via a discrete clock. The equations are piecewise linear, which allows significant analysis and facilitates rapid integration along trajectories. We first give a combinatorial solution to the question of how many distinct logical structures exist for n-dimensional networks, showing that the number increases very rapidly with n. We then outline analytic methods that can be used to establish the existence, stability and periods of periodic orbits corresponding to particular cycles on the n-cube. We use these methods to confirm the existence of limit cycles discovered in a sample of a million randomly generated structures of networks of 4 genes. Even with only 4 genes, at least several hundred different patterns of stable periodic behavior are possible, many of them surprisingly complex. We discuss ways of further classifying these periodic behaviors, showing that small mutations (reversal of one or a few edges on the n-cube) need not destroy the stability of a limit cycle. Although these networks are very simple as models of gene networks, their mathematical transparency reveals relationships between structure and behavior, and they suggest that the possibilities for orderly dynamics in such
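
    The paper's equations are continuous in time, but their logical skeleton is a Boolean map on the n-cube. As a discrete caricature of the kind of periodic behavior discussed above, the periodic orbits of a small synchronous Boolean network (a three-gene repression ring, chosen here purely for illustration) can be enumerated directly:

    ```python
    from itertools import product

    def periodic_states(update, n):
        """Enumerate all states of an n-gene synchronous Boolean network
        that lie on a periodic orbit (a fixed point or a limit cycle)."""
        periodic = set()
        for state in product((0, 1), repeat=n):
            seen = {}
            s = state
            while s not in seen:
                seen[s] = len(seen)
                s = update(s)
            first = seen[s]  # s is the first revisited state
            periodic.update(t for t, idx in seen.items() if idx >= first)
        return periodic

    # A three-gene repression ring: each gene is inhibited by its predecessor.
    update = lambda s: (1 - s[2], 1 - s[0], 1 - s[1])
    cycle_states = periodic_states(update, 3)
    # This map is invertible, so every one of the 2^3 states is periodic:
    # a 2-cycle {000, 111} plus a single 6-cycle through the other six states.
    ```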

  20. Steady state analysis of Boolean molecular network models via model reduction and computational algebra.

    Science.gov (United States)

    Veliz-Cuba, Alan; Aguilar, Boris; Hinkelmann, Franziska; Laubenbacher, Reinhard

    2014-06-26

    A key problem in the analysis of mathematical models of molecular networks is the determination of their steady states. The present paper addresses this problem for Boolean network models, an increasingly popular modeling paradigm for networks lacking detailed kinetic information. For small models, the problem can be solved by exhaustive enumeration of all state transitions. But for larger models this is not feasible, since the size of the phase space grows exponentially with the dimension of the network. The dimension of published models is growing to over 100, so that efficient methods for steady state determination are essential. Several methods have been proposed for large networks, some of them heuristic. While these methods represent a substantial improvement in scalability over exhaustive enumeration, the problem for large networks is still unsolved in general. This paper presents an algorithm that consists of two main parts. The first is a graph theoretic reduction of the wiring diagram of the network, while preserving all information about steady states. The second part formulates the determination of all steady states of a Boolean network as a problem of finding all solutions to a system of polynomial equations over the finite number system with two elements. This problem can be solved with existing computer algebra software. This algorithm compares favorably with several existing algorithms for steady state determination. One advantage is that it is not heuristic or reliant on sampling, but rather determines algorithmically and exactly all steady states of a Boolean network. The code for the algorithm, as well as the test suite of benchmark networks, is available upon request from the corresponding author. The algorithm presented in this paper reliably determines all steady states of sparse Boolean networks with up to 1000 nodes. The algorithm is effective at analyzing virtually all published models even those of moderate connectivity. The problem for
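
    The paper's two-step idea, encoding the Boolean rules as polynomials over GF(2) and finding all solutions of x_i + f_i(x) = 0, can be illustrated on a toy 3-node network; here brute-force enumeration stands in for the computer-algebra solve that makes the approach scale.

    ```python
    from itertools import product

    # Boolean update rules of a toy 3-node network.
    f = [
        lambda x: x[1] & x[2],  # x0' = x1 AND x2
        lambda x: x[0] | x[2],  # x1' = x0 OR  x2
        lambda x: x[0],         # x2' = x0
    ]

    # GF(2) polynomial encoding: AND -> x*y, NOT -> 1+x, OR -> x+y+x*y (mod 2).
    # A steady state is a solution of the polynomial system x_i + f_i(x) = 0.
    def is_steady(x):
        return all((x[i] + f[i](x)) % 2 == 0 for i in range(len(f)))

    steady_states = [x for x in product((0, 1), repeat=3) if is_steady(x)]
    # -> [(0, 0, 0), (1, 1, 1)]
    ```

    For 1000-node networks the product over all 2^n states is infeasible, which is exactly why the paper replaces it with wiring-diagram reduction plus a polynomial-system solve over the two-element field.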

  1. Complex networks under dynamic repair model

    Science.gov (United States)

    Chaoqi, Fu; Ying, Wang; Kun, Zhao; Yangjun, Gao

    2018-01-01

    Invulnerability is not the only important factor in complex network security; an effective and reasonable repair strategy is also critical. Existing research on network repair is confined to static models. A dynamic model makes better use of the redundant capacity of repaired nodes and repairs the damaged network more efficiently than a static model; however, the dynamic repair model is complex and changeable. In this paper, we construct a dynamic repair model and systematically describe the energy-transfer relationships between nodes during the repair of a failed network. Nodes are divided into three types, corresponding to three structures. We find that the strong coupling structure is responsible for secondary failure of repaired nodes, and propose an algorithm that selects the most suitable targets (nodes or links) to repair the failed network at minimal cost. Two types of repair strategies are identified, with different effects under the two energy-transfer rules. These results enable a more flexible approach to network repair.

  2. Net Rotation of the Lithosphere in Mantle Convection Models with Self-consistent Plate Generation

    Science.gov (United States)

    Gerault, M.; Coltice, N.

    2017-12-01

    Lateral variations in the viscosity structure of the lithosphere and the mantle give rise to a discordant motion between the two. In a deep mantle reference frame, this motion is called the net rotation of the lithosphere. Plate motion reconstructions, mantle flow computations, and inferences from seismic anisotropy all indicate some amount of net rotation using different mantle reference frames. While the direction of rotation is somewhat consistent across studies, the predicted amplitudes range from 0.1 deg/Myr to 0.3 deg/Myr at the present day. How net rotation rates could have differed in the past is also a subject of debate, and strong geodynamic arguments are missing from the discussion. This study provides the first net rotation calculations in 3-D spherical mantle convection models with self-consistent plate generation. We run the computations for billions of years of numerical integration. We look into how sensitive the net rotation is to major tectonic events, such as subduction initiation, continental breakup and plate reorganisations, and whether some governing principles from the models could guide plate motion reconstructions. The mantle convection problem is solved with the finite volume code StagYY using a visco-pseudo-plastic rheology. Mantle flow velocities are solely driven by buoyancy forces internal to the system, with free-slip upper and lower boundary conditions. We investigate how the yield stress, the mantle viscosity structure and the properties of continents affect the net rotation over time. Models with large lateral viscosity variations from continents predict net rotations that are at least threefold faster than those without continents. Models where continents cover a third of the surface produce net rotation rates that vary from nearly zero to over 0.3 deg/Myr, with rapid increases during continental breakup. The pole of rotation appears to migrate along no particular path. For all models, regardless of the yield stress and the

  3. Performance modeling, stochastic networks, and statistical multiplexing

    CERN Document Server

    Mazumdar, Ravi R

    2013-01-01

    This monograph presents a concise mathematical approach for modeling and analyzing the performance of communication networks with the aim of introducing an appropriate mathematical framework for modeling and analysis as well as understanding the phenomenon of statistical multiplexing. The models, techniques, and results presented form the core of traffic engineering methods used to design, control and allocate resources in communication networks.The novelty of the monograph is the fresh approach and insights provided by a sample-path methodology for queueing models that highlights the importan

  4. Network Modeling and Simulation A Practical Perspective

    CERN Document Server

    Guizani, Mohsen; Khan, Bilal

    2010-01-01

    Network Modeling and Simulation is a practical guide to using modeling and simulation to solve real-life problems. The authors give a comprehensive exposition of the core concepts in modeling and simulation, and then systematically address the many practical considerations faced by developers in modeling complex large-scale systems. The authors provide examples from computer and telecommunication networks and use these to illustrate the process of mapping generic simulation concepts to domain-specific problems in different industries and disciplines. Key features: Provides the tools and strate

  5. Self-consistent Random Phase Approximation applied to a schematic model of the field theory

    International Nuclear Information System (INIS)

    Bertrand, Thierry

    1998-01-01

    The self-consistent Random Phase Approximation (SCRPA) is a method that allows correlations to be included in the ground and excited states within mean-field theory. Unlike RPA, which is based on the quasi-bosonic approximation, it has the advantage of not violating the Pauli principle; in addition, numerous applications in different domains of physics suggest a possible variational character, although this remains to be demonstrated formally. The first model studied with SCRPA is the anharmonic oscillator in the region where one of its symmetries is spontaneously broken. SCRPA reproduces the ground-state energy more accurately than RPA and, unlike RPA, does not violate the Ritz variational principle. SCRPA is equally successful for the ground-state energy of a model mixing bosons and fermions. At the transition point SCRPA corrects RPA drastically, but far from this region the correction becomes negligible and both methods are of similar precision. In the deformed region, RPA exhibits a spurious mode due to the microscopic character of the model. SCRPA also reproduces this mode very accurately, and it in fact coincides with an excitation in the exact spectrum

  6. Self-Consistent Atmosphere Models of the Most Extreme Hot Jupiters

    Science.gov (United States)

    Lothringer, Joshua; Barman, Travis

    2018-01-01

    We present a detailed look at self-consistent PHOENIX atmosphere models of the most highly irradiated hot Jupiters known to exist. These hot Jupiters typically have equilibrium temperatures approaching and sometimes exceeding 3000 K, orbiting A, F, and early-G type stars on orbits less than 0.03 AU (10x closer than Mercury is to the Sun). The most extreme example, KELT-9b, is the hottest known hot Jupiter, with a measured dayside temperature of 4600 K. Many of the planets we model have recently attracted attention with high-profile discoveries, including temperature inversions in WASP-33b and WASP-121, changing phase curve offsets possibly caused by magnetohydrodynamic effects in HAT-P-7b, and TiO in WASP-19b. Our modeling provides a look at the a priori expectations for these planets and helps us understand these recent discoveries. We show that, in the hottest cases, all molecules are dissociated down to relatively high pressures. These planets may have detectable temperature inversions, more akin to thermospheres than stratospheres in that an optical absorber like TiO or VO is not needed. Instead, the inversions are created by a lack of cooling in the IR combined with heating from atoms and ions at UV and blue optical wavelengths. We also reevaluate some of the assumptions that have been made in retrieval analyses of these planets.

  7. Methodology and consistency of slant and vertical assessments for ionospheric electron content models

    Science.gov (United States)

    Hernández-Pajares, Manuel; Roma-Dollase, David; Krankowski, Andrzej; García-Rigo, Alberto; Orús-Pérez, Raül

    2017-12-01

    A summary of the main concepts on global ionospheric maps (hereinafter GIMs) of vertical total electron content (VTEC), with special emphasis on their assessment, is presented in this paper. It is based on the experience accumulated during almost two decades of collaborative work in the context of the international global navigation satellite systems (GNSS) service (IGS) ionosphere working group. A representative comparison of the two main assessments of ionospheric electron content models (VTEC-altimeter, and difference of slant TEC based on independent Global Positioning System data, dSTEC-GPS) is performed. It is based on 26 GPS receivers distributed worldwide and mostly placed on islands, from the last quarter of 2010 to the end of 2016. The consistency between the dSTEC-GPS and VTEC-altimeter assessments for one of the most accurate IGS GIMs (the tomographic-kriging GIM 'UQRG' computed by UPC) is shown. Typical RMS error values of 2 TECU for the VTEC-altimeter and 0.5 TECU for the dSTEC-GPS assessment are found. As expected from a simple random model, there is a significant correlation between both RMS and especially relative errors, most evident when a large enough number of observations per pass is considered. The authors expect that this manuscript will be useful for new analysis contributor centres and, in general, for the scientific and technical community interested in simple and truly external ways of validating electron content models of the ionosphere.

  8. Providing comprehensive and consistent access to astronomical observatory archive data: the NASA archive model

    Science.gov (United States)

    McGlynn, Thomas; Fabbiano, Giuseppina; Accomazzi, Alberto; Smale, Alan; White, Richard L.; Donaldson, Thomas; Aloisi, Alessandra; Dower, Theresa; Mazzerella, Joseph M.; Ebert, Rick; Pevunova, Olga; Imel, David; Berriman, Graham B.; Teplitz, Harry I.; Groom, Steve L.; Desai, Vandana R.; Landry, Walter

    2016-07-01

    Since the turn of the millennium, astronomical archives have begun providing data to the public through standardized protocols, unifying data from disparate physical sources and wavebands across the electromagnetic spectrum into an astronomical virtual observatory (VO). In October 2014, NASA began support for the NASA Astronomical Virtual Observatories (NAVO) program to coordinate the efforts of NASA astronomy archives in providing data to users through implementation of protocols agreed within the International Virtual Observatory Alliance (IVOA). A major goal of the NAVO collaboration has been to step back from a piecemeal implementation of IVOA standards and define what the appropriate presence for the US and NASA astronomy archives in the VO should be. This includes evaluating what optional capabilities in the standards need to be supported, the specific versions of standards that should be used, and returning feedback to the IVOA to support modifications as needed. We discuss a standard archive model developed by the NAVO for data archive presence in the virtual observatory, built upon a consistent framework of standards defined by the IVOA. Our standard model provides for discovery of resources through the VO registries, access to observation and object data, downloads of image and spectral data, and general access to archival datasets. It defines specific protocol versions, minimum capabilities, and all dependencies. The model will evolve as the capabilities of the virtual observatory and the needs of the community change.

  9. Modeling acquaintance networks based on balance theory

    Directory of Open Access Journals (Sweden)

    Vukašinović Vida

    2014-09-01

    Full Text Available An acquaintance network is a social structure made up of a set of actors and the ties between them. These ties change dynamically as a consequence of incessant interactions between the actors. In this paper we introduce a social network model called the Interaction-Based (IB) model that incorporates well-known sociological principles. The connections between the actors and the strengths of those connections are influenced by continuous positive and negative interactions between the actors and, vice versa, future interactions are more likely to happen between actors that are connected by stronger ties. The model is also inspired by the social behavior of animal species, particularly that of ants in their colony. A model evaluation showed that the IB model produces sparse networks. The model has a small diameter and an average path length that grows in proportion to the logarithm of the number of vertices. The clustering coefficient is relatively high, and its value stabilizes in larger networks. The degree distributions are slightly right-skewed. In the mature phase of the IB model, i.e., when the number of edges does not change significantly, most of the network properties do not change significantly either. The IB model was found to be the best of all the compared models at simulating the e-mail URV (University Rovira i Virgili of Tarragona) network, because its properties more closely matched those of the e-mail URV network than did those of the other models.
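
    A loose toy sketch in the spirit of interaction-driven tie dynamics: positive interactions strengthen ties, negative ones weaken them, and strong ties bias which pair interacts next. Every rule and parameter below is an illustrative assumption, not the IB model's actual specification.

    ```python
    import random

    def simulate_ib(n=30, steps=3000, p_positive=0.8, seed=1):
        """Toy interaction-driven tie dynamics (illustrative only): each
        step one pair of actors interacts; a positive interaction adds tie
        strength, a negative one removes it, and ties that lose all
        strength disappear from the network."""
        rng = random.Random(seed)
        w = {}  # tie strength keyed by (smaller id, larger id)
        for _ in range(steps):
            i, j = rng.sample(range(n), 2)
            if w and rng.random() < 0.5:  # half the time, revisit a strong tie
                i, j = rng.choices(list(w), weights=list(w.values()))[0]
            key = (min(i, j), max(i, j))
            w[key] = w.get(key, 0) + (1 if rng.random() < p_positive else -1)
            if w[key] <= 0:
                del w[key]
        return w

    ties = simulate_ib()
    density = 2 * len(ties) / (30 * 29)  # fraction of possible ties present
    ```

    Because interactions concentrate on existing strong ties, the resulting tie set stays sparse, echoing the sparsity the evaluation reports for the IB model.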

  10. First results of GERDA Phase II and consistency with background models

    Science.gov (United States)

    Agostini, M.; Allardt, M.; Bakalyarov, A. M.; Balata, M.; Barabanov, I.; Baudis, L.; Bauer, C.; Bellotti, E.; Belogurov, S.; Belyaev, S. T.; Benato, G.; Bettini, A.; Bezrukov, L.; Bode1, T.; Borowicz, D.; Brudanin, V.; Brugnera, R.; Caldwell, A.; Cattadori, C.; Chernogorov, A.; D'Andrea, V.; Demidova, E. V.; Di Marco, N.; Domula, A.; Doroshkevich, E.; Egorov, V.; Falkenstein, R.; Frodyma, N.; Gangapshev, A.; Garfagnini, A.; Gooch, C.; Grabmayr, P.; Gurentsov, V.; Gusev, K.; Hakenmüller, J.; Hegai, A.; Heisel, M.; Hemmer, S.; Hofmann, W.; Hult, M.; Inzhechik, L. V.; Janicskó Csáthy, J.; Jochum, J.; Junker, M.; Kazalov, V.; Kihm, T.; Kirpichnikov, I. V.; Kirsch, A.; Kish, A.; Klimenko, A.; Kneißl, R.; Knöpfle, K. T.; Kochetov, O.; Kornoukhov, V. N.; Kuzminov, V. V.; Laubenstein, M.; Lazzaro, A.; Lebedev, V. I.; Lehnert, B.; Liao, H. Y.; Lindner, M.; Lippi, I.; Lubashevskiy, A.; Lubsandorzhiev, B.; Lutter, G.; Macolino, C.; Majorovits, B.; Maneschg, W.; Medinaceli, E.; Miloradovic, M.; Mingazheva, R.; Misiaszek, M.; Moseev, P.; Nemchenok, I.; Palioselitis, D.; Panas, K.; Pandola, L.; Pelczar, K.; Pullia, A.; Riboldi, S.; Rumyantseva, N.; Sada, C.; Salamida, F.; Salathe, M.; Schmitt, C.; Schneider, B.; Schönert, S.; Schreiner, J.; Schulz, O.; Schütz, A.-K.; Schwingenheuer, B.; Selivanenko, O.; Shevzik, E.; Shirchenko, M.; Simgen, H.; Smolnikov, A.; Stanco, L.; Vanhoefer, L.; Vasenko, A. A.; Veresnikova, A.; von Sturm, K.; Wagner, V.; Wegmann, A.; Wester, T.; Wiesinger, C.; Wojcik, M.; Yanovich, E.; Zhitnikov, I.; Zhukov, S. V.; Zinatulina, D.; Zuber, K.; Zuzel, G.

    2017-01-01

    The GERDA (GERmanium Detector Array) experiment searches for neutrinoless double beta decay (0νββ) of 76Ge and is located at the Laboratori Nazionali del Gran Sasso of INFN (Italy). GERDA operates bare high-purity germanium detectors submersed in liquid argon (LAr). Phase II of data taking started in December 2015 and is currently ongoing. In Phase II, 35 kg of germanium detectors enriched in 76Ge, including thirty newly produced Broad Energy Germanium (BEGe) detectors, are operating to reach an exposure of 100 kg·yr within about 3 years of data taking. The design goal of Phase II is to reduce the background by one order of magnitude and reach a sensitivity of T1/2^0ν = O(10^26) yr. To achieve the necessary background reduction, the setup was complemented with a LAr veto. Analysis of the Phase II background spectrum demonstrates consistency with the background models, and the 226Ra and 232Th contamination levels are consistent with screening results. In the first Phase II data release we found no hint of a 0νββ decay signal and place a limit on the half-life of this process of T1/2^0ν > 5.3·10^25 yr (90% C.L., sensitivity 4.0·10^25 yr). First results of GERDA Phase II are presented.

  11. The self-consistent field model for Fermi systems with account of three-body interactions

    Directory of Open Access Journals (Sweden)

    Yu.M. Poluektov

    2015-12-01

    Full Text Available On the basis of a microscopic model of self-consistent field, the thermodynamics of the many-particle Fermi system at finite temperatures with account of three-body interactions is built and the quasiparticle equations of motion are obtained. It is shown that the delta-like three-body interaction gives no contribution into the self-consistent field, and the description of three-body forces requires their nonlocality to be taken into account. The spatially uniform system is considered in detail, and on the basis of the developed microscopic approach general formulas are derived for the fermion's effective mass and the system's equation of state with account of contribution from three-body forces. The effective mass and pressure are numerically calculated for the potential of "semi-transparent sphere" type at zero temperature. Expansions of the effective mass and pressure in powers of density are obtained. It is shown that, with account of only pair forces, the interaction of repulsive character reduces the quasiparticle effective mass relative to the mass of a free particle, and the attractive interaction raises the effective mass. The question of thermodynamic stability of the Fermi system is considered and the three-body repulsive interaction is shown to extend the region of stability of the system with the interparticle pair attraction. The quasiparticle energy spectrum is calculated with account of three-body forces.

  12. Height-Diameter Models for Mixed-Species Forests Consisting of Spruce, Fir, and Beech

    Directory of Open Access Journals (Sweden)

    Petráš Rudolf

    2014-06-01

    Full Text Available Height-diameter models define the general relationship between tree height and diameter at each growth stage of the forest stand. This paper presents generalized height-diameter models for mixed-species forest stands consisting of Norway spruce (Picea abies Karst.), Silver fir (Abies alba L.), and European beech (Fagus sylvatica L.) from Slovakia. The models were derived using two growth functions from the exponential family: the two-parameter Michailoff and the three-parameter Korf function. Generalized height-diameter functions must normally be constrained to pass through the mean stand diameter and height, so the final growth model has only one or two parameters to be estimated. These “free” parameters are then expressed through the quadratic mean diameter, height and stand age, and the final mathematical form of the model is obtained. The study material included 50 long-term experimental plots located in the Western Carpathians. The plots were established 40-50 years ago and have been repeatedly measured at 5 to 10-year intervals. The dataset includes 7,950 height measurements of spruce, 21,661 of fir and 5,794 of beech. Nine regression models were derived for each species. Although the goodness of fit of all models showed that they were generally well suited to the data, the best results were obtained for silver fir: the coefficient of determination ranged from 0.946 to 0.948, RMSE (m) was in the interval 1.94-1.97, and the bias (m) was -0.031 to 0.063. Parameter estimation was slightly less precise for spruce, and the regression parameter estimates obtained for beech were the least precise: the coefficient of determination for beech was 0.854-0.860, RMSE (m) 2.67-2.72, and the bias (m) ranged from -0.144 to -0.056. The majority of models using Korf’s formula produced slightly better estimations than Michailoff’s, and it proved immaterial which estimated parameter was fixed and which parameters
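The two-parameter Michailoff function named above, h = 1.3 + a·exp(−b/d), can be fitted by ordinary least squares after log-linearisation. A minimal sketch in Python; the log-linearisation approach and all numbers are illustrative assumptions, not the paper's actual estimation procedure:

```python
import math

def fit_michailoff(diameters, heights):
    """Fit h = 1.3 + a * exp(-b / d) by log-linearising:
    ln(h - 1.3) = ln(a) - b * (1/d), then ordinary least squares."""
    xs = [1.0 / d for d in diameters]
    ys = [math.log(h - 1.3) for h in heights]
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sxx = sum((x - mx) ** 2 for x in xs)
    slope = sxy / sxx              # equals -b
    intercept = my - slope * mx    # equals ln(a)
    return math.exp(intercept), -slope  # (a, b)

def michailoff(d, a, b):
    """Predict tree height (m) from diameter d (cm)."""
    return 1.3 + a * math.exp(-b / d)

# Synthetic check: generate heights from known parameters and recover them.
a_true, b_true = 35.0, 12.0          # made-up parameter values
ds = [10, 15, 20, 25, 30, 40, 50]    # made-up diameters (cm)
hs = [michailoff(d, a_true, b_true) for d in ds]
a_fit, b_fit = fit_michailoff(ds, hs)
```

On noise-free synthetic data the log-linearised fit recovers the generating parameters exactly; with real measurements a nonlinear least-squares refinement would normally follow.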

  13. Geometry and time scales of self-consistent orbits in a modified SU(2) model

    International Nuclear Information System (INIS)

    Jezek, D.M.; Hernandez, E.S.; Solari, H.G.

    1986-01-01

    We investigate the time-dependent Hartree-Fock flow pattern of a two-level many-fermion system interacting via a two-body interaction which does not preserve the parity symmetry of standard SU(2) models. The geometrical features of the time-dependent Hartree-Fock energy surface are analyzed and a phase instability is clearly recognized. The time evolution of one-body observables along self-consistent and exact trajectories is examined, together with the overlaps between both orbits. Typical time scales for the determinantal motion can be set, and the validity of the time-dependent Hartree-Fock approach in the various regions of quasispin phase space is discussed

  14. Self-consistent model of the Rayleigh--Taylor instability in ablatively accelerated laser plasma

    International Nuclear Information System (INIS)

    Bychkov, V.V.; Golberg, S.M.; Liberman, M.A.

    1994-01-01

    A self-consistent approach to the problem of the growth rate of the Rayleigh--Taylor instability in laser accelerated targets is developed. The analytical solution of the problem is obtained by solving the complete system of hydrodynamical equations, which include both thermal conductivity and energy release due to absorption of the laser light. The developed theory provides a rigorous justification for the supplementary boundary condition in the limiting case of the discontinuity model. An analysis of the suppression of the Rayleigh--Taylor instability by the ablation flow is carried out, and a good agreement is found between the obtained solution and the approximate formula σ = 0.9√(gk) − 3u_1k, where g is the acceleration and u_1 is the ablation velocity. This paper discusses different regimes of the ablative stabilization and compares them with previous analytical and numerical works
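The approximate growth-rate formula quoted in the abstract can be evaluated directly; setting σ = 0 gives the ablative cutoff wavenumber. A small sketch with purely illustrative values for the acceleration and ablation velocity:

```python
import math

def rt_growth_rate(k, g, u1):
    """Approximate ablative Rayleigh-Taylor growth rate:
    sigma = 0.9*sqrt(g*k) - 3*u1*k  (from the abstract)."""
    return 0.9 * math.sqrt(g * k) - 3.0 * u1 * k

g = 1.0e14    # acceleration, cm/s^2 (illustrative)
u1 = 1.0e5    # ablation velocity, cm/s (illustrative)

# Cutoff: 0.9*sqrt(g*k) = 3*u1*k  =>  k_cut = 0.09 * g / u1^2.
k_cut = 0.09 * g / u1**2
# Fastest-growing mode: d(sigma)/dk = 0  =>  k_max = k_cut / 4.
k_max = k_cut / 4.0
```

Modes with k below the cutoff still grow (σ > 0), while shorter wavelengths are stabilized by the ablation flow.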

  15. Self-consistent finite-temperature model of atom-laser coherence properties

    International Nuclear Information System (INIS)

    Fergusson, J.R.; Geddes, A.J.; Hutchinson, D.A.W.

    2005-01-01

    We present a mean-field model of a continuous-wave atom laser with Raman output coupling. The noncondensate is pumped at a fixed input rate which, in turn, pumps the condensate through a two-body scattering process obeying the Fermi golden rule. The gas is then coupled out by a Gaussian beam from the system, and the temperature and particle number are self-consistently evaluated against equilibrium constraints. We observe the dependence of the second-order coherence of the output upon the width of the output-coupling beam, and note that even in the presence of a highly coherent trapped gas, perfect coherence of the output matter wave is not guaranteed

  16. Homogenization of linearly anisotropic scattering cross sections in a consistent B1 heterogeneous leakage model

    International Nuclear Information System (INIS)

    Marleau, G.; Debos, E.

    1998-01-01

    One of the main problems encountered in cell calculations is that of spatial homogenization, where one associates with a heterogeneous cell a homogeneous set of cross sections. The homogenization process is in fact trivial when a totally reflected cell without leakage is fully homogenized, since it involves only a flux-volume weighting of the isotropic cross sections. When anisotropic leakage models are considered, in addition to homogenizing the isotropic cross sections, the anisotropic scattering cross section must also be considered. The simple option, which consists of using the same homogenization procedure for both the isotropic and anisotropic components of the scattering cross section, leads to inconsistencies between the homogeneous and homogenized transport equations. Here we present a method for homogenizing the anisotropic scattering cross sections that resolves these inconsistencies. (author)
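The flux-volume weighting mentioned for the leakage-free case is a one-line computation: σ_hom = Σᵢ σᵢφᵢVᵢ / Σᵢ φᵢVᵢ. A sketch with hypothetical two-region numbers (the fuel and moderator cross sections, fluxes, and volumes are made up for illustration):

```python
def flux_volume_homogenize(sigmas, fluxes, volumes):
    """Flux-volume weighting of a cross section over cell regions:
    sigma_hom = sum(sigma_i * phi_i * V_i) / sum(phi_i * V_i)."""
    num = sum(s * p * v for s, p, v in zip(sigmas, fluxes, volumes))
    den = sum(p * v for p, v in zip(fluxes, volumes))
    return num / den

# Two-region cell, illustrative numbers: [fuel, moderator]
sigma_hom = flux_volume_homogenize(
    sigmas=[0.40, 0.02],   # region cross sections (1/cm)
    fluxes=[1.0, 1.4],     # region-averaged fluxes (arbitrary units)
    volumes=[0.3, 0.7],    # region volume fractions
)
```

The paper's point is precisely that this simple recipe is *not* adequate for the anisotropic scattering component once a leakage model is introduced.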

  17. Optimal transportation networks models and theory

    CERN Document Server

    Bernot, Marc; Morel, Jean-Michel

    2009-01-01

    The transportation problem can be formalized as the problem of finding the optimal way to transport a given measure into another with the same mass. In contrast to the Monge-Kantorovich problem, recent approaches model the branched structure of such supply networks as minima of an energy functional whose essential feature is to favour wide roads. Such a branched structure is observable in ground transportation networks, in draining and irrigation systems, in electrical power supply systems, and in natural counterparts such as blood vessels or the branches of trees. These lectures provide mathematical proof of several existence, structure and regularity properties empirically observed in transportation networks. The link with previous discrete physical models of irrigation, with erosion models in geomorphology, and with discrete telecommunication and transportation models is discussed. It is mathematically proven that the majority of these models fit into the simple model sketched in this volume.

  18. Flood routing modelling with Artificial Neural Networks

    Directory of Open Access Journals (Sweden)

    R. Peters

    2006-01-01

    Full Text Available For the modelling of the flood routing in the lower reaches of the Freiberger Mulde river and its tributaries, the one-dimensional hydrodynamic modelling system HEC-RAS has been applied. Furthermore, this model was used to generate a database to train multilayer feedforward networks. To guarantee numerical stability for the hydrodynamic modelling of some 60 km of streamcourse, an adequate spatial resolution requires very small calculation time steps, some two orders of magnitude smaller than the input data resolution. This leads to quite high computation requirements, seriously restricting the application – especially when dealing with real-time operations such as online flood forecasting. In order to solve this problem we tested the application of Artificial Neural Networks (ANNs). First studies show the ability of adequately trained multilayer feedforward networks (MLFN) to reproduce the model performance.
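A multilayer feedforward network of the kind trained here can be sketched in a few lines of plain Python. This toy version (one hidden tanh layer, scalar input and output, a made-up square-root-shaped target standing in for a rating-curve-like relation) only illustrates the MLFN idea, not the authors' actual network or data:

```python
import math, random

random.seed(0)

def train_mlfn(samples, n_hidden=4, lr=0.1, epochs=2000):
    """Tiny one-hidden-layer feedforward net (tanh hidden, linear output),
    trained by per-sample gradient descent on squared error."""
    w1 = [random.uniform(-0.5, 0.5) for _ in range(n_hidden)]
    b1 = [0.0] * n_hidden
    w2 = [random.uniform(-0.5, 0.5) for _ in range(n_hidden)]
    b2 = 0.0
    for _ in range(epochs):
        for x, t in samples:
            h = [math.tanh(w1[i] * x + b1[i]) for i in range(n_hidden)]
            y = sum(w2[i] * h[i] for i in range(n_hidden)) + b2
            e = y - t  # output error
            for i in range(n_hidden):
                gh = e * w2[i] * (1.0 - h[i] ** 2)  # backprop through tanh
                w2[i] -= lr * e * h[i]
                w1[i] -= lr * gh * x
                b1[i] -= lr * gh
            b2 -= lr * e
    def predict(x):
        return sum(w2[i] * math.tanh(w1[i] * x + b1[i])
                   for i in range(n_hidden)) + b2
    return predict

# Toy target to emulate: y = sqrt(x) on [0, 1] (stands in for HEC-RAS output).
data = [(x / 10.0, math.sqrt(x / 10.0)) for x in range(11)]
net = train_mlfn(data)
mse = sum((net(x) - t) ** 2 for x, t in data) / len(data)
```

Once trained, such a network evaluates in microseconds, which is the point of replacing the hydrodynamic solver in real-time forecasting.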

  19. Linear approximation model network and its formation via ...

    Indian Academy of Sciences (India)

    To overcome the deficiency of `local model network' (LMN) techniques, an alternative `linear approximation model' (LAM) network approach is proposed. Such a network models a nonlinear or practical system with multiple linear models fitted along operating trajectories, where individual models are simply networked ...

  20. The Consistent Kinetics Porosity (CKP) Model: A Theory for the Mechanical Behavior of Moderately Porous Solids

    Energy Technology Data Exchange (ETDEWEB)

    BRANNON,REBECCA M.

    2000-11-01

    A theory is developed for the response of moderately porous solids (no more than ~20% void space) to high-strain-rate deformations. The model is consistent because each feature is incorporated in a manner that is mathematically compatible with the other features. Unlike simple p-α models, the onset of pore collapse depends on the amount of shear present. The user-specifiable yield function depends on pressure, effective shear stress, and porosity. The elastic part of the strain rate is linearly related to the stress rate, with nonlinear corrections from changes in the elastic moduli due to pore collapse. Plastically incompressible flow of the matrix material allows pore collapse and an associated macroscopic plastic volume change. The plastic strain rate due to pore collapse/growth is taken normal to the yield surface. If phase transformation and/or pore nucleation are simultaneously occurring, the inelastic strain rate will be non-normal to the yield surface. To permit hardening, the yield stress of the matrix material is treated as an internal state variable. Changes in porosity and matrix yield stress naturally cause the yield surface to evolve. The stress, porosity, and all other state variables vary in a consistent manner so that the stress remains on the yield surface throughout any quasistatic interval of plastic deformation. Dynamic loading allows the stress to exceed the yield surface via an overstress ordinary differential equation that is solved in closed form for better numerical accuracy. The part of the stress rate that causes no plastic work (i.e., the part that has a zero inner product with the stress deviator and the identity tensor) is given by the projection of the elastic stress rate orthogonal to the span of the stress deviator and the identity tensor. The model, which has been numerically implemented in MIG format, has been exercised under a wide array of extremal loading and unloading paths. As will be discussed in a companion

  1. Modeling Security Aspects of Network

    Science.gov (United States)

    Schoch, Elmar

    With more and more widespread usage of computer systems and networks, dependability becomes a paramount requirement. Dependability typically denotes tolerance or protection against all kinds of failures, errors and faults. Sources of failures can basically be accidental, e.g., in case of hardware errors or software bugs, or intentional due to some kind of malicious behavior. These intentional, malicious actions are subject of security. A more complete overview on the relations between dependability and security can be found in [31]. In parallel to the increased use of technology, misuse also has grown significantly, requiring measures to deal with it.

  2. Modeling and optimization of an electric power distribution network ...

    African Journals Online (AJOL)

    Modeling and optimization of an electric power distribution network planning system using ... of the network was modelled with non-linear mathematical expressions. ... given feasible locations, re-conductoring of existing feeders in the network, ...

  3. An evolving network model with modular growth

    International Nuclear Information System (INIS)

    Zou Zhi-Yun; Liu Peng; Lei Li; Gao Jian-Zhi

    2012-01-01

    In this paper, we propose an evolving network model that grows rapidly in units of modules, based on an analysis of the evolution characteristics of real complex networks. Each module is a small-world network containing several interconnected nodes, and the nodes between the modules are linked by preferential attachment on the degree of the nodes. We study the modularity measure of the proposed model, which can be adjusted by changing the ratio of the number of inner-module edges to the number of inter-module edges. Using mean-field theory, we develop an analytical expression for the degree distribution, which is verified by a numerical example and indicates that the degree distribution shows characteristics of the small-world network and the scale-free network distinctly at different segments. The clustering coefficient and the average path length of the network are simulated numerically, indicating that the network shows the small-world property and is affected little by the randomness of the new module. (interdisciplinary physics and related areas of science and technology)
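Growth in units of modules with degree-preferential inter-module links can be sketched as follows. This is an illustrative simplification of the scheme described above: each module is a small clique standing in for the paper's small-world module, and all sizes and link counts are made-up parameters:

```python
import random

random.seed(1)

def grow_modular_network(n_modules=20, module_size=5, inter_links=2):
    """Grow a network in units of modules: each new module is a small
    clique of `module_size` nodes; it attaches to the existing network
    with `inter_links` edges chosen by degree-preferential attachment."""
    edges = set()
    degree = {}

    def add_edge(u, v):
        if u != v and (u, v) not in edges and (v, u) not in edges:
            edges.add((u, v))
            degree[u] = degree.get(u, 0) + 1
            degree[v] = degree.get(v, 0) + 1

    next_id = 0
    for m in range(n_modules):
        new_nodes = list(range(next_id, next_id + module_size))
        next_id += module_size
        # Intra-module edges: fully connect the new small module.
        for i, u in enumerate(new_nodes):
            for v in new_nodes[i + 1:]:
                add_edge(u, v)
        # Inter-module edges: preferential attachment on node degree.
        if m > 0:
            old = [n for n in degree if n < new_nodes[0]]
            weights = [degree[n] for n in old]
            for _ in range(inter_links):
                target = random.choices(old, weights=weights)[0]
                add_edge(random.choice(new_nodes), target)
    return degree, edges

degree, edges = grow_modular_network()
```

Varying `inter_links` relative to the clique size changes the inner- to inter-module edge ratio, which is the knob the paper uses to tune modularity.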

  4. A Network Model of Credit Risk Contagion

    Directory of Open Access Journals (Sweden)

    Ting-Qiang Chen

    2012-01-01

    Full Text Available A network model of credit risk contagion is presented, in which the effects of the behavior of credit risk holders and of the financial market regulators, as well as the network structure, are considered. By introducing stochastic dominance theory, we discuss the effect mechanisms of the degree of individual relationship, individual attitude to credit risk contagion, the individual ability to resist credit risk contagion, the monitoring strength of the financial market regulators, and the network structure on credit risk contagion. Several derived and proved propositions are then verified through numerical simulations.

  5. Self consistent solution of the tJ model in the overdoped regime

    Science.gov (United States)

    Shastry, B. Sriram; Hansen, Daniel

    2013-03-01

    Detailed results from a recent microscopic theory of extremely correlated Fermi liquids, applied to the t-J model in two dimensions, are presented. The theory is to second order in a parameter λ, and is valid in the overdoped regime of the tJ model. The solution reported here is from Ref, where relevant equations given in Ref are self consistently solved for the square lattice. Thermodynamic variables and the resistivity are displayed at various densities and T for two sets of band parameters. The momentum distribution function and the renormalized electronic dispersion, its width and asymmetry are reported along principal directions of the zone. The optical conductivity is calculated. The electronic spectral function A (k , ω) probed in ARPES, is detailed with different elastic scattering parameters to account for the distinction between LASER and synchrotron ARPES. A high (binding) energy waterfall feature, sensitively dependent on the band hopping parameter t' is noted. This work was supported by DOE under Grant No. FG02-06ER46319.

  6. Study of impurity effects on CFETR steady-state scenario by self-consistent integrated modeling

    Science.gov (United States)

    Shi, Nan; Chan, Vincent S.; Jian, Xiang; Li, Guoqiang; Chen, Jiale; Gao, Xiang; Shi, Shengyu; Kong, Defeng; Liu, Xiaoju; Mao, Shifeng; Xu, Guoliang

    2017-12-01

    Impurity effects on the fusion performance of the China Fusion Engineering Test Reactor (CFETR) due to extrinsic seeding are investigated. An integrated 1.5D modeling workflow evolves the plasma equilibrium and all transport channels to steady state. The One Modeling Framework for Integrated Tasks (OMFIT) is used to couple the transport solver, the MHD equilibrium solver, and the source and sink calculations. A self-consistent impurity profile constructed using a steady-state background plasma, which satisfies quasi-neutrality and true steady state, is presented for the first time. Studies are performed based on an optimized fully non-inductive scenario with varying concentrations of Argon (Ar) seeding. It is found that fusion performance improves before dropping off with increasing Z_eff, while the confinement remains at a high level. Further analysis of transport for these plasmas shows that low-k ion temperature gradient modes dominate the turbulence. The decrease in linear growth rate and the resulting fluxes in all channels with increasing Z_eff can be traced to the impurity profile change by transport. The improvement in confinement levels off at higher Z_eff. Over the regime of study there is a competition between the suppressed transport and the increasing radiation that leads to a peak in the fusion performance at Z_eff ≈ 2.78 for CFETR. Extrinsic impurity seeding to control the divertor heat load will need to be optimized around this value for best fusion performance.

  7. Modeling of LH current drive in self-consistent elongated tokamak MHD equilibria

    International Nuclear Information System (INIS)

    Blackfield, D.T.; Devoto, R.S.; Fenstermacher, M.E.; Bonoli, P.T.; Porkolab, M.; Yugo, J.

    1989-01-01

    Calculations of non-inductive current drive have typically been performed with model MHD equilibria that are independently generated from an assumed toroidal current profile or from a fit to an experiment. Such a method can lead to serious errors, since the driven current can dramatically alter the equilibrium, and changes in the equilibrium B-fields can dramatically alter the current drive. The latter effect is quite pronounced in LH current drive, where the ray trajectories are sensitive to the local values of the magnetic shear and the density gradient. In order to overcome these problems, we have modified a LH simulation code to accommodate elongated plasmas with numerically generated equilibria. The new LH module has been added to the ACCOME code, which solves for current drive by neutral beams, electric fields, and bootstrap effects in a self-consistent 2-D equilibrium. We briefly describe the model in the next section and then present results of a study of LH current drive in ITER. 2 refs., 6 figs., 2 tabs

  8. Linking lipid architecture to bilayer structure and mechanics using self-consistent field modelling

    International Nuclear Information System (INIS)

    Pera, H.; Kleijn, J. M.; Leermakers, F. A. M.

    2014-01-01

    To understand how lipid architecture determines the lipid bilayer structure and its mechanics, we implement a molecularly detailed model that uses self-consistent field theory. This numerical model accurately predicts parameters such as Helfrich's mean and Gaussian bending moduli k_c and k̄ and the preferred monolayer curvature J_0^m, and also delivers structural membrane properties like the core thickness, and head group position and orientation. We studied how these mechanical parameters vary with system variations, such as lipid tail length, membrane composition, and those parameters that control the lipid tail and head group solvent quality. For the membrane composition, negatively charged phosphatidylglycerol (PG) or zwitterionic phosphatidylcholine (PC) and -ethanolamine (PE) lipids were used. In line with experimental findings, we find that the values of k_c and the area compression modulus k_A are always positive. They respond similarly to parameters that affect the core thickness, but differently to parameters that affect the head group properties. We found that the trends for k̄ and J_0^m can be rationalised by the concept of Israelachvili's surfactant packing parameter, and that both k̄ and J_0^m change sign with relevant parameter changes, although typically k̄ < 0 and J_0^m ≫ 0, especially at low ionic strengths. We anticipate that these changes lead to unstable membranes as these become vulnerable to pore formation or disintegration into lipid disks

  9. Self consistent MHD modeling of the solar wind from polar coronal holes

    International Nuclear Information System (INIS)

    Stewart, G. A.; Bravo, S.

    1996-01-01

    We have developed a 2D self-consistent MHD model for solar wind flow from antisymmetric magnetic geometries. We present results in the case of a photospheric magnetic field which has a dipolar configuration, in order to investigate some of the general characteristics of the wind at solar minimum. As in previous studies, we find that the magnetic configuration is that of a closed field region (a coronal helmet belt) around the solar equator, extending up to about 1.6 R_⊙, and two large open field regions centred over the poles (polar coronal holes), whose magnetic and plasma fluxes expand to fill both hemispheres in interplanetary space. In addition, we find that the different geometries of the magnetic field lines across each hole (from the almost radial central polar lines to the highly curved border equatorial lines) cause the solar wind to have greatly different properties depending on which region it flows from. We find that, even though our simplified model cannot produce realistic wind values, we can obtain a polar wind that is faster, less dense and hotter than the equatorial wind, and find that, close to the Sun, there exists a sharp transition between the two wind types. As these characteristics coincide with observations, we conclude that both fast and slow solar wind can originate from coronal holes: fast wind from the centre, slow wind from the border

  10. Quantum self-consistency of AdS×Σ brane models

    International Nuclear Information System (INIS)

    Flachi, Antonino; Pujolas, Oriol

    2003-01-01

    Continuing our previous work, we consider a class of higher-dimensional brane models with the topology of AdS_{D1+1} × Σ, where Σ is a one-parameter compact manifold and two branes of codimension one are located at the orbifold fixed points. We consider a setup where such a solution arises from Einstein-Yang-Mills theory and evaluate the one-loop effective potential induced by gauge fields and by a generic bulk scalar field. We show that this type of brane model resolves the gauge hierarchy between the Planck and electroweak scales through redshift effects due to the warp factor a = e^{-πkr}. The value of a is then fixed by minimizing the effective potential. We find that, as in the Randall-Sundrum case, the gauge field contribution to the effective potential stabilizes the hierarchy without fine-tuning as long as the Laplacian Δ_Σ on Σ has a zero eigenvalue. Scalar fields can stabilize the hierarchy depending on the mass and the nonminimal coupling. We also address the quantum self-consistency of the solution, showing that the classical brane solution is not spoiled by quantum effects

  11. The International Trade Network: weighted network analysis and modelling

    International Nuclear Information System (INIS)

    Bhattacharya, K; Mukherjee, G; Manna, S S; Saramäki, J; Kaski, K

    2008-01-01

    Tools of the theory of critical phenomena, namely the scaling analysis and universality, are argued to be applicable to large complex web-like network structures. Using a detailed analysis of the real data of the International Trade Network we argue that the scaled link weight distribution has an approximate log-normal distribution which remains robust over a period of 53 years. Another universal feature is observed in the power-law growth of the trade strength with gross domestic product, the exponent being similar for all countries. Using the 'rich-club' coefficient measure of the weighted networks it has been shown that the size of the rich-club controlling half of the world's trade is actually shrinking. While the gravity law is known to describe well the social interactions in the static networks of population migration, international trade, etc, here for the first time we studied a non-conservative dynamical model based on the gravity law which excellently reproduced many empirical features of the ITN
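The gravity law mentioned at the end can be stated as a tiny function: trade flow proportional to the product of the two countries' GDPs divided by a power of the distance between them. The exponents, the constant G, and the example numbers below are illustrative assumptions, not fitted values from the paper:

```python
def gravity_trade_flow(gdp_i, gdp_j, distance, G=1.0, alpha=1.0, beta=2.0):
    """Gravity-law trade flow: T_ij = G * (GDP_i * GDP_j)^alpha / d_ij^beta.
    With alpha = 1 and beta = 2 this mirrors Newtonian gravity."""
    return G * (gdp_i * gdp_j) ** alpha / distance ** beta

# Illustrative pair of economies: GDPs 100 and 50 (arbitrary units),
# separated by distance 10 (arbitrary units).
flow = gravity_trade_flow(100.0, 50.0, 10.0)
```

By construction the flow is symmetric in the two GDPs, doubles when either GDP doubles, and falls off quadratically with distance for beta = 2.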

  12. Keystone Business Models for Network Security Processors

    OpenAIRE

    Arthur Low; Steven Muegge

    2013-01-01

    Network security processors are critical components of high-performance systems built for cybersecurity. Development of a network security processor requires multi-domain experience in semiconductors and complex software security applications, and multiple iterations of both software and hardware implementations. Limited by the business models in use today, such an arduous task can be undertaken only by large incumbent companies and government organizations. Neither the “fabless semiconductor...

  13. Stochastic modeling and analysis of telecoms networks

    CERN Document Server

    Decreusefond, Laurent

    2012-01-01

    This book addresses the stochastic modeling of telecommunication networks, introducing the main mathematical tools for that purpose, such as Markov processes, real and spatial point processes and stochastic recursions, and presenting a wide list of results on stability, performance and comparison of systems. The authors propose a comprehensive mathematical construction of the foundations of stochastic network theory: Markov chains and continuous-time Markov chains are extensively studied using an original martingale-based approach. A complete presentation of stochastic recursions from an

  14. Decomposed Implicit Models of Piecewise - Linear Networks

    Directory of Open Access Journals (Sweden)

    J. Brzobohaty

    1992-05-01

    Full Text Available The general matrix form of the implicit description of a piecewise-linear (PWL network and the symbolic block diagram of the corresponding circuit model are proposed. Their decomposed forms enable us to determine quite separately the existence of the individual breakpoints of the resultant PWL characteristic and their coordinates using independent network parameters. For the two-diode and three-diode cases all the attainable types of the PWL characteristic are introduced.
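A continuous piecewise-linear characteristic of the kind described can be evaluated from its breakpoints and segment slopes using the standard max(0, x − bp) decomposition, in which each breakpoint contributes a slope change. A sketch; the saturating two-breakpoint example is made up, loosely in the spirit of a diode-limiter characteristic, and is not taken from the paper:

```python
def pwl(x, breakpoints, slopes, y0=0.0):
    """Evaluate a continuous piecewise-linear characteristic.
    `slopes[k]` applies on the k-th segment; `breakpoints` are the
    sorted x-coordinates where the slope changes.  Continuity fixes
    every segment's intercept:  f(x) = y0 + s0*x + sum over passed
    breakpoints of (s_next - s_prev) * (x - bp)."""
    y = y0 + slopes[0] * x
    for bp, (s_prev, s_next) in zip(breakpoints, zip(slopes, slopes[1:])):
        if x > bp:
            y += (s_next - s_prev) * (x - bp)
    return y

# Illustrative saturating characteristic: flat below -1, unit slope
# between -1 and 1, flat above 1 (three segments, two breakpoints).
bps, slp = [-1.0, 1.0], [0.0, 1.0, 0.0]
```

Each breakpoint's coordinate enters independently, which mirrors the decomposed form's ability to place breakpoints via independent network parameters.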

  15. Artificial Immune Networks: Models and Applications

    Directory of Open Access Journals (Sweden)

    Xian Shen

    2008-06-01

    Full Text Available Artificial Immune Systems (AIS), inspired by the natural immune system, have been applied to solving complex computational problems in classification, pattern recognition, and optimization. In this paper, the theory of the natural immune system is first briefly introduced. Next, we compare some well-known AIS and their applications. Several representative artificial immune network models are also discussed. Moreover, we demonstrate the applications of artificial immune networks in various engineering fields.

  16. Continuum Modeling of Biological Network Formation

    KAUST Repository

    Albi, Giacomo

    2017-04-10

    We present an overview of recent analytical and numerical results for the elliptic–parabolic system of partial differential equations proposed by Hu and Cai, which models the formation of biological transportation networks. The model describes the pressure field using a Darcy type equation and the dynamics of the conductance network under pressure force effects. Randomness in the material structure is represented by a linear diffusion term and conductance relaxation by an algebraic decay term. We first introduce micro- and mesoscopic models and show how they are connected to the macroscopic PDE system. Then, we provide an overview of analytical results for the PDE model, focusing mainly on the existence of weak and mild solutions and analysis of the steady states. The analytical part is complemented by extensive numerical simulations. We propose a discretization based on finite elements and study the qualitative properties of network structures for various parameter values.

  17. Adaptive-network models of collective dynamics

    Science.gov (United States)

    Zschaler, G.

    2012-09-01

    Complex systems can often be modelled as networks, in which their basic units are represented by abstract nodes and the interactions among them by abstract links. This network of interactions is the key to understanding emergent collective phenomena in such systems. In most cases, it is an adaptive network, which is defined by a feedback loop between the local dynamics of the individual units and the dynamical changes of the network structure itself. This feedback loop gives rise to many novel phenomena. Adaptive networks are a promising concept for the investigation of collective phenomena in different systems. However, they also present a challenge to existing modelling approaches and analytical descriptions due to the tight coupling between local and topological degrees of freedom. In this work, which is essentially my PhD thesis, I present a simple rule-based framework for the investigation of adaptive networks, using which a wide range of collective phenomena can be modelled and analysed from a common perspective. In this framework, a microscopic model is defined by the local interaction rules of small network motifs, which can be implemented in stochastic simulations straightforwardly. Moreover, an approximate emergent-level description in terms of macroscopic variables can be derived from the microscopic rules, which we use to analyse the system's collective and long-term behaviour by applying tools from dynamical systems theory. We discuss three adaptive-network models for different collective phenomena within our common framework. First, we propose a novel approach to collective motion in insect swarms, in which we consider the insects' adaptive interaction network instead of explicitly tracking their positions and velocities. We capture the experimentally observed onset of collective motion qualitatively in terms of a bifurcation in this non-spatial model. We find that three-body interactions are an essential ingredient for collective motion to emerge

  18. Network Design Models for Container Shipping

    DEFF Research Database (Denmark)

    Reinhardt, Line Blander; Kallehauge, Brian; Nielsen, Anders Nørrelund

    This paper presents a study of the network design problem in container shipping. The paper combines the network design and fleet assignment problem into a mixed integer linear programming model minimizing the overall cost. The major contributions of this paper are that the time of a vessel route...... is included in the calculation of the capacity and that an inhomogeneous fleet is modeled. The model also includes the cost of transshipment, which is one of the major costs for the shipping companies. The concept of pseudo simple routes is introduced to expand the set of feasible routes. The linearization...

  19. Biochemical Network Stochastic Simulator (BioNetS): software for stochastic modeling of biochemical networks

    Directory of Open Access Journals (Sweden)

    Elston Timothy C

    2004-03-01

    Full Text Available Abstract Background Intrinsic fluctuations due to the stochastic nature of biochemical reactions can have large effects on the response of biochemical networks. This is particularly true for pathways that involve transcriptional regulation, where generally there are two copies of each gene and the number of messenger RNA (mRNA) molecules can be small. Therefore, there is a need for computational tools for developing and investigating stochastic models of biochemical networks. Results We have developed the software package Biochemical Network Stochastic Simulator (BioNetS) for efficiently and accurately simulating stochastic models of biochemical networks. BioNetS has a graphical user interface that allows models to be entered in a straightforward manner, and allows the user to specify the type of random variable (discrete or continuous) for each chemical species in the network. The discrete variables are simulated using an efficient implementation of the Gillespie algorithm. For the continuous random variables, BioNetS constructs and numerically solves the appropriate chemical Langevin equations. The software package has been developed to scale efficiently with network size, thereby allowing large systems to be studied. BioNetS runs as a BioSpice agent and can be downloaded from http://www.biospice.org. BioNetS also can be run as a stand-alone package. All the required files are accessible from http://x.amath.unc.edu/BioNetS. Conclusions We have developed BioNetS to be a reliable tool for studying the stochastic dynamics of large biochemical networks. Important features of BioNetS are its ability to handle hybrid models that consist of both continuous and discrete random variables and its ability to model cell growth and division. We have verified the accuracy and efficiency of the numerical methods by considering several test systems.
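The Gillespie algorithm that BioNetS uses for discrete species can be sketched in a few lines. This is an illustrative minimal stochastic simulation of a birth-death mRNA model (production at a constant rate, first-order degradation), not BioNetS code; the rate constants are assumptions chosen for the example:

```python
import random

def gillespie_birth_death(k_prod, k_deg, x0, t_end, seed=1):
    """Gillespie SSA for a birth-death model of mRNA copy number:
    0 -> mRNA at rate k_prod; mRNA -> 0 at rate k_deg * x."""
    rng = random.Random(seed)
    t, x = 0.0, x0
    times, counts = [t], [x]
    while True:
        a_prod = k_prod                    # production propensity
        a_deg = k_deg * x                  # degradation propensity
        a_total = a_prod + a_deg
        t += rng.expovariate(a_total)      # exponential waiting time to next reaction
        if t > t_end:
            break
        if rng.random() * a_total < a_prod:
            x += 1                         # production event
        else:
            x -= 1                         # degradation event
        times.append(t)
        counts.append(x)
    return times, counts

# Steady-state mean copy number is k_prod / k_deg = 10 for these illustrative rates.
times, counts = gillespie_birth_death(k_prod=10.0, k_deg=1.0, x0=0, t_end=50.0)
```

For species with large copy numbers, a tool like BioNetS switches from this exact discrete scheme to the chemical Langevin approximation, which trades exactness for speed.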

  20. Self-consistent model of the low-latitude boundary layer

    International Nuclear Information System (INIS)

    Phan, T.D.; Sonnerup, B.U.Oe.; Lotko, W.

    1989-01-01

    A simple two-dimensional, steady-state, viscous model of the dawnside and duskside low-latitude boundary layer (LLBL) has been developed. It incorporates coupling to the ionosphere via field-aligned currents and associated field-aligned potential drops, governed by a simple conductance law, and it describes boundary layer currents, magnetic fields, and plasma flow in a self-consistent manner. The magnetic field induced by these currents leads to two effects: (1) a diamagnetic depression of the magnetic field in the equatorial region and (2) bending of the field lines into parabolas in the xz plane with their vertices in the equatorial plane, at z = 0, and pointing in the flow direction, i.e., tailward. Both effects are strongest at the magnetopause edge of the boundary layer and vanish at the magnetospheric edge. The diamagnetic depression corresponds to an excess of plasma pressure in the equatorial boundary layer near the magnetopause. The boundary layer structure is governed by a fourth-order, nonlinear, ordinary differential equation in which one nondimensional parameter, the Hartmann number M, appears. A second parameter, introduced via the boundary conditions, is a nondimensional flow velocity v0* at the magnetopause. Numerical results from the model are presented and the possible use of observations to determine the model parameters is discussed. The main new contribution of the study is to provide a better description of the field and plasma configuration in the LLBL itself and to clarify in quantitative terms the circumstances in which induced magnetic fields become important

  1. Characterization and Modeling of Network Traffic

    DEFF Research Database (Denmark)

    Shawky, Ahmed; Bergheim, Hans; Ragnarsson, Olafur

    2011-01-01

    -arrival time, IP addresses, port numbers and transport protocol are the only necessary parameters to model network traffic behaviour. In order to recreate this behaviour, a complex model is needed which is able to recreate traffic behaviour based on a set of statistics calculated from the parameters values...

  2. Electron beam charging of insulators: A self-consistent flight-drift model

    International Nuclear Information System (INIS)

    Touzin, M.; Goeuriot, D.; Guerret-Piecourt, C.; Juve, D.; Treheux, D.; Fitting, H.-J.

    2006-01-01

    Electron beam irradiation and the self-consistent charge transport in bulk insulating samples are described by means of a new flight-drift model and an iterative computer simulation. Ballistic secondary electron and hole transport is followed by electron and hole drifts, their possible recombination, and/or trapping in shallow and deep traps. The trap capture cross sections are of the Poole-Frenkel type, i.e. temperature and field dependent. As a main result, the spatial distributions of the currents j(x,t), charges ρ(x,t), field F(x,t), and potential V(x,t) are obtained in a self-consistent procedure, as well as the time-dependent secondary electron emission rate σ(t) and the surface potential V0(t). For bulk insulating samples the time-dependent distributions approach the final stationary state with j(x,t) = const = 0 and σ = 1. Especially for low electron beam energies E0, the surface potential can be controlled by means of the potential VG of a vacuum grid in front of the target surface. For high beam energies E0 = 10, 20, and 30 keV, high negative surface potentials V0 = -4, -14, and -24 kV are obtained, respectively. Besides open nonconductive samples, positive ion-covered samples and targets with a conducting and grounded layer (metal or carbon) on the surface have also been considered, as used in environmental scanning electron microscopy and common SEM in order to prevent charging. Indeed, in these cases the potential distributions V(x) are considerably smaller in magnitude and do not affect the incident electron beam, either by retarding field effects in front of the surface or within the bulk insulating sample. Thus the spatial scattering and excitation distributions are almost unaffected

  3. Agent based modeling of energy networks

    International Nuclear Information System (INIS)

    Gonzalez de Durana, José María; Barambones, Oscar; Kremers, Enrique; Varga, Liz

    2014-01-01

    Highlights: • A new approach for energy network modeling is designed and tested. • The agent-based approach is general and not technology dependent. • The models can be easily extended. • The range of applications encompasses small to large energy infrastructures. - Abstract: Attempts to model any present or future power grid face a huge challenge because a power grid is a complex system, with feedback and multi-agent behaviors, integrated by generation, distribution, storage and consumption systems, using various control and automation computing systems to manage electricity flows. Our approach to modeling is to build upon an established model of the low voltage electricity network which is tested and proven, by extending it to a generalized energy model. But, in order to address the crucial issues of energy efficiency, additional processes like energy conversion and storage, and further energy carriers, such as gas, heat, etc., besides the traditional electrical one, must be considered. Therefore a more powerful model, provided with enhanced nodes or conversion points, able to deal with multidimensional flows, is required. This article addresses the issue of modeling a local multi-carrier energy network. This problem can be considered as an extension of modeling a low voltage distribution network located in some urban or rural geographic area. But instead of using an external power flow analysis package to do the power flow calculations, as used in electric networks, in this work we integrate a multiagent algorithm to perform the task, concurrently with the other simulation tasks, and not only for the electric fluid but also for a number of additional energy carriers. As the model is mainly focused on system operation, generation and load models are not developed

  4. Towards three-dimensional continuum models of self-consistent along-strike megathrust segmentation

    Science.gov (United States)

    Pranger, Casper; van Dinther, Ylona; May, Dave; Le Pourhiet, Laetitia; Gerya, Taras

    2016-04-01

    into one algorithm. We are working towards presenting the first benchmarked 3D dynamic rupture models as an important step towards seismic cycle modelling of megathrust segmentation in a three-dimensional subduction setting with slow tectonic loading, self consistent fault development, and spontaneous seismicity.

  5. Achieving Consistent Near-Optimal Pattern Recognition Accuracy Using Particle Swarm Optimization to Pre-Train Artificial Neural Networks

    Science.gov (United States)

    Nikelshpur, Dmitry O.

    2014-01-01

    Similar to mammalian brains, Artificial Neural Networks (ANN) are universal approximators, capable of yielding near-optimal solutions to a wide assortment of problems. ANNs are used in many fields including medicine, internet security, engineering, retail, robotics, warfare, intelligence control, and finance. "ANNs have a tendency to get…

  6. Delay and Disruption Tolerant Networking MACHETE Model

    Science.gov (United States)

    Segui, John S.; Jennings, Esther H.; Gao, Jay L.

    2011-01-01

    To verify satisfaction of communication requirements imposed by unique missions, as early as 2000, the Communications Networking Group at the Jet Propulsion Laboratory (JPL) saw the need for an environment to support interplanetary communication protocol design, validation, and characterization. JPL's Multi-mission Advanced Communications Hybrid Environment for Test and Evaluation (MACHETE), described in Simulator of Space Communication Networks (NPO-41373) NASA Tech Briefs, Vol. 29, No. 8 (August 2005), p. 44, combines various commercial, non-commercial, and in-house custom tools for simulation and performance analysis of space networks. The MACHETE environment supports orbital analysis, link budget analysis, communications network simulations, and hardware-in-the-loop testing. As NASA is expanding its Space Communications and Navigation (SCaN) capabilities to support planned and future missions, building infrastructure to maintain services and developing enabling technologies, an important and broader role is seen for MACHETE in design-phase evaluation of future SCaN architectures. To support evaluation of the developing Delay Tolerant Networking (DTN) field and its applicability for space networks, JPL developed MACHETE models for DTN Bundle Protocol (BP) and Licklider/Long-haul Transmission Protocol (LTP). DTN is an Internet Research Task Force (IRTF) architecture providing communication in and/or through highly stressed networking environments such as space exploration and battlefield networks. Stressed networking environments include those with intermittent (predictable and unknown) connectivity, large and/or variable delays, and high bit error rates. To provide its services over existing domain specific protocols, the DTN protocols reside at the application layer of the TCP/IP stack, forming a store-and-forward overlay network. The key capabilities of the Bundle Protocol include custody-based reliability, the ability to cope with intermittent connectivity

  7. An artificial neural network model for periodic trajectory generation

    Science.gov (United States)

    Shankar, S.; Gander, R. E.; Wood, H. C.

    A neural network model based on biological systems was developed for potential robotic application. The model consists of three interconnected layers of artificial neurons or units: an input layer subdivided into state and plan units, an output layer, and a hidden layer between the two outer layers which serves to implement nonlinear mappings between the input and output activation vectors. Weighted connections are created between the three layers, and learning is effected by modifying these weights. Feedback connections between the output and the input state serve to make the network operate as a finite state machine. The activation vector of the plan units of the input layer emulates the supraspinal commands in biological central pattern generators in that different plan activation vectors correspond to different sequences or trajectories being recalled, even with different frequencies. Three trajectories were chosen for implementation, and learning was accomplished in 10,000 trials. The fault tolerant behavior, adaptiveness, and phase maintenance of the implemented network are discussed.

  8. A comprehensive Network Security Risk Model for process control networks.

    Science.gov (United States)

    Henry, Matthew H; Haimes, Yacov Y

    2009-02-01

    The risk of cyber attacks on process control networks (PCN) is receiving significant attention due to the potentially catastrophic extent to which PCN failures can damage the infrastructures and commodity flows that they support. Risk management addresses the coupled problems of (1) reducing the likelihood that cyber attacks would succeed in disrupting PCN operation and (2) reducing the severity of consequences in the event of PCN failure or manipulation. The Network Security Risk Model (NSRM) developed in this article provides a means of evaluating the efficacy of candidate risk management policies by modeling the baseline risk and assessing expectations of risk after the implementation of candidate measures. Where existing risk models fall short of providing adequate insight into the efficacy of candidate risk management policies due to shortcomings in their structure or formulation, the NSRM provides model structure and an associated modeling methodology that captures the relevant dynamics of cyber attacks on PCN for risk analysis. This article develops the NSRM in detail in the context of an illustrative example.

  9. Discrete dynamic modeling of cellular signaling networks.

    Science.gov (United States)

    Albert, Réka; Wang, Rui-Sheng

    2009-01-01

    Understanding signal transduction in cellular systems is a central issue in systems biology. Numerous experiments from different laboratories generate an abundance of individual components and causal interactions mediating environmental and developmental signals. However, for many signal transduction systems there is insufficient information on the overall structure and the molecular mechanisms involved in the signaling network. Moreover, lack of kinetic and temporal information makes it difficult to construct quantitative models of signal transduction pathways. Discrete dynamic modeling, combined with network analysis, provides an effective way to integrate fragmentary knowledge of regulatory interactions into a predictive mathematical model which is able to describe the time evolution of the system without the requirement for kinetic parameters. This chapter introduces the fundamental concepts of discrete dynamic modeling, particularly focusing on Boolean dynamic models. We describe this method step-by-step in the context of cellular signaling networks. Several variants of Boolean dynamic models including threshold Boolean networks and piecewise linear systems are also covered, followed by two examples of successful application of discrete dynamic modeling in cell biology.
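The synchronous Boolean update at the heart of such discrete dynamic models can be sketched in a few lines. The three-node motif and its rules below are hypothetical, chosen only to illustrate how a parameter-free logical model can produce oscillations (a negative feedback loop: input I activates A, A activates B, B inhibits A):

```python
def step(state, rules):
    """One synchronous Boolean update: every node is recomputed
    from the *previous* state via its logical rule."""
    return {node: rule(state) for node, rule in rules.items()}

# Hypothetical three-node signaling motif.
rules = {
    "I": lambda s: s["I"],                 # sustained input signal
    "A": lambda s: s["I"] and not s["B"],  # activated by I, inhibited by B
    "B": lambda s: s["A"],                 # activated by A
}

state = {"I": True, "A": False, "B": False}
trajectory = [state]
for _ in range(6):
    state = step(state, rules)
    trajectory.append(state)
# The negative feedback drives A and B around a period-4 limit cycle.
```

No kinetic parameters appear anywhere: the time evolution is fully determined by the wiring and the logical rules, which is precisely what makes the approach usable when rate constants are unknown.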

  10. Neural network modeling of associative memory: Beyond the Hopfield model

    Science.gov (United States)

    Dasgupta, Chandan

    1992-07-01

    A number of neural network models, in which fixed-point and limit-cycle attractors of the underlying dynamics are used to store and associatively recall information, are described. In the first class of models, a hierarchical structure is used to store an exponentially large number of strongly correlated memories. The second class of models uses limit cycles to store and retrieve individual memories. A neurobiologically plausible network that generates low-amplitude periodic variations of activity, similar to the oscillations observed in electroencephalographic recordings, is also described. Results obtained from analytic and numerical studies of the properties of these networks are discussed.
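The fixed-point storage and recall that these models build on is the classic Hopfield construction: Hebbian weights plus sign updates. A minimal sketch, with toy patterns chosen for illustration:

```python
import numpy as np

def train_hopfield(patterns):
    """Hebbian weight matrix for +/-1 patterns, with zero self-coupling."""
    n = patterns.shape[1]
    W = patterns.T @ patterns / n
    np.fill_diagonal(W, 0.0)
    return W

def recall(W, state, max_steps=10):
    """Synchronous sign updates until a fixed point (or the step limit)."""
    for _ in range(max_steps):
        new = np.sign(W @ state)
        new[new == 0] = 1          # break ties towards +1
        if np.array_equal(new, state):
            break
        state = new
    return state

patterns = np.array([[1, 1, 1, -1, -1, -1],
                     [1, -1, 1, -1, 1, -1]])
W = train_hopfield(patterns)
noisy = np.array([1, 1, 1, -1, -1, 1])   # first pattern with one flipped bit
recovered = recall(W, noisy)             # converges back to patterns[0]
```

The stored patterns are fixed-point attractors of the dynamics; the hierarchical and limit-cycle models described in the abstract generalize exactly this mechanism.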

  11. Modelling students' knowledge organisation: Genealogical conceptual networks

    Science.gov (United States)

    Koponen, Ismo T.; Nousiainen, Maija

    2018-04-01

    Learning scientific knowledge is largely based on understanding what its key concepts are and how they are related. The relational structure of concepts also affects how concepts are introduced in teaching scientific knowledge. We model here how students organise their knowledge when they represent their understanding of how physics concepts are related. The model is based on the assumptions that students use simple basic linking-motifs when introducing new concepts and mostly relate them to concepts that were introduced a few steps earlier, i.e. following a genealogical ordering. The resulting genealogical networks have relatively high local clustering coefficients of nodes but otherwise resemble networks obtained with an identical degree distribution of nodes but with random linking between them (i.e. the configuration model). However, a few key nodes having a special structural role emerge, and these nodes have higher than average communicability betweenness centralities. These features agree with the empirically found properties of students' concept networks.
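The local clustering coefficient used to characterise these concept networks can be computed directly from an adjacency list: for a node with k neighbours, it is the fraction of the k(k-1)/2 neighbour pairs that are themselves linked. A minimal sketch on a hypothetical four-concept network (the concept names are illustrative, not from the study):

```python
def local_clustering(adj, node):
    """Local clustering coefficient: fraction of neighbour pairs that are linked."""
    nbrs = adj[node]
    k = len(nbrs)
    if k < 2:
        return 0.0
    links = sum(1 for i, u in enumerate(nbrs) for v in nbrs[i + 1:] if v in adj[u])
    return 2 * links / (k * (k - 1))

# Hypothetical concept network: 'force' links to three concepts,
# two of which ('mass', 'acceleration') are also linked to each other.
adj = {
    "force": ["mass", "acceleration", "energy"],
    "mass": ["force", "acceleration"],
    "acceleration": ["force", "mass"],
    "energy": ["force"],
}
c = local_clustering(adj, "force")   # one linked pair out of three -> 1/3
```

Comparing such values against a degree-preserving random rewiring (the configuration model) is what lets the authors attribute the high clustering to the genealogical linking rather than to the degree sequence alone.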

  12. Modelling Users' Trust in Online Social Networks

    Directory of Open Access Journals (Sweden)

    Iacob Cătoiu

    2014-02-01

    Full Text Available Previous studies (McKnight, Lankton and Tripp, 2011; Liao, Lui and Chen, 2011) have shown the crucial role of trust when choosing to disclose sensitive information online. This is the case of online social network users, who must disclose a certain amount of personal data in order to gain access to these online services. Taking into account the privacy calculus model and the risk/benefit ratio, we propose a model of users' trust in online social networks with four variables. We have adapted metrics for the purpose of our study and we have assessed their reliability and validity. We use a Partial Least Squares (PLS) based structural equation modelling analysis, which validated all our initial assumptions, indicating that our three predictors (privacy concerns, perceived benefits and perceived risks) explain 48% of the variation of users' trust in online social networks, the outcome variable of our study. We also discuss the implications and further research opportunities of our study.

  13. The Devil in the Dark: A Fully Self-Consistent Seismic Model for Venus

    Science.gov (United States)

    Unterborn, C. T.; Schmerr, N. C.; Irving, J. C. E.

    2017-12-01

    The bulk composition and structure of Venus is unknown despite accounting for 40% of the mass of all the terrestrial planets in our Solar System. As we expand the scope of planetary science to include those planets around other stars, the lack of measurements of basic planetary properties such as moment of inertia, core-size and thermal profile for Venus hinders our ability to compare the potential uniqueness of the Earth and our Solar System to other planetary systems. Here we present fully self-consistent, whole-planet density and seismic velocity profiles calculated using the ExoPlex and BurnMan software packages for various potential Venusian compositions. Using these models, we explore the seismological implications of the different thermal and compositional initial conditions, taking into account phase transitions due to changes in pressure, temperature as well as composition. Using mass-radius constraints, we examine both the centre frequencies of normal mode oscillations and the waveforms and travel times of body waves. Seismic phases which interact with the core, phase transitions in the mantle, and shallower parts of Venus are considered. We also consider the detectability and transmission of these seismic waves from within the dense atmosphere of Venus. Our work provides coupled compositional-seismological reference models for the terrestrial planet in our Solar System of which we know the least. Furthermore, these results point to the potential wealth of fundamental scientific insights into Venus and Earth, as well as exoplanets, which could be gained by including a seismometer on future planetary exploration missions to Venus, the devil in the dark.

  14. Self-consistent modeling of radio-frequency plasma generation in stellarators

    Energy Technology Data Exchange (ETDEWEB)

    Moiseenko, V. E., E-mail: moiseenk@ipp.kharkov.ua; Stadnik, Yu. S., E-mail: stadnikys@kipt.kharkov.ua [National Academy of Sciences of Ukraine, National Science Center Kharkov Institute of Physics and Technology (Ukraine); Lysoivan, A. I., E-mail: a.lyssoivan@fz-juelich.de [Royal Military Academy, EURATOM-Belgian State Association, Laboratory for Plasma Physics (Belgium); Korovin, V. B. [National Academy of Sciences of Ukraine, National Science Center Kharkov Institute of Physics and Technology (Ukraine)

    2013-11-15

    A self-consistent model of radio-frequency (RF) plasma generation in stellarators in the ion cyclotron frequency range is described. The model includes equations for the particle and energy balance and boundary conditions for Maxwell’s equations. The equation of charged particle balance takes into account the influx of particles due to ionization and their loss via diffusion and convection. The equation of electron energy balance takes into account the RF heating power source, as well as energy losses due to the excitation and electron-impact ionization of gas atoms, energy exchange via Coulomb collisions, and plasma heat conduction. The deposited RF power is calculated by solving the boundary problem for Maxwell’s equations. When describing the dissipation of the energy of the RF field, collisional absorption and Landau damping are taken into account. At each time step, Maxwell’s equations are solved for the current profiles of the plasma density and plasma temperature. The calculations are performed for a cylindrical plasma. The plasma is assumed to be axisymmetric and homogeneous along the plasma column. The system of balance equations is solved using the Crank-Nicolson scheme. Maxwell’s equations are solved in a one-dimensional approximation by using the Fourier transformation along the azimuthal and longitudinal coordinates. Results of simulations of RF plasma generation in the Uragan-2M stellarator by using a frame antenna operating at frequencies lower than the ion cyclotron frequency are presented. The calculations show that the slow wave generated by the antenna is efficiently absorbed at the periphery of the plasma column, due to which only a small fraction of the input power reaches the confinement region. As a result, the temperature on the axis of the plasma column remains low, whereas at the periphery it is substantially higher. This leads to strong absorption of the RF field at the periphery via the Landau mechanism.
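The Crank-Nicolson time stepping used for the balance equations can be illustrated on the simplest parabolic problem, 1-D diffusion u_t = D u_xx. The grid, coefficients, and Dirichlet boundary handling below are illustrative assumptions, not the paper's solver:

```python
import numpy as np

def crank_nicolson_diffusion(u0, D, dx, dt, steps):
    """Crank-Nicolson stepping for u_t = D u_xx with fixed (Dirichlet) ends:
    solve A u^{n+1} = B u^{n}, averaging the spatial operator over both levels."""
    n = len(u0)
    r = D * dt / (2.0 * dx ** 2)
    main = np.ones(n)
    off = np.ones(n - 1)
    A = np.diag((1 + 2 * r) * main) + np.diag(-r * off, 1) + np.diag(-r * off, -1)
    B = np.diag((1 - 2 * r) * main) + np.diag(r * off, 1) + np.diag(r * off, -1)
    for M in (A, B):               # identity rows pin the boundary values
        M[0, :] = 0.0
        M[-1, :] = 0.0
        M[0, 0] = M[-1, -1] = 1.0
    u = u0.copy()
    for _ in range(steps):
        u = np.linalg.solve(A, B @ u)
    return u

x = np.linspace(0.0, 1.0, 51)
u0 = np.sin(np.pi * x)             # fundamental mode, decays as exp(-pi^2 D t)
u = crank_nicolson_diffusion(u0, D=1.0, dx=x[1] - x[0], dt=1e-3, steps=100)
```

The scheme is second-order accurate in time and unconditionally stable, which is why it is a common choice for coupled balance equations where the time step is set by physics rather than by a stability limit.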

  15. A Model of Network Porosity

    Science.gov (United States)

    2016-02-04

    of complex systems [1]. Although the ODD protocol was originally intended for individual-based or agent-based models (ABM), we adopt this protocol for... applies to information transfer between air-gapped systems. Trust relationships between devices (e.g. a trust relationship created by a domain controller... prevention systems, and data leakage protection systems. 2.2 ATTACKER The model specifies an attacker who gains access to internal enclaves by

  16. Reproducibility and consistency of proteomic experiments on natural populations of a non-model aquatic insect.

    Science.gov (United States)

    Hidalgo-Galiana, Amparo; Monge, Marta; Biron, David G; Canals, Francesc; Ribera, Ignacio; Cieslak, Alexandra

    2014-01-01

    Population proteomics has a great potential to address evolutionary and ecological questions, but its use in wild populations of non-model organisms is hampered by uncontrolled sources of variation. Here we compare the response to temperature extremes of two geographically distant populations of a diving beetle species (Agabus ramblae) using 2-D DIGE. After one week of acclimation in the laboratory under standard conditions, a third of the specimens of each population were placed at either 4 or 27°C for 12 h, with another third left as a control. We then compared the protein expression level of three replicated samples of 2-3 specimens for each treatment. Within each population, variation between replicated samples of the same treatment was always lower than variation between treatments, except for some control samples that retained a wider range of expression levels. The two populations had a similar response, without significant differences in the number of protein spots over- or under-expressed in the pairwise comparisons between treatments. We identified exemplary proteins among those differently expressed between treatments, which proved to be proteins known to be related to thermal response or stress. Overall, our results indicate that specimens collected in the wild are suitable for proteomic analyses, as the additional sources of variation were not enough to mask the consistency and reproducibility of the response to the temperature treatments.

  17. A consistent model for the equilibrium thermodynamic functions of partially ionized flibe plasma with Coulomb corrections

    International Nuclear Information System (INIS)

    Zaghloul, Mofreh R.

    2003-01-01

    Flibe (2LiF-BeF2) is a molten salt that has been chosen as the coolant and breeding material in many design studies of the inertial confinement fusion (ICF) chamber. Flibe plasmas are to be generated in the ICF chamber in a wide range of temperatures and densities. These plasmas are more complex than the plasma of any single chemical species. Nevertheless, the composition and thermodynamic properties of the resulting flibe plasmas are needed for the gas dynamics calculations and the determination of other design parameters in the ICF chamber. In this paper, a simple consistent model for determining the detailed plasma composition and thermodynamic functions of high-temperature, fully dissociated and partially ionized flibe gas is presented and used to calculate different thermodynamic properties of interest to fusion applications. The computed properties include the average ionization state, kinetic pressure, internal energy, specific heats, and adiabatic exponent, as well as the sound speed. The presented results are computed under the assumptions of local thermodynamic equilibrium (LTE) and electro-neutrality. A criterion for the validity of the LTE assumption is presented and applied to the computed results. Other attempts in the literature are assessed, with their implied inaccuracies pointed out and discussed

  18. A fully kinetic, self-consistent particle simulation model of the collisionless plasma--sheath region

    International Nuclear Information System (INIS)

    Procassini, R.J.; Birdsall, C.K.; Morse, E.C.

    1990-01-01

    A fully kinetic particle-in-cell (PIC) model is used to self-consistently determine the steady-state potential profile in a collisionless plasma that contacts a floating, absorbing boundary. To balance the flow of particles to the wall, a distributed source region is used to inject particles into the one-dimensional system. The effect of the particle source distribution function on the source region and collector sheath potential drops, and particle velocity distributions is investigated. The ion source functions proposed by Emmert et al. [Phys. Fluids 23, 803 (1980)] and Bissell and Johnson [Phys. Fluids 30, 779 (1987)] (and various combinations of these) are used for the injection of both ions and electrons. The values of the potential drops obtained from the PIC simulations are compared to those from the theories of Emmert et al., Bissell and Johnson, and Scheuer and Emmert [Phys. Fluids 31, 3645 (1988)], all of which assume that the electron density is related to the plasma potential via the Boltzmann relation. The values of the source region and total potential drop are found to depend on the choice of the electron source function, as well as the ion source function. The question of an infinite electric field at the plasma--sheath interface, which arises in the analyses of Bissell and Johnson and Scheuer and Emmert, is also addressed

  19. Comprehensive and fully self-consistent modeling of modern semiconductor lasers

    International Nuclear Information System (INIS)

    Nakwaski, W.; Sarzał, R. P.

    2016-01-01

    The fully self-consistent model of modern semiconductor lasers used to design their advanced structures and to understand more deeply their properties is given in the present paper. Operation of semiconductor lasers depends not only on many optical, electrical, thermal, recombination, and sometimes mechanical phenomena taking place within their volumes but also on numerous mutual interactions between these phenomena. Their experimental investigation is quite complex, mostly because of miniature device sizes. Therefore, the most convenient and exact method to analyze expected laser operation and to determine laser optimal structures for various applications is to examine the details of their performance with the aid of a simulation of laser operation in various considered conditions. Such a simulation of an operation of semiconductor lasers is presented in this paper in a full complexity of all mutual interactions between the above individual physical processes. In particular, the hole-burning effect has been discussed. The impacts on laser performance introduced by oxide apertures (their sizes and localization) have been analyzed in detail. Also, some important details concerning the operation of various types of semiconductor lasers are discussed. The results of some applications of semiconductor lasers are shown for successive laser structures. (paper)

  20. A self-consistent first-principle based approach to model carrier mobility in organic materials

    International Nuclear Information System (INIS)

    Meded, Velimir; Friederich, Pascal; Symalla, Franz; Neumann, Tobias; Danilov, Denis; Wenzel, Wolfgang

    2015-01-01

    Transport through thin organic amorphous films, utilized in OLEDs and OPVs, has been a challenge to model by using ab-initio methods. Charge carrier mobility depends strongly on the disorder strength and reorganization energy, both of which are significantly affected by the details of the environment of each molecule. Here we present a multi-scale approach to describe carrier mobility in which the materials morphology is generated using DEPOSIT, a Monte Carlo based atomistic simulation approach, or, alternatively, by molecular dynamics calculations performed with GROMACS. From this morphology we extract the material-specific hopping rates, as well as the on-site energies, using a fully self-consistent embedding approach to compute the electronic structure parameters, which are then used in an analytic expression for the carrier mobility. We apply this strategy to compute the carrier mobility for a set of widely studied molecules and obtain good agreement between experiment and theory, varying over several orders of magnitude in the mobility, without any freely adjustable parameters. The work focuses on the quantum mechanical step of the multi-scale workflow, explains the concept along with the recently published workflow optimization, which combines density functional with semi-empirical tight-binding approaches. This is followed by a discussion of the analytic formula and its agreement with established percolation fits as well as kinetic Monte Carlo numerical approaches. Finally, we sketch a unified multi-disciplinary approach that integrates materials science simulation and high performance computing, developed within the EU project MMM@HPC

  1. Modeling and optimization of potable water network

    Energy Technology Data Exchange (ETDEWEB)

    Djebedjian, B.; Rayan, M.A. [Mansoura Univ., El-Mansoura (Egypt); Herrick, A. [Suez Canal Authority, Ismailia (Egypt)

    2000-07-01

    Software was developed to optimize the design of water distribution systems and pipe networks. It was based on a mathematical model treating looped networks, while satisfying all the imposed constraints, such as pipe diameter and nodal pressure. The optimum network configuration and cost are determined from parameters such as pipe diameter, flow rate, corresponding pressure, and hydraulic losses. It must be understood that the minimum cost is relative to the objective function selected; the choice of objective function often depends on the operating policies of a particular company. The optimization was carried out with a non-linear technique. To solve the optimal network design problem, the model was derived using the sequential unconstrained minimization technique (SUMT) of Fiacco and McCormick, which decreased the number of iterations required. The pipe diameters initially assumed were successively adjusted to correspond to existing commercial pipe diameters. The technique was then applied to a two-loop network without pumps or valves. Fed by gravity, it comprised eight pipes, each 1000 m long. The first evaluation of the method proved satisfactory, although, as with other methods, it failed to find the global optimum. In the future, research efforts will be directed to the optimization of networks with pumps and reservoirs. 24 refs., 3 tabs., 1 fig.
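
The Fiacco-McCormick SUMT idea is to replace the constrained design problem with a sequence of unconstrained ones, adding an interior penalty on the constraints and progressively shrinking its weight. A minimal one-variable sketch, with a hypothetical pipe-cost curve and head-loss constraint rather than the paper's looped-network model:

```python
import math

def sumt_minimize(f, g, lo, hi, r0=1.0, shrink=0.1, rounds=6, grid=2000):
    """Fiacco-McCormick interior-penalty (SUMT) sketch in one variable.

    Minimizes f(x) subject to g(x) > 0 by repeatedly minimizing
    f(x) + r/g(x) over a grid while shrinking the penalty weight r.
    """
    r = r0
    best = None
    for _ in range(rounds):
        candidates = []
        for i in range(grid + 1):
            x = lo + (hi - lo) * i / grid
            gx = g(x)
            if gx > 1e-9:  # interior (feasible) points only
                candidates.append((f(x) + r / gx, x))
        best = min(candidates)[1]
        r *= shrink
    return best

# Toy pipe-sizing problem: cost grows with diameter, head loss falls with it.
K, H_MAX = 32.0, 1.0                      # hypothetical hydraulic constants
cost = lambda d: 10.0 * d ** 1.5          # pipe cost vs. diameter
slack = lambda d: H_MAX - K / d ** 5      # pressure constraint g(d) > 0

d_opt = sumt_minimize(cost, slack, lo=0.5, hi=5.0)
d_exact = (K / H_MAX) ** 0.2              # the constraint is active at optimum
```

As the penalty weight shrinks, the unconstrained minimizer slides toward the boundary of the feasible region, which is where the true constrained optimum sits.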

  2. On the Modeling and Analysis of Heterogeneous Radio Access Networks using a Poisson Cluster Process

    DEFF Research Database (Denmark)

    Suryaprakash, Vinay; Møller, Jesper; Fettweis, Gerhard P.

    processes, some of which are alluded to (later) in this paper. We model a heterogeneous network consisting of two types of base stations by using a particular Poisson cluster process model. The main contributions are two-fold. First, a complete description of the interference in heterogeneous networks...

  3. Modelling dendritic ecological networks in space: An integrated network perspective

    Science.gov (United States)

    Erin E. Peterson; Jay M. Ver Hoef; Dan J. Isaak; Jeffrey A. Falke; Marie-Josee Fortin; Chris E. Jordan; Kristina McNyset; Pascal Monestiez; Aaron S. Ruesch; Aritra Sengupta; Nicholas Som; E. Ashley Steel; David M. Theobald; Christian E. Torgersen; Seth J. Wenger

    2013-01-01

    Dendritic ecological networks (DENs) are a unique form of ecological networks that exhibit a dendritic network topology (e.g. stream and cave networks or plant architecture). DENs have a dual spatial representation; as points within the network and as points in geographical space. Consequently, some analytical methods used to quantify relationships in other types of...

  4. PREDIKSI FOREX MENGGUNAKAN MODEL NEURAL NETWORK

    Directory of Open Access Journals (Sweden)

    R. Hadapiningradja Kusumodestoni

    2015-11-01

    Full Text Available ABSTRACT: Prediction is one of the most important techniques in the forex business. Decisions based on prediction are crucial, because prediction helps estimate future forex values and can thereby reduce the risk of loss. The aim of this study is to predict forex using a neural network model on 1-minute time-series data, in order to determine the prediction accuracy and thereby reduce risk in the forex business. The research method comprises data collection followed by training, learning, and testing with a neural network. Evaluation shows that the neural network algorithm predicts forex with an accuracy of 0.431 +/- 0.096, which can help reduce risk in the forex business. Keywords: prediction, forex, neural network.
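
A common way to turn a 1-minute price series into supervised training examples for such a network is a sliding window; the paper does not detail its preprocessing, so this is a generic sketch:

```python
def make_windows(series, window):
    """Turn a price series into (features, target) pairs: each sample is
    `window` past prices and the target is the price one step ahead."""
    X, y = [], []
    for i in range(len(series) - window):
        X.append(series[i:i + window])
        y.append(series[i + window])
    return X, y

# Hypothetical 1-minute closing prices.
prices = [1.10, 1.12, 1.11, 1.13, 1.15, 1.14]
X, y = make_windows(prices, window=3)
```

Each row of X would be fed to the network's input layer and the matching entry of y used as the training target.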

  5. Artificial neural network cardiopulmonary modeling and diagnosis

    Science.gov (United States)

    Kangas, Lars J.; Keller, Paul E.

    1997-01-01

    The present invention is a method of diagnosing a cardiopulmonary condition in an individual by comparing data from a progressive multi-stage test for the individual to a non-linear multi-variate model, preferably a recurrent artificial neural network having sensor fusion. The present invention relies on a cardiovascular model developed from physiological measurements of an individual. Any differences between the modeled parameters and the parameters of an individual at a given time are used for diagnosis.

  6. Multiscale Modeling at Nanointerfaces: Polymer Thin Film Materials Discovery via Thermomechanically Consistent Coarse Graining

    Science.gov (United States)

    Hsu, David D.

    Due to their high ratio of nanointerfacial area to volume, the properties of "nanoconfined" polymer thin films, blends, and composites become highly altered compared to their bulk homopolymer analogues. Understanding the structure-property mechanisms underlying this effect is an active area of research. However, despite extensive work, a fundamental framework for predicting the local and system-averaged thermomechanical properties as a function of configuration and polymer species has yet to be established. Towards bridging this gap, here we present a novel, systematic coarse-graining (CG) method which is able to capture quantitatively the thermomechanical properties of real polymer systems in bulk and in nanoconfined geometries. This method, which we call thermomechanically consistent coarse-graining (TCCG), is a two-bead-per-monomer CG hybrid approach in which bonded interactions are optimized to match the atomistic structure via the Iterative Boltzmann Inversion (IBI) method, and nonbonded interactions are tuned to macroscopic targets through parametric studies. We validate the TCCG method by systematically developing coarse-grain models for a group of five specialized methacrylate-based polymers including poly(methyl methacrylate) (PMMA). Good correlation with bulk all-atom (AA) simulations and experiments is found for the temperature-dependent glass transition temperature (Tg), Flory-Fox scaling relationships, self-diffusion coefficients of liquid monomers, and modulus of elasticity. We also apply the TCCG method to bulk polystyrene (PS) using a comparable CG bead-mapping strategy. The model demonstrates chain stiffness commensurate with experiments, and we utilize a density-correction term to improve the transferability of the elastic modulus over a 500 K range. Additionally, the PS and PMMA models capture the unexplained, characteristically dissimilar scaling of Tg with the thickness of free-standing films seen in experiments. Using vibrational
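
In IBI, the tabulated CG pair potential is corrected by the Boltzmann inversion of the ratio between the current and target radial distribution functions. A minimal sketch of the standard update rule follows; the damping factor alpha and any smoothing are generic implementation choices, not taken from this work.

```python
import math

def ibi_update(V, g_current, g_target, kT=1.0, alpha=1.0):
    """One Iterative Boltzmann Inversion step for a tabulated pair potential:

        V_{i+1}(r) = V_i(r) + alpha * kT * ln( g_i(r) / g_target(r) )

    Bins where either RDF vanishes are left unchanged.
    """
    V_new = list(V)
    for j, (gc, gt) in enumerate(zip(g_current, g_target)):
        if gc > 0.0 and gt > 0.0:
            V_new[j] = V[j] + alpha * kT * math.log(gc / gt)
    return V_new

# Where the CG model is over-structured (g_i > g_target) the potential is
# raised, pushing beads apart; where it is under-structured it is lowered.
V0 = [0.0, 0.0, 0.0]
V1 = ibi_update(V0, g_current=[1.2, 1.0, 0.8], g_target=[1.0, 1.0, 1.0])
```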

  7. Green Network Planning Model for Optical Backbones

    DEFF Research Database (Denmark)

    Gutierrez Lopez, Jose Manuel; Riaz, M. Tahir; Jensen, Michael

    2010-01-01

    Communication networks are becoming more essential for our daily lives and critically important for industry and governments. The intense growth in the backbone traffic implies an increment of the power demands of the transmission systems. This power usage might have a significant negative effect on the environment in general. In network planning there are existing planning models focused on QoS provisioning, investment minimization or combinations of both and other parameters. But there is a lack of a model for designing green optical backbones. This paper presents novel ideas to be able to define ...

  8. A Model for Telestroke Network Evaluation

    DEFF Research Database (Denmark)

    Storm, Anna; Günzel, Franziska; Theiss, Stephan

    2011-01-01

    ... analysis lacking, current telestroke reimbursement by third-party payers is limited to special contracts and not included in the regular billing system. Based on a systematic literature review and expert interviews with health care economists, third-party payers and neurologists, a Markov model was developed from the third-party payer perspective. In principle, it enables telestroke networks to conduct cost-effectiveness studies, because the majority of the required data can be extracted from health insurance companies’ databases and the telestroke network itself. The model presents a basis ...

  9. Multiobjective Sampling Design for Calibration of Water Distribution Network Model Using Genetic Algorithm and Neural Network

    Directory of Open Access Journals (Sweden)

    Kourosh Behzadian

    2008-03-01

    Full Text Available In this paper, a novel multiobjective optimization model is presented for selecting optimal locations in the water distribution network (WDN) with the aim of installing pressure loggers. The pressure data collected at the optimal locations will later be used in the calibration of the proposed WDN model. The objective functions consist of maximizing the calibrated model's prediction accuracy and minimizing the total cost of the sampling design. In order to decrease the model run time, an optimization model has been developed using a multiobjective genetic algorithm and an adaptive neural network (MOGA-ANN). Neural networks (NNs) are initially trained after a number of initial GA generations, and are periodically retrained and updated after generation of a specified number of full model-analyzed solutions. Trained NNs replace the fitness evaluation of some chromosomes within the GA progress. A cache prevents objective-function evaluation of repeated chromosomes within the GA. Optimal solutions are obtained as the Pareto-optimal front with respect to the two objective functions. Results show that incorporating NNs in MOGA to approximate portions of the chromosomes' fitness in each generation leads to considerable savings in model run time, and can be promising for reducing run time in optimization models with significant computational effort.
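
The caching idea is straightforward to sketch: memoize the expensive full-model fitness so repeated chromosomes inside the GA cost nothing. The objective below is a toy stand-in for the hydraulic model run, not the authors' code.

```python
import random

def make_cached_fitness(fitness):
    """Wrap an expensive fitness function with a cache keyed on the
    chromosome, so repeated chromosomes are never re-analyzed."""
    cache = {}
    calls = {"full_model": 0}
    def evaluate(chrom):
        key = tuple(chrom)
        if key not in cache:
            calls["full_model"] += 1     # one full hydraulic-model run
            cache[key] = fitness(chrom)
        return cache[key]
    return evaluate, calls

# Toy stand-in for the full model (hypothetical objective).
expensive = lambda chrom: -sum(chrom)

evaluate, calls = make_cached_fitness(expensive)
random.seed(1)
population = [[random.randint(0, 1) for _ in range(4)] for _ in range(50)]
scores = [evaluate(c) for c in population]
# 50 chromosomes of 4 bits can take at most 16 distinct values,
# so the full model runs at most 16 times for the whole population.
```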

  10. PROJECT ACTIVITY ANALYSIS WITHOUT THE NETWORK MODEL

    Directory of Open Access Journals (Sweden)

    S. Munapo

    2012-01-01

    Full Text Available

    ENGLISH ABSTRACT: This paper presents a new procedure for analysing and managing activity sequences in projects. The new procedure determines critical activities, the critical path, start times, free floats, crash limits, and other useful information without the use of the network model. Even though network models have been used successfully in project management so far, there are weaknesses associated with their use. A network is not easy to generate, and the dummies usually associated with it make the network diagram complex – and dummy activities have no meaning in the original project management problem. The network model for projects can be avoided while still obtaining all the useful information required for project management. All that is required are the activities, their accurate durations, and their predecessors.

    AFRIKAANS ABSTRACT: The research describes a new method for analysing and managing the sequential activities of projects. The proposed method determines critical activities, the critical path, start times, float, crashing, and other quantities without the use of a network model. The method performs satisfactorily in practice and avoids the administrative burden of the traditional network models.
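
The inputs and outputs described above (activities, durations, and predecessors in; earliest starts, project duration, and critical activities out) can be computed without drawing any network diagram. This is a generic critical-path computation, not the authors' specific procedure:

```python
def critical_path(activities):
    """Critical-path analysis straight from (duration, predecessors) data,
    with no activity-on-arrow diagram and no dummy activities.

    activities: {name: (duration, [predecessor names])}
    Returns earliest starts, project duration, and the critical activities.
    """
    # Earliest starts via repeated relaxation (fine for small projects).
    es = {a: 0 for a in activities}
    for _ in activities:
        for a, (dur, preds) in activities.items():
            es[a] = max([es[p] + activities[p][0] for p in preds], default=0)
    finish = max(es[a] + d for a, (d, _) in activities.items())
    # Latest starts via a backward pass.
    ls = {a: finish - d for a, (d, _) in activities.items()}
    for _ in activities:
        for a, (dur, preds) in activities.items():
            for p in preds:
                ls[p] = min(ls[p], ls[a] - activities[p][0])
    # Zero total float marks the critical activities.
    critical = sorted(a for a in activities if es[a] == ls[a])
    return es, finish, critical

# Hypothetical five-activity project.
acts = {
    "A": (3, []), "B": (2, []),
    "C": (4, ["A"]), "D": (1, ["A", "B"]),
    "E": (2, ["C", "D"]),
}
es, duration, crit = critical_path(acts)
```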

  11. Quantum-Like Bayesian Networks for Modeling Decision Making

    Directory of Open Access Journals (Sweden)

    Catarina eMoreira

    2016-01-01

    Full Text Available In this work, we explore an alternative quantum structure to perform quantum probabilistic inferences to accommodate the paradoxical findings of the Sure Thing Principle. We propose a Quantum-Like Bayesian Network, which consists of replacing classical probabilities with quantum probability amplitudes. However, since this approach suffers from the problem of exponential growth of quantum parameters, we also propose a similarity heuristic that automatically fits the quantum parameters through vector similarities. This makes the proposed model general and predictive, in contrast to the current state-of-the-art models, which cannot be generalized to more complex decision scenarios and only provide an explanatory account of the observed paradoxes. In the end, the proposed model is a nonparametric method for estimating inference effects from a statistical point of view, and is simpler than the previous quantum dynamic and quantum-like models proposed in the literature. We tested the proposed network on several empirical data sets from the literature, mainly from the Prisoner's Dilemma game and the Two Stage Gambling game. The results show that the proposed quantum Bayesian Network is a general method that can accommodate violations of the laws of classical probability theory and make accurate predictions regarding human decision-making in these scenarios.
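
The central move, replacing probabilities by amplitudes, fits in a few lines: with an interference phase theta, the quantum-like total probability of an outcome across two unobserved conditions gains a 2*sqrt(p1*p2)*cos(theta) term relative to the classical law of total probability. Here theta plays the role of the free quantum parameter that the proposed similarity heuristic would fit; the numbers are illustrative.

```python
import cmath, math

def classical_total(p1, p2):
    """Classical law of total probability over two exclusive conditions."""
    return p1 + p2

def quantum_total(p1, p2, theta):
    """Quantum-like law: probabilities become amplitudes, and an
    interference term 2*sqrt(p1*p2)*cos(theta) appears in |a1 + a2|^2."""
    a1 = math.sqrt(p1)
    a2 = math.sqrt(p2) * cmath.exp(1j * theta)
    return abs(a1 + a2) ** 2

# Destructive interference suppresses the total probability, which is how
# such models accommodate Sure-Thing-Principle violations.
p_destructive = quantum_total(0.2, 0.3, theta=math.pi)
```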

  12. Mobility Models for Next Generation Wireless Networks Ad Hoc, Vehicular and Mesh Networks

    CERN Document Server

    Santi, Paolo

    2012-01-01

    Mobility Models for Next Generation Wireless Networks: Ad Hoc, Vehicular and Mesh Networks provides the reader with an overview of mobility modelling, encompassing both theoretical and practical aspects related to the challenging mobility modelling task. It also: provides up-to-date coverage of mobility models for next generation wireless networks; offers an in-depth discussion of the most representative mobility models for major next generation wireless network application scenarios, including WLAN/mesh networks, vehicular networks, wireless sensor networks, and

  13. Modeling Renewable Penertration Using a Network Economic Model

    Science.gov (United States)

    Lamont, A.

    2001-03-01

    This paper evaluates the accuracy of a network economic modeling approach in designing energy systems having renewable and conventional generators. The network approach models the system as a network of processes such as demands, generators, markets, and resources. The model reaches a solution by exchanging price and quantity information between the nodes of the system. This formulation is very flexible, and it takes very little time to build and modify models. This paper reports an experiment designing a system with photovoltaic, base fossil, and peak fossil generators. The level of PV penetration as a function of its price, and the capacities of the fossil generators, were determined using both the network approach and an exact, analytic approach. It is found that the two methods agree very closely in terms of the optimal capacities and are nearly identical in terms of annual system costs.
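
The price/quantity message passing between nodes can be sketched as Walrasian tatonnement: a market node raises its price when demand exceeds supply and lowers it otherwise, until the two balance. The linear demand and supply curves below are hypothetical, not taken from the paper.

```python
def tatonnement(demand, supply, p0=1.0, step=0.05, iters=2000):
    """Price-adjustment sketch: move the price in proportion to excess
    demand until the market node clears."""
    p = p0
    for _ in range(iters):
        excess = demand(p) - supply(p)
        p = max(1e-6, p + step * excess)   # keep the price positive
    return p

# Hypothetical linear load (demand) and generator (supply) curves.
demand = lambda p: 10.0 - 2.0 * p
supply = lambda p: 1.0 + 1.0 * p
p_star = tatonnement(demand, supply)       # analytic equilibrium: p = 3
```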

  14. A self-consistent kinetic modeling of a 1-D, bounded, plasma in ...

    Indian Academy of Sciences (India)

    ions, consistent with the idea of scattering off a random collection of stationary scattering points, while it yields a constant for slow ions, consistent with the idea of collisions experienced by a stationary particle in an ideal gas. For this treatment, o has been assumed independent of position. Pramana – J. Phys., Vol. 55, Nos 5 ...

  15. Security Modeling on the Supply Chain Networks

    Directory of Open Access Journals (Sweden)

    Marn-Ling Shing

    2007-10-01

    Full Text Available In order to keep the price down, a purchaser sends out a request for quotation to a group of suppliers in a supply chain network. The purchaser will then choose the supplier with the best combination of price and quality. A potential supplier will try to collect related information about other suppliers so that he/she can offer the best bid to the purchaser. Therefore, confidentiality becomes an important consideration in the design of a supply chain network. Chen et al. have proposed the application of the Bell-LaPadula model in the design of a secured supply chain network. In the Bell-LaPadula model, a subject can be in one of several security clearances and an object can be in one of various security classifications. All the possible combinations of (Security Clearance, Classification) pairs in the Bell-LaPadula model can be thought of as different states in a Markov Chain model. This paper extends the work done by Chen et al., provides more details on the Markov Chain model, and illustrates how to use it to monitor security state transitions in the supply chain network.
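
Monitoring long-run security behavior then reduces to standard Markov-chain computations over the (Security Clearance, Classification) states. The three-state transition matrix below (e.g. secure / suspicious / compromised) is purely illustrative, not from the paper.

```python
def stationary(P, iters=200):
    """Power-iterate a row-stochastic transition matrix to its stationary
    distribution (assumes the chain is ergodic)."""
    n = len(P)
    pi = [1.0 / n] * n
    for _ in range(iters):
        pi = [sum(pi[i] * P[i][j] for i in range(n)) for j in range(n)]
    return pi

# Hypothetical 3-state chain over security states of the supply chain.
P = [
    [0.90, 0.08, 0.02],
    [0.50, 0.40, 0.10],
    [0.20, 0.30, 0.50],
]
pi = stationary(P)   # long-run fraction of time spent in each state
```

The stationary distribution tells the designer how much time the network is expected to spend outside the secure state under the assumed transition rates.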

  16. An evolving model of online bipartite networks

    Science.gov (United States)

    Zhang, Chu-Xu; Zhang, Zi-Ke; Liu, Chuang

    2013-12-01

    Understanding the structure and evolution of online bipartite networks is a significant task, since they play a crucial role in various e-commerce services nowadays. Recently, various attempts have been made to propose different models, resulting in either power-law or exponential degree distributions. However, many empirical results show that the user degree distribution actually follows a shifted power-law distribution, the so-called Mandelbrot's law, which cannot be fully described by previous models. In this paper, we propose an evolving model considering two different user behaviors: random and preferential attachment. Extensive empirical results on two real bipartite networks, Delicious and CiteULike, show that the theoretical model can well characterize the structure of real networks for both user and object degree distributions. In addition, we introduce a structural parameter p to demonstrate that the hybrid user behavior leads to the shifted power-law degree distribution, and that the region of the power-law tail increases with p. The proposed model might shed some light on the underlying laws governing the structure of real online bipartite networks.
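
The hybrid user behavior is straightforward to simulate: each new link picks its user uniformly at random with probability 1-p, or preferentially with probability p. The degree+1 preferential kernel below is an assumption made for the sketch; the paper's exact kernel may differ.

```python
import random

def grow_bipartite(n_users, n_links, p, seed=0):
    """Evolving bipartite sketch: each new user-object link picks a user
    uniformly with probability 1-p, or preferentially (proportional to
    degree+1) with probability p."""
    rng = random.Random(seed)
    deg = [0] * n_users
    targets = list(range(n_users))   # one entry per unit of (degree + 1)
    for _ in range(n_links):
        if rng.random() < p:
            u = rng.choice(targets)  # preferential attachment
        else:
            u = rng.randrange(n_users)
        deg[u] += 1
        targets.append(u)
    return deg

random_deg = grow_bipartite(200, 5000, p=0.0)
pref_deg = grow_bipartite(200, 5000, p=0.95)
# A larger p concentrates links on hubs, stretching the power-law tail.
```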

  17. An autocatalytic network model for stock markets

    Science.gov (United States)

    Caetano, Marco Antonio Leonel; Yoneyama, Takashi

    2015-02-01

    The stock prices of companies with businesses that are closely related within a specific sector of economy might exhibit movement patterns and correlations in their dynamics. The idea in this work is to use the concept of autocatalytic network to model such correlations and patterns in the trends exhibited by the expected returns. The trends are expressed in terms of positive or negative returns within each fixed time interval. The time series derived from these trends is then used to represent the movement patterns by a probabilistic boolean network with transitions modeled as an autocatalytic network. The proposed method might be of value in short term forecasting and identification of dependencies. The method is illustrated with a case study based on four stocks of companies in the field of natural resource and technology.
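
The first ingredient, reducing each stock's price series to a probabilistic description of up/down trends, can be sketched by estimating transition frequencies from the boolean trend series; the autocatalytic coupling between different stocks is omitted from this sketch.

```python
def trend_transition_matrix(prices):
    """Estimate a two-state (0 = down, 1 = up) transition matrix from a
    price series: the building block of a probabilistic boolean network
    of stock trends."""
    trends = [1 if b > a else 0 for a, b in zip(prices, prices[1:])]
    counts = [[0, 0], [0, 0]]
    for s, t in zip(trends, trends[1:]):
        counts[s][t] += 1
    P = []
    for row in counts:
        total = sum(row)
        P.append([c / total if total else 0.5 for c in row])
    return P

# Hypothetical price series; its trend sequence is 1,1,0,1,1,0.
P_up_down = trend_transition_matrix([1, 2, 3, 2, 3, 4, 3])
```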

  18. Self-consistent Maxwell-Bloch model of quantum-dot photonic-crystal-cavity lasers

    Science.gov (United States)

    Cartar, William; Mørk, Jesper; Hughes, Stephen

    2017-08-01

    We present a powerful computational approach to simulate the threshold behavior of photonic-crystal quantum-dot (QD) lasers. Using a finite-difference time-domain (FDTD) technique, Maxwell-Bloch equations representing a system of thousands of statistically independent and randomly positioned two-level emitters are solved numerically. Phenomenological pure dephasing and incoherent pumping are added to the optical Bloch equations to allow for a dynamical lasing regime, but the cavity-mediated radiative dynamics and gain coupling of each QD dipole (artificial atom) is contained self-consistently within the model. These Maxwell-Bloch equations are implemented by using Lumerical's flexible material plug-in tool, which allows a user to define additional equations of motion for the nonlinear polarization. We implement the gain ensemble within triangular-lattice photonic-crystal cavities of various length N (where N refers to the number of missing holes), and investigate the cavity mode characteristics and the threshold regime as a function of cavity length. We develop effective two-dimensional model simulations which are derived after studying the full three-dimensional passive material structures by matching the cavity quality factors and resonance properties. We also demonstrate how to obtain the correct point-dipole radiative decay rate from Fermi's golden rule, which is captured naturally by the FDTD method. Our numerical simulations predict that the pump threshold plateaus around cavity lengths greater than N = 9, which we identify as a consequence of the complex spatial dynamics and gain coupling from the inhomogeneous QD ensemble. This behavior is not expected from the simple rate-equation analysis commonly adopted in the literature, but is in qualitative agreement with recent experiments. Single-mode to multimode lasing is also observed, depending on the spectral peak frequency of the QD ensemble. Using a statistical modal analysis of the average decay rates, we also

  19. Linking lipid architecture to bilayer structure and mechanics using self-consistent field modelling

    Energy Technology Data Exchange (ETDEWEB)

    Pera, H.; Kleijn, J. M.; Leermakers, F. A. M., E-mail: Frans.leermakers@wur.nl [Laboratory of Physical Chemistry and Colloid Science, Wageningen University, Dreijenplein 6, 6307 HB Wageningen (Netherlands)

    2014-02-14

    To understand how lipid architecture determines the lipid bilayer structure and its mechanics, we implement a molecularly detailed model that uses self-consistent field theory. This numerical model accurately predicts parameters such as Helfrich's mean and Gaussian bending moduli k{sub c} and k{sup ¯} and the preferred monolayer curvature J{sub 0}{sup m}, and also delivers structural membrane properties like the core thickness, and head group position and orientation. We studied how these mechanical parameters vary with system variations, such as lipid tail length, membrane composition, and those parameters that control the lipid tail and head group solvent quality. For the membrane composition, negatively charged phosphatidylglycerol (PG) or zwitterionic phosphatidylcholine (PC) and -ethanolamine (PE) lipids were used. In line with experimental findings, we find that the values of k{sub c} and the area compression modulus k{sub A} are always positive. They respond similarly to parameters that affect the core thickness, but differently to parameters that affect the head group properties. We found that the trends for k{sup ¯} and J{sub 0}{sup m} can be rationalised by the concept of Israelachvili's surfactant packing parameter, and that both k{sup ¯} and J{sub 0}{sup m} change sign with relevant parameter changes. Although typically k{sup ¯}<0, membranes can form stable cubic phases when the Gaussian bending modulus becomes positive, which occurs for membranes composed of PC lipids with long tails. Similarly, negative monolayer curvatures appear when a small head group such as PE is combined with long lipid tails, which hints towards the stability of inverse hexagonal phases at the cost of the bilayer topology. To prevent the destabilisation of bilayers, PG lipids can be mixed into these PC or PE lipid membranes. Progressive loading of bilayers with PG lipids leads to highly charged membranes, resulting in J{sub 0}{sup m}≫0, especially at low ionic

  20. Hydrometeorological network for flood monitoring and modeling

    Science.gov (United States)

    Efstratiadis, Andreas; Koussis, Antonis D.; Lykoudis, Spyros; Koukouvinos, Antonis; Christofides, Antonis; Karavokiros, George; Kappos, Nikos; Mamassis, Nikos; Koutsoyiannis, Demetris

    2013-08-01

    Due to its highly fragmented geomorphology, Greece comprises hundreds of small- to medium-size hydrological basins, in which often the terrain is fairly steep and the streamflow regime ephemeral. These are typically affected by flash floods, occasionally causing severe damages. Yet, the vast majority of them lack flow-gauging infrastructure providing systematic hydrometric data at fine time scales. This has obvious impacts on the quality and reliability of flood studies, which typically use simplistic approaches for ungauged basins that do not consider local peculiarities in sufficient detail. In order to provide a consistent framework for flood design and to ensure realistic predictions of the flood risk -a key issue of the 2007/60/EC Directive- it is essential to improve the monitoring infrastructures by taking advantage of modern technologies for remote control and data management. In this context and in the research project DEUCALION, we have recently installed and are operating, in four pilot river basins, a telemetry-based hydro-meteorological network that comprises automatic stations and is linked to and supported by relevant software. The hydrometric stations measure stage, using 50-kHz ultrasonic pulses or piezometric sensors, or both stage (piezometric) and velocity via acoustic Doppler radar; all measurements are being temperature-corrected. The meteorological stations record air temperature, pressure, relative humidity, wind speed and direction, and precipitation. Data transfer is made via GPRS or mobile telephony modems. The monitoring network is supported by a web-based application for storage, visualization and management of geographical and hydro-meteorological data (ENHYDRIS), a software tool for data analysis and processing (HYDROGNOMON), as well as an advanced model for flood simulation (HYDROGEIOS). The recorded hydro-meteorological observations are accessible over the Internet through the www-application. The system is operational and its

  1. Self-consistent tight-binding model of B and N doping in graphene

    DEFF Research Database (Denmark)

    Pedersen, Thomas Garm; Pedersen, Jesper Goor

    2013-01-01

    Boron and nitrogen substitutional impurities in graphene are analyzed using a self-consistent tight-binding approach. An analytical result for the impurity Green's function is derived taking broken electron-hole symmetry into account and validated by comparison to numerical diagonalization. The impurity potential depends sensitively on the impurity occupancy, leading to a self-consistency requirement. We solve this problem using the impurity Green's function and determine the self-consistent local density of states at the impurity site and, thereby, identify acceptor and donor energy resonances.

  2. Keystone Business Models for Network Security Processors

    Directory of Open Access Journals (Sweden)

    Arthur Low

    2013-07-01

    Full Text Available Network security processors are critical components of high-performance systems built for cybersecurity. Development of a network security processor requires multi-domain experience in semiconductors and complex software security applications, and multiple iterations of both software and hardware implementations. Limited by the business models in use today, such an arduous task can be undertaken only by large incumbent companies and government organizations. Neither the “fabless semiconductor” models nor the silicon intellectual-property licensing (“IP-licensing”) models allow small technology companies to successfully compete. This article describes an alternative approach that produces an ongoing stream of novel network security processors for niche markets through continuous innovation by both large and small companies. This approach, referred to here as the "business ecosystem model for network security processors", includes a flexible and reconfigurable technology platform, a “keystone” business model for the company that maintains the platform architecture, and an extended ecosystem of companies that both contribute and share in the value created by innovation. New opportunities for business model innovation by participating companies are made possible by the ecosystem model. This ecosystem model builds on: (i) the lessons learned from the experience of the first author as a senior integrated circuit architect for providers of public-key cryptography solutions and as the owner of a semiconductor startup, and (ii) the latest scholarly research on technology entrepreneurship, business models, platforms, and business ecosystems. This article will be of interest to all technology entrepreneurs, but it will be of particular interest to owners of small companies that provide security solutions and to specialized security professionals seeking to launch their own companies.

  3. Modeling and Simulation Network Data Standards

    Science.gov (United States)

    2011-09-30

    ... approaches. 2.3. JNAT. JNAT is a Web application that provides connectivity and network analysis capability. JNAT uses propagation models and low-fidelity ... [The remainder of this record is table residue from the COMBATXXI Movement Logger Data Output Dictionary: Field #, Geocentric Coordinates (GCC) Heading, Geodetic Coordinates (GDC) Heading, Universal Transverse Mercator (UTM) Heading.]

  4. The Kuramoto model in complex networks

    Science.gov (United States)

    Rodrigues, Francisco A.; Peron, Thomas K. DM.; Ji, Peng; Kurths, Jürgen

    2016-01-01

    Synchronization of an ensemble of oscillators is an emergent phenomenon present in several complex systems, ranging from social and physical to biological and technological systems. The most successful approach to describing how coherent behavior emerges in these complex systems is given by the paradigmatic Kuramoto model. This model has traditionally been studied in complete graphs. However, besides being intrinsically dynamical, complex systems present very heterogeneous structure, which can be represented as complex networks. This report is dedicated to reviewing the main contributions in the field of synchronization in networks of Kuramoto oscillators. In particular, we provide an overview of the impact of network patterns on the local and global dynamics of coupled phase oscillators. We cover many relevant topics, which encompass a description of the most used analytical approaches and the analysis of several numerical results. Furthermore, we discuss recent developments on variations of the Kuramoto model in networks, including the presence of noise and inertia. The rich potential for applications is discussed for special fields in engineering, neuroscience, physics and Earth science. Finally, we conclude by discussing problems that remain open after the last decade of intensive research on the Kuramoto model and point out some promising directions for future research.
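
A minimal numerical experiment with the network Kuramoto model, dtheta_i/dt = omega_i + K * sum over neighbors j of sin(theta_j - theta_i), using plain Euler integration and the usual order parameter r (r near 1 means synchrony). The graph, coupling, and step size are illustrative choices.

```python
import math, random

def kuramoto_order(adj, omega, K, dt=0.05, steps=2000, seed=0):
    """Euler-integrate the Kuramoto model on a graph given as adjacency
    lists, and return the order parameter r = |mean(exp(i*theta))|."""
    rng = random.Random(seed)
    n = len(adj)
    theta = [rng.uniform(0, 2 * math.pi) for _ in range(n)]
    for _ in range(steps):
        dtheta = [
            omega[i] + K * sum(math.sin(theta[j] - theta[i]) for j in adj[i])
            for i in range(n)
        ]
        theta = [t + dt * d for t, d in zip(theta, dtheta)]
    re = sum(math.cos(t) for t in theta) / n
    im = sum(math.sin(t) for t in theta) / n
    return math.hypot(re, im)

# Complete graph on 10 identical oscillators: strong coupling synchronizes
# them from random initial phases, zero coupling leaves them incoherent.
adj = [[j for j in range(10) if j != i] for i in range(10)]
omega = [1.0] * 10
r_strong = kuramoto_order(adj, omega, K=0.5)
r_zero = kuramoto_order(adj, omega, K=0.0)
```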

  5. An architectural model for network interconnection

    NARCIS (Netherlands)

    van Sinderen, Marten J.; Vissers, C.A.; Kalin, T.

    1983-01-01

    This paper presents a technique of successive decomposition of a common users' activity to illustrate the problems of network interconnection. The criteria derived from this approach offer a structuring principle which is used to develop an architectural model that embeds heterogeneous subnetworks

  6. Computational Modeling of Complex Protein Activity Networks

    NARCIS (Netherlands)

    Schivo, Stefano; Leijten, Jeroen; Karperien, Marcel; Post, Janine N.; Prignet, Claude

    2017-01-01

    Because of the numerous entities interacting, the complexity of the networks that regulate cell fate makes it impossible to analyze and understand them using the human brain alone. Computational modeling is a powerful method to unravel complex systems. We recently described the development of a

  7. A Model of Mental State Transition Network

    Science.gov (United States)

    Xiang, Hua; Jiang, Peilin; Xiao, Shuang; Ren, Fuji; Kuroiwa, Shingo

    Emotion is one of the most essential and basic attributes of human intelligence. Current AI (Artificial Intelligence) research concentrates on the physical components of emotion; rarely is it carried out from the view of psychology directly(1). Study of models of artificial psychology is the first step in the development of human-computer interaction. As affective computing remains unpredictable, creating a reasonable mental model becomes the primary task for building a hybrid system. A pragmatic mental model is also the foundation of key topics such as the recognition and synthesis of emotions. In this paper a Mental State Transition Network Model(2) is proposed to detect human emotions. Through a series of psychological experiments, we present a new way to predict a person's coming emotions from their current emotional states under various stimuli. In addition, differences in gender and character are taken into consideration in our investigation. From the psychological experiment data derived from 200 questionnaires, a Mental State Transition Network Model is derived that describes the distribution of transitions among the emotions and the relationships between internal mental states and external stimuli. Furthermore, the coefficients of the mental state transition network model were obtained. Across seven comparative evaluation experiments, the proposed model achieves an average precision of 0.843 on a set of samples.

  8. UAV Trajectory Modeling Using Neural Networks

    Science.gov (United States)

    Xue, Min

    2017-01-01

    Massive numbers of small unmanned aerial vehicles are envisioned to operate in the near future. While many research problems need to be addressed before dense operations can happen, trajectory modeling remains one of the keys to understanding and developing policies, regulations, and requirements for safe and efficient unmanned aerial vehicle operations. The fidelity requirement of a small unmanned vehicle trajectory model is high because these vehicles are sensitive to winds due to their small size and low operational altitude. Both vehicle control systems and dynamic models are needed for trajectory modeling, which makes the modeling a great challenge, especially considering the fact that manufacturers are not willing to share their control systems. This work proposes a neural network approach for modeling a small unmanned vehicle's trajectory without knowing its control system, bypassing exhaustive efforts for aerodynamic parameter identification. As a proof of concept, instead of collecting data from flight tests, this work used trajectory data generated by a mathematical vehicle model for training and testing the neural network. The results showed great promise: the trained neural network can predict 4D trajectories accurately, with prediction errors of less than 2.0 meters in both temporal and spatial dimensions.

  9. Modeling Insurgent Network Structure and Dynamics

    Science.gov (United States)

    Gabbay, Michael; Thirkill-Mackelprang, Ashley

    2010-03-01

    We present a methodology for mapping insurgent network structure based on their public rhetoric. Indicators of cooperative links between insurgent groups at both the leadership and rank-and-file levels are used, such as joint policy statements or joint operations claims. In addition, a targeting policy measure is constructed on the basis of insurgent targeting claims. Network diagrams which integrate these measures of insurgent cooperation and ideology are generated for different periods of the Iraqi and Afghan insurgencies. The network diagrams exhibit meaningful changes which track the evolution of the strategic environment faced by insurgent groups. Correlations between targeting policy and network structure indicate that insurgent targeting claims are aimed at establishing a group identity among the spectrum of rank-and-file insurgency supporters. A dynamical systems model of insurgent alliance formation and factionalism is presented which evolves the relationship between insurgent group dyads as a function of their ideological differences and their current relationships. The ability of the model to qualitatively and quantitatively capture insurgent network dynamics observed in the data is discussed.

  10. Consistent and Conservative Model Selection with the Adaptive LASSO in Stationary and Nonstationary Autoregressions

    DEFF Research Database (Denmark)

    Kock, Anders Bredahl

    2016-01-01

    We show that the adaptive Lasso is oracle efficient in stationary and nonstationary autoregressions. This means that it estimates parameters consistently, selects the correct sparsity pattern, and estimates the coefficients belonging to the relevant variables at the same asymptotic efficiency...

  11. HIV lipodystrophy case definition using artificial neural network modelling

    DEFF Research Database (Denmark)

    Ioannidis, John P A; Trikalinos, Thomas A; Law, Matthew

    2003-01-01

    OBJECTIVE: A case definition of HIV lipodystrophy has recently been developed from a combination of clinical, metabolic and imaging/body composition variables using logistic regression methods. We aimed to evaluate whether artificial neural networks could improve the diagnostic accuracy. METHODS......: The database of the case-control Lipodystrophy Case Definition Study was split into 504 subjects (265 with and 239 without lipodystrophy) used for training and 284 independent subjects (152 with and 132 without lipodystrophy) used for validation. Back-propagation neural networks with one or two middle layers...... were trained and validated. Results were compared against logistic regression models using the same information. RESULTS: Neural networks using clinical variables only (41 items) achieved consistently superior performance to logistic regression in terms of specificity, overall accuracy and area under...

  12. Hybrid simulation models of production networks

    CERN Document Server

    Kouikoglou, Vassilis S

    2001-01-01

    This book is concerned with a most important area of industrial production, that of analysis and optimization of production lines and networks using discrete-event models and simulation. The book introduces a novel approach that combines analytic models and discrete-event simulation. Unlike conventional piece-by-piece simulation, this method observes a reduced number of events between which the evolution of the system is tracked analytically. Using this hybrid approach, several models are developed for the analysis of production lines and networks. The hybrid approach combines speed and accuracy for exceptional analysis of most practical situations. A number of optimization problems, involving buffer design, workforce planning, and production control, are solved through the use of hybrid models.

  13. Propagating semantic information in biochemical network models

    Directory of Open Access Journals (Sweden)

    Schulz Marvin

    2012-01-01

    Full Text Available Abstract Background To enable automatic searches, alignments, and model combination, the elements of systems biology models need to be compared and matched across models. Elements can be identified by machine-readable biological annotations, but assigning such annotations and matching non-annotated elements is tedious work and calls for automation. Results A new method called "semantic propagation" allows the comparison of model elements based not only on their own annotations, but also on annotations of surrounding elements in the network. One may either propagate feature vectors, describing the annotations of individual elements, or quantitative similarities between elements from different models. Based on semantic propagation, we align partially annotated models and find annotations for non-annotated model elements. Conclusions Semantic propagation and model alignment are included in the open-source library semanticSBML, available on sourceforge. Online services for model alignment and for annotation prediction can be used at http://www.semanticsbml.org.

  14. Tools and Models for Integrating Multiple Cellular Networks

    Energy Technology Data Exchange (ETDEWEB)

    Gerstein, Mark [Yale Univ., New Haven, CT (United States). Gerstein Lab.

    2015-11-06

    In this grant, we have systematically investigated the integrated networks which are responsible for the coordination of activity between metabolic pathways in prokaryotes. We have developed several computational tools to analyze the topology of the integrated networks consisting of metabolic, regulatory, and physical interaction networks. The tools are all open-source; they are available for download from GitHub and can be incorporated in the Knowledgebase. Here, we summarize our work as follows. Understanding the topology of the integrated networks is the first step toward understanding their dynamics and evolution. For Aim 1 of this grant, we have developed a novel algorithm to determine and measure the hierarchical structure of transcriptional regulatory networks [1]. The hierarchy captures the direction of information flow in the network. The algorithm is generally applicable to regulatory networks in prokaryotes, yeast and higher organisms. Integrated datasets are extremely beneficial in understanding the biology of a system in a compact manner due to the conflation of multiple layers of information. Therefore, for Aim 2 of this grant, we have developed several tools and carried out analysis for integrating system-wide genomic information. To make use of the structural data, we have developed DynaSIN for protein-protein interaction networks with various dynamical interfaces [2]. We then examined the association between network topology and phenotypic effects such as gene essentiality. In particular, we have organized the E. coli and S. cerevisiae transcriptional regulatory networks into hierarchies. We then correlated gene phenotypic effects by tinkering with different layers to elucidate which layers were more tolerant to perturbations [3].
    In the context of evolution, we also developed a workflow to guide the comparison between different types of biological networks across various species using the concept of rewiring [4]. Furthermore, we have developed

  15. Model Predictive Control of Sewer Networks

    DEFF Research Database (Denmark)

    Pedersen, Einar B.; Herbertsson, Hannes R.; Niemann, Henrik

    2016-01-01

    The developments in solutions for management of urban drainage are of vital importance, as the amount of sewer water from urban areas continues to increase due to the increase of the world’s population and the change in the climate conditions. How a sewer network is structured, monitored and cont...... benchmark model. Due to the inherent constraints the applied approach is based on Model Predictive Control....

  16. Modelling dendritic ecological networks in space: an integrated network perspective

    Science.gov (United States)

    Peterson, Erin E.; Ver Hoef, Jay M.; Isaak, Dan J.; Falke, Jeffrey A.; Fortin, Marie-Josée; Jordon, Chris E.; McNyset, Kristina; Monestiez, Pascal; Ruesch, Aaron S.; Sengupta, Aritra; Som, Nicholas; Steel, E. Ashley; Theobald, David M.; Torgersen, Christian E.; Wenger, Seth J.

    2013-01-01

    Dendritic ecological networks (DENs) are a unique form of ecological networks that exhibit a dendritic network topology (e.g. stream and cave networks or plant architecture). DENs have a dual spatial representation; as points within the network and as points in geographical space. Consequently, some analytical methods used to quantify relationships in other types of ecological networks, or in 2-D space, may be inadequate for studying the influence of structure and connectivity on ecological processes within DENs. We propose a conceptual taxonomy of network analysis methods that account for DEN characteristics to varying degrees and provide a synthesis of the different approaches within

  17. Self-Consistent Approach to Global Charge Neutrality in Electrokinetics: A Surface Potential Trap Model

    Directory of Open Access Journals (Sweden)

    Li Wan

    2014-03-01

    Full Text Available In this work, we treat the Poisson-Nernst-Planck (PNP) equations as the basis for a consistent framework of the electrokinetic effects. The static limit of the PNP equations is shown to be the charge-conserving Poisson-Boltzmann (CCPB) equation, with guaranteed charge neutrality within the computational domain. We propose a surface potential trap model that attributes an energy cost to the interfacial charge dissociation. In conjunction with the CCPB, the surface potential trap can cause a surface-specific adsorbed charge layer σ. By defining a chemical potential μ that arises from the charge neutrality constraint, the reformulated CCPB can be reduced to the form of the Poisson-Boltzmann equation, whose prediction of the Debye screening layer profile is in excellent agreement with that of the Poisson-Boltzmann equation when the channel width is much larger than the Debye length. However, important differences emerge when the channel width is small, so that the Debye screening layers from the opposite sides of the channel overlap with each other. In particular, the theory automatically yields a variation of σ that is generally known as the “charge regulation” behavior, attendant with predictions of force variation as a function of nanoscale separation between two charged surfaces that are in good agreement with the experiments, with no adjustable or additional parameters. We give a generalized definition of the ζ potential that reflects the strength of the electrokinetic effect; its variations with the concentration of surface-specific and surface-nonspecific salt ions are shown to be in good agreement with the experiments. To delineate the behavior of the electro-osmotic (EO) effect, the coupled PNP and Navier-Stokes equations are solved numerically under an applied electric field tangential to the fluid-solid interface. The EO effect is shown to exhibit an intrinsic time dependence that is noninertial in its origin. Under a step-function applied
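
    The Debye screening profile discussed here can be sketched numerically, under the standard linearized (low-potential) Poisson-Boltzmann approximation rather than the paper's full CCPB formulation. All physical parameters below (1 mM monovalent salt at room temperature, a 25 mV wall potential) are assumed for illustration.

```python
import numpy as np

# Sketch: solve the linearized 1-D Poisson-Boltzmann equation near a wall,
#   d^2 psi / dx^2 = kappa^2 psi,   kappa = 1 / lambda_D,
# with psi(0) = psi0 and psi -> 0 far from the wall.
eps = 80 * 8.854e-12        # permittivity of water [F/m]
kT = 1.381e-23 * 298        # thermal energy [J]
e = 1.602e-19               # elementary charge [C]
c0 = 1.0e-3 * 6.022e26      # 1 mM monovalent salt, ions/m^3 per species

kappa = np.sqrt(2 * e**2 * c0 / (eps * kT))   # inverse Debye length [1/m]
lambda_D = 1.0 / kappa                        # ~9.7 nm for 1 mM salt

# Finite-difference solve on [0, L] with psi(0) = psi0, psi(L) = 0, L >> lambda_D.
psi0, L, n = 0.025, 10 * lambda_D, 801
x = np.linspace(0, L, n)
h = x[1] - x[0]
main = -(2 + (kappa * h) ** 2) * np.ones(n - 2)
off = np.ones(n - 3)
M = np.diag(main) + np.diag(off, 1) + np.diag(off, -1)
rhs = np.zeros(n - 2)
rhs[0] = -psi0
psi = np.concatenate(([psi0], np.linalg.solve(M, rhs), [0.0]))

# Compare with the analytic exponential screening profile psi0 * exp(-kappa x).
analytic = psi0 * np.exp(-kappa * x)
err = np.max(np.abs(psi - analytic))
```

    When the domain is many Debye lengths wide, as here, the numerical profile collapses onto the exponential; shrinking L to a few Debye lengths reproduces the overlapping-layer regime the paper analyzes.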

  18. Unified Model for Generation Complex Networks with Utility Preferential Attachment

    International Nuclear Information System (INIS)

    Wu Jianjun; Gao Ziyou; Sun Huijun

    2006-01-01

    In this paper, based on utility preferential attachment, we propose a new unified model to generate different network topologies such as scale-free, small-world and random networks. Moreover, a new network structure named the super scale network is found, which exhibits a monopoly characteristic in our simulation experiments. Finally, the characteristics of this new network are given.
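
    The attachment mechanism underlying such models can be sketched in its classic degree-proportional special case (Barabási-Albert style); the paper's utility-based rule generalizes the attachment probability, which this sketch does not reproduce.

```python
import random

# Plain degree-preferential attachment: each new node attaches to m existing
# nodes chosen with probability proportional to their current degree.
def preferential_attachment(n, m, seed=0):
    rng = random.Random(seed)
    edges = set()
    targets = list(range(m))   # first new node connects to the m seed nodes
    repeated = []              # each node appears here once per unit of degree
    for new in range(m, n):
        for t in set(targets):
            edges.add((min(new, t), max(new, t)))
        repeated.extend(targets)
        repeated.extend([new] * m)
        # degree-proportional sampling: uniform choice from the repeated list
        targets = [rng.choice(repeated) for _ in range(m)]
    return edges

edges = preferential_attachment(1000, 3)
degree = {}
for u, v in edges:
    degree[u] = degree.get(u, 0) + 1
    degree[v] = degree.get(v, 0) + 1
max_deg = max(degree.values())   # hubs emerge: max degree grows like m*sqrt(n)
```

    The heavy-tailed degree distribution this produces is the hallmark of the scale-free regime; replacing the uniform choice from `repeated` with a utility-weighted choice is where a model like the record's would differ.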

  19. Artificial Neural Network L* from different magnetospheric field models

    Science.gov (United States)

    Yu, Y.; Koller, J.; Zaharia, S. G.; Jordanova, V. K.

    2011-12-01

    The third adiabatic invariant L* plays an important role in modeling and understanding the radiation belt dynamics. The popular way to numerically obtain the L* value follows the recipe described by Roederer [1970], which is, however, slow and computationally expensive. This work focuses on a new technique, which can compute the L* value in microseconds without losing much accuracy: artificial neural networks. Since L* is related to the magnetic flux enclosed by a particle drift shell, global magnetic field information needed to trace the drift shell is required. A series of currently popular empirical magnetic field models are applied to create the L* data pool using 1 million data samples which are randomly selected within a solar cycle and within the global magnetosphere. The networks, trained from the above L* data pool, can thereby be used for fairly efficient L* calculation given input parameters valid within the trained temporal and spatial range. Besides the empirical magnetospheric models, a physics-based self-consistent inner magnetosphere model (RAM-SCB) developed at LANL is also utilized to calculate L* values and then to train the L* neural network. This model better predicts the magnetospheric configuration and can therefore significantly improve the L* values. The above neural network L* technique will enable, for the first time, comprehensive solar-cycle-long studies of radiation belt processes. However, neural networks trained from different magnetic field models can result in different L* values, which could cause misinterpretation of radiation belt dynamics, such as where the source of the radiation belt charged particles is and which mechanism is dominant in accelerating the particles. Such a fact calls for attention to cautiously choose a magnetospheric field model for the L* calculation.

  20. Assessing the Accuracy and Consistency of Language Proficiency Classification under Competing Measurement Models

    Science.gov (United States)

    Zhang, Bo

    2010-01-01

    This article investigates how measurement models and statistical procedures can be applied to estimate the accuracy of proficiency classification in language testing. The paper starts with a concise introduction of four measurement models: the classical test theory (CTT) model, the dichotomous item response theory (IRT) model, the testlet response…

  1. Improving Earth/Prediction Models to Improve Network Processing

    Science.gov (United States)

    Wagner, G. S.

    2017-12-01

    The United States Atomic Energy Detection System (USAEDS) primary seismic network consists of a relatively small number of arrays and three-component stations. The relatively small number of stations in the USAEDS primary network makes it both necessary and feasible to optimize both station and network processing. Station processing improvements include detector tuning efforts that use Receiver Operator Characteristic (ROC) curves to help judiciously set acceptable Type 1 (false) vs. Type 2 (miss) error rates. Other station processing improvements include the use of empirical/historical observations and continuous background noise measurements to compute time-varying, maximum likelihood probability of detection thresholds. The USAEDS network processing software makes extensive use of the azimuth and slowness information provided by frequency-wavenumber analysis at array sites, and polarization analysis at three-component sites. Most of the improvements in USAEDS network processing are due to improvements in the models used to predict azimuth, slowness, and probability of detection. Kriged travel-time, azimuth and slowness corrections (and associated uncertainties) are computed using a ground truth database. Improvements in station processing and the use of improved models for azimuth, slowness, and probability of detection have led to significant improvements in USAEDS network processing.
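
    The ROC-based trade-off between Type 1 and Type 2 error rates described above can be sketched generically: sweep a detection threshold over synthetic detector scores and pick the threshold minimizing a weighted sum of the two error rates. The score distributions and cost weights below are assumptions, not USAEDS data.

```python
import numpy as np

# Generic ROC-style detector-tuning sketch with synthetic Gaussian scores.
rng = np.random.default_rng(3)
noise = rng.normal(0.0, 1.0, 5000)     # detector scores with no signal present
signal = rng.normal(2.0, 1.0, 5000)    # detector scores with a signal present

thresholds = np.linspace(-2, 5, 400)
false_alarm = np.array([(noise >= t).mean() for t in thresholds])   # Type 1 rate
miss = np.array([(signal < t).mean() for t in thresholds])          # Type 2 rate

# Equal costs assumed here; a network operator would weight these by the
# relative operational cost of false alarms vs. missed detections.
w_false, w_miss = 1.0, 1.0
best_t = thresholds[np.argmin(w_false * false_alarm + w_miss * miss)]
```

    With equal costs and equal-variance score distributions, the optimal threshold sits near the midpoint of the two means; changing the weights slides it along the ROC curve.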

  2. Functional model of biological neural networks.

    Science.gov (United States)

    Lo, James Ting-Ho

    2010-12-01

    A functional model of biological neural networks, called temporal hierarchical probabilistic associative memory (THPAM), is proposed in this paper. THPAM comprises functional models of dendritic trees for encoding inputs to neurons, a first type of neuron for generating spike trains, a second type of neuron for generating graded signals to modulate neurons of the first type, supervised and unsupervised Hebbian learning mechanisms for easy learning and retrieving, an arrangement of dendritic trees for maximizing generalization, hardwiring for rotation-translation-scaling invariance, and feedback connections with different delay durations for neurons to make full use of present and past information generated by neurons in the same and higher layers. These functional models and their processing operations have many functions of biological neural networks that have not been achieved by other models in the open literature and provide logically coherent answers to many long-standing neuroscientific questions. However, biological justifications of these functional models and their processing operations are required for THPAM to qualify as a macroscopic model (or low-order approximation) of biological neural networks.

  3. Functional networks inference from rule-based machine learning models.

    Science.gov (United States)

    Lazzarini, Nicola; Widera, Paweł; Williamson, Stuart; Heer, Rakesh; Krasnogor, Natalio; Bacardit, Jaume

    2016-01-01

    Functional networks play an important role in the analysis of biological processes and systems. The inference of these networks from high-throughput (-omics) data is an area of intense research. So far, the similarity-based inference paradigm (e.g. gene co-expression) has been the most popular approach. It assumes a functional relationship between genes which are expressed at similar levels across different samples. An alternative to this paradigm is the inference of relationships from the structure of machine learning models. These models are able to capture complex relationships between variables that are often different from, and complementary to, those found by similarity-based methods. We propose a protocol to infer functional networks from machine learning models, called FuNeL. It assumes that genes used together within a rule-based machine learning model to classify the samples might also be functionally related at a biological level. The protocol is first tested on synthetic datasets and then evaluated on a test suite of 8 real-world datasets related to human cancer. The networks inferred from the real-world data are compared against gene co-expression networks of equal size, generated with 3 different methods. The comparison is performed from two different points of view. We analyse the enriched biological terms in the set of network nodes and the relationships between known disease-associated genes in the context of the network topology. The comparison confirms both the biological relevance and the complementary character of the knowledge captured by the FuNeL networks in relation to similarity-based methods and demonstrates its potential to identify known disease associations as core elements of the network. Finally, using a prostate cancer dataset as a case study, we confirm that the biological knowledge captured by our method is relevant to the disease and consistent with the specialised literature and with an independent dataset not used in the inference process. The

  4. On traffic modelling in GPRS networks

    DEFF Research Database (Denmark)

    Madsen, Tatiana Kozlova; Schwefel, Hans-Peter; Prasad, Ramjee

    2005-01-01

    Optimal design and dimensioning of wireless data networks, such as GPRS, requires knowledge of the traffic characteristics of different data services. This paper presents an in-detail analysis of IP-level traffic measurements taken in an operational GPRS network. The data measurements reported...... here are done at the Gi interface. The aim of this paper is to reveal some key statistics of GPRS data applications and to validate whether the existing traffic models can adequately describe traffic volume and inter-arrival time distribution for different services. Additionally, we present a method of user...

  5. An Improved Network Security Situation Awareness Model

    Directory of Open Access Journals (Sweden)

    Li Fangwei

    2015-08-01

    Full Text Available In order to reflect network security assessment performance fully and accurately, a new network security situation awareness model based on information fusion is proposed. The network security situation is the result of fusing evaluations from three aspects. In terms of attacks, to improve the accuracy of evaluation, a situation assessment method for DDoS attacks based on data-packet information is proposed. In terms of vulnerability, an improved Common Vulnerability Scoring System (CVSS) is introduced, making the assessment more comprehensive. In terms of node weights, a method is proposed that calculates combined weights and optimizes the result with a Sequential Quadratic Programming (SQP) algorithm, reducing the uncertainty of the fusion. To verify the validity and necessity of the method, a testing platform was built and evaluated on the DARPA 2000 data sets. Experiments show that the method improves the accuracy of the evaluation results.

  6. Hydrologic consistency as a basis for assessing complexity of monthly water balance models for the continental United States

    Science.gov (United States)

    Martinez, Guillermo F.; Gupta, Hoshin V.

    2011-12-01

    Methods to select parsimonious and hydrologically consistent model structures are useful for evaluating dominance of hydrologic processes and representativeness of data. While information criteria (appropriately constrained to obey underlying statistical assumptions) can provide a basis for evaluating appropriate model complexity, it is not sufficient to rely upon the principle of maximum likelihood (ML) alone. We suggest that one must also call upon a "principle of hydrologic consistency," meaning that selected ML structures and parameter estimates must be constrained (as well as possible) to reproduce desired hydrological characteristics of the processes under investigation. This argument is demonstrated in the context of evaluating the suitability of candidate model structures for lumped water balance modeling across the continental United States, using data from 307 snow-free catchments. The models are constrained to satisfy several tests of hydrologic consistency, a flow space transformation is used to ensure better consistency with underlying statistical assumptions, and information criteria are used to evaluate model complexity relative to the data. The results clearly demonstrate that the principle of consistency provides a sensible basis for guiding selection of model structures and indicate strong spatial persistence of certain model structures across the continental United States. Further work to untangle reasons for model structure predominance can help to relate conceptual model structures to physical characteristics of the catchments, facilitating the task of prediction in ungaged basins.
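
    The role that information criteria play in this record's argument can be sketched with a generic example (polynomial regression, not the paper's water-balance models): AIC trades goodness-of-fit against the number of parameters, penalizing complexity that the data do not support. The data-generating process and noise level below are assumptions for illustration.

```python
import numpy as np

# Generic AIC model-comparison sketch: fit polynomials of increasing order
# to synthetic data whose true structure is quadratic.
rng = np.random.default_rng(1)
x = np.linspace(0, 1, 200)
y = 1.0 + 2.0 * x - 1.5 * x**2 + rng.normal(0, 0.05, x.size)

def aic(order):
    coeffs = np.polyfit(x, y, order)
    rss = np.sum((np.polyval(coeffs, x) - y) ** 2)
    k = order + 1                        # number of fitted parameters
    n = x.size
    # Gaussian-likelihood AIC, up to an additive constant
    return n * np.log(rss / n) + 2 * k

scores = {order: aic(order) for order in (1, 2, 3, 5, 8)}
best = min(scores, key=scores.get)       # lowest AIC wins
```

    The underfit linear model is heavily penalized through its residuals, while the overfit high-order models gain too little fit to offset the 2k complexity penalty. The paper's point is that such statistical criteria should be paired with hydrologic-consistency constraints, which this sketch does not capture.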

  7. Fractional virus epidemic model on financial networks

    Directory of Open Access Journals (Sweden)

    Balci Mehmet Ali

    2016-01-01

    Full Text Available In this study, we present an epidemic model that characterizes the behavior of a financial network of globally operating stock markets. Since long time series have a global memory effect, we represent our model using fractional calculus. This model operates on a network, where vertices are the stock markets and edges are constructed from the correlation distances. Thereafter, we find an analytical solution to the commensurate system and use the well-known differential transform method to obtain the solution of the incommensurate system of fractional differential equations. Our findings are confirmed and complemented by the data set of the relevant stock markets between 2006 and 2016. Rather than hypothetical values, we use the Hurst exponent of each time series to approximate the fractional order and graph theoretical concepts to obtain the variables.
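
    The Hurst exponent mentioned here can be estimated with the classic rescaled-range (R/S) method; the sketch below applies it to white noise, where H is near 0.5, while H above 0.5 would indicate the long memory that motivates a fractional-order model. The window sizes are an illustrative choice.

```python
import numpy as np

# Rescaled-range (R/S) Hurst exponent sketch.
def hurst_rs(series, window_sizes=(8, 16, 32, 64, 128)):
    series = np.asarray(series, dtype=float)
    log_n, log_rs = [], []
    for n in window_sizes:
        rs_vals = []
        for start in range(0, len(series) - n + 1, n):
            w = series[start:start + n]
            dev = np.cumsum(w - w.mean())
            r = dev.max() - dev.min()      # range of cumulative deviations
            s = w.std()
            if s > 0:
                rs_vals.append(r / s)
        log_n.append(np.log(n))
        log_rs.append(np.log(np.mean(rs_vals)))
    # H is the slope of log(R/S) against log(window size)
    slope, _ = np.polyfit(log_n, log_rs, 1)
    return slope

rng = np.random.default_rng(2)
h_white = hurst_rs(rng.normal(size=4096))   # white noise: expect H near 0.5
```

    Small-sample R/S estimates are biased slightly upward, so values modestly above 0.5 on short memoryless series are expected; a stock-index return series with genuine long memory would push H noticeably higher.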

  8. Using open sidewalls for modelling self-consistent lithosphere subduction dynamics

    NARCIS (Netherlands)

    Chertova, M.V.; Geenen, T.; van den Berg, A.; Spakman, W.

    2012-01-01

    Subduction modelling in regional model domains, in 2-D or 3-D, is commonly performed using closed (impermeable) vertical boundaries. Here we investigate the merits of using open boundaries for 2-D modelling of lithosphere subduction. Our experiments are focused on using open and closed (free

  9. Studying the Consistency between and within the Student Mental Models for Atomic Structure

    Science.gov (United States)

    Zarkadis, Nikolaos; Papageorgiou, George; Stamovlasis, Dimitrios

    2017-01-01

    Science education research has revealed a number of student mental models for atomic structure, among which the one based on Bohr's model seems to be the most dominant. The aim of the current study is to investigate the coherence of these models when students apply them to the explanation of a variety of situations. For this purpose, a set of…

  10. Pedagogical Approaches Used by Faculty in Holland's Model Environments: The Role of Environmental Consistency

    Science.gov (United States)

    Smart, John C.; Ethington, Corinna A.; Umbach, Paul D.

    2009-01-01

    This study examines the extent to which faculty members in the disparate academic environments of Holland's theory devote different amounts of time in their classes to alternative pedagogical approaches and whether such differences are comparable for those in "consistent" and "inconsistent" environments. The findings show wide variations in the…

  11. A descriptive model of resting-state networks using Markov chains.

    Science.gov (United States)

    Xie, H; Pal, R; Mitra, S

    2016-08-01

    Resting-state functional connectivity (RSFC) studies considering pairwise linear correlations have attracted great interest, while the underlying functional network structure still remains poorly understood. To further our understanding of RSFC, this paper presents an analysis of the resting-state networks (RSNs) based on steady-state distributions and provides a novel angle to investigate the RSFC of multiple functional nodes. This paper evaluates the consistency of two networks based on the Hellinger distance between the steady-state distributions of the inferred Markov chain models. The results show that the generated steady-state distributions of the default mode network have higher consistency across subjects than random nodes drawn from various RSNs.
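
    The record's core computation can be sketched directly: obtain the steady-state distribution of each Markov chain as the left eigenvector of its transition matrix for eigenvalue 1, then compare two chains with the Hellinger distance. The transition matrices below are made up for illustration, not inferred from fMRI data.

```python
import numpy as np

# Steady-state distribution: pi such that pi P = pi (left Perron eigenvector).
def steady_state(P):
    vals, vecs = np.linalg.eig(P.T)
    pi = np.real(vecs[:, np.argmin(np.abs(vals - 1.0))])
    return pi / pi.sum()

# Hellinger distance between two discrete distributions, in [0, 1].
def hellinger(p, q):
    return np.sqrt(0.5 * np.sum((np.sqrt(p) - np.sqrt(q)) ** 2))

P1 = np.array([[0.8, 0.1, 0.1],
               [0.2, 0.7, 0.1],
               [0.1, 0.2, 0.7]])
P2 = np.array([[0.6, 0.2, 0.2],
               [0.2, 0.6, 0.2],
               [0.2, 0.2, 0.6]])

pi1, pi2 = steady_state(P1), steady_state(P2)
d = hellinger(pi1, pi2)   # 0 means identical steady states; 1 means disjoint
```

    P2 is doubly stochastic, so its steady state is uniform; P1 is not, so the two chains yield a strictly positive Hellinger distance, which is the per-pair consistency score the paper aggregates across subjects.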

  12. Northern emporia and maritime networks. Modelling past communication using archaeological network analysis

    DEFF Research Database (Denmark)

    Sindbæk, Søren Michael

    2015-01-01

    preserve patterns of this interaction. Formal network analysis and modelling hold the potential to identify and demonstrate such patterns, where traditional methods often prove inadequate. The archaeological study of communication networks in the past, however, calls for radically different analytical...... this is not a problem of network analysis, but network synthesis: the classic problem of cracking codes or reconstructing black-box circuits. It is proposed that archaeological approaches to network synthesis must involve a contextual reading of network data: observations arising from individual contexts, morphologies......

  13. Performance modeling, loss networks, and statistical multiplexing

    CERN Document Server

    Mazumdar, Ravi

    2009-01-01

    This monograph presents a concise mathematical approach for modeling and analyzing the performance of communication networks with the aim of understanding the phenomenon of statistical multiplexing. The novelty of the monograph is the fresh approach and insights provided by a sample-path methodology for queueing models that highlights the important ideas of Palm distributions associated with traffic models and their role in performance measures. Also presented are recent ideas of large buffer, and many sources asymptotics that play an important role in understanding statistical multiplexing. I

  14. Artificial Neural Network Model for Predicting Compressive

    Directory of Open Access Journals (Sweden)

    Salim T. Yousif

    2013-05-01

    Full Text Available Compressive strength of concrete is a commonly used criterion in evaluating concrete. Although testing of the compressive strength of concrete specimens is done routinely, it is performed on the 28th day after concrete placement. Therefore, strength estimation of concrete at early times is highly desirable. This study presents the effort in applying neural network-based system identification techniques to predict the compressive strength of concrete based on concrete mix proportions, maximum aggregate size (MAS), and slump of fresh concrete. A back-propagation neural network model is successfully developed, trained, and tested using actual data sets of concrete mix proportions gathered from the literature. Testing the model with unused data within the range of the input parameters shows that the maximum absolute error for the model is about 20%, and 88% of the output results have absolute errors of less than 10%. The parametric study shows that the water/cement ratio (w/c) is the most significant factor affecting the output of the model. The results showed that neural networks have strong potential as a feasible tool for predicting the compressive strength of concrete.
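
    A back-propagation regression model of the kind described here can be sketched with a one-hidden-layer network trained by gradient descent. The synthetic "mix" data below (a single water/cement-ratio input and an assumed linear strength trend) stands in for the published mix-design datasets the study actually used.

```python
import numpy as np

# Minimal one-hidden-layer back-propagation regressor on synthetic data.
rng = np.random.default_rng(4)
X = rng.uniform(0.3, 0.7, (300, 1))                    # e.g. water/cement ratio
y = 60.0 - 50.0 * X[:, 0] + rng.normal(0, 1.0, 300)    # assumed strength [MPa]

# Standardize inputs and targets for stable training.
Xs = (X - X.mean(0)) / X.std(0)
ys = (y - y.mean()) / y.std()

H, lr = 8, 0.05                                        # hidden units, step size
W1 = rng.normal(0, 0.5, (1, H)); b1 = np.zeros(H)
W2 = rng.normal(0, 0.5, (H, 1)); b2 = np.zeros(1)

for _ in range(2000):
    h = np.tanh(Xs @ W1 + b1)                          # forward pass
    pred = (h @ W2 + b2)[:, 0]
    g_pred = 2 * (pred - ys)[:, None] / len(ys)        # d(MSE)/d(pred)
    # backward pass: compute all gradients before updating any weights
    g_W2 = h.T @ g_pred
    g_b2 = g_pred.sum(0)
    g_h = g_pred @ W2.T * (1 - h**2)                   # tanh' = 1 - tanh^2
    g_W1 = Xs.T @ g_h
    g_b1 = g_h.sum(0)
    W1 -= lr * g_W1; b1 -= lr * g_b1
    W2 -= lr * g_W2; b2 -= lr * g_b2

h = np.tanh(Xs @ W1 + b1)
mse = float(np.mean(((h @ W2 + b2)[:, 0] - ys) ** 2))  # standardized-scale MSE
```

    After training, the residual error approaches the (standardized) noise floor of the synthetic data; the real study's multi-input model follows the same forward/backward structure with more input features and training records.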

  15. UAV Trajectory Modeling Using Neural Networks

    Science.gov (United States)

    Xue, Min

    2017-01-01

    A large number of small Unmanned Aerial Vehicles (sUAVs) are projected to operate in the near future. Potential sUAV applications include, but are not limited to, search and rescue, inspection and surveillance, aerial photography and video, precision agriculture, and parcel delivery. sUAVs are expected to operate in the uncontrolled Class G airspace, at or below 500 feet above ground level (AGL), where many static and dynamic constraints exist, such as ground properties and terrain, restricted areas, variable winds, manned helicopters, and conflict avoidance among sUAVs. How to enable safe, efficient, and massive sUAV operations in low-altitude airspace remains a great challenge. NASA's Unmanned aircraft system Traffic Management (UTM) research initiative works on establishing infrastructure and developing policies, requirements, and rules to enable safe and efficient sUAV operations. To achieve this goal, it is important to gain insight into future UTM traffic operations through simulations, where an accurate trajectory model plays an extremely important role. On the other hand, as in current aviation development, trajectory modeling should also serve as the foundation for any advanced concepts and tools in UTM. Accurate models of sUAV dynamics and control systems are very important given the requirement of meter-level precision in UTM operations. The vehicle dynamics are relatively easy to derive and model; however, vehicle control systems remain unknown, as they are usually kept by manufacturers as part of their intellectual property. This brings challenges to trajectory modeling for sUAVs: how can a vehicle's trajectory be modeled with an unknown control system? This work proposes to use a neural network to model a vehicle's trajectory. The neural network is first trained to learn the vehicle's responses at numerous conditions. Once fully trained, given the current vehicle states, winds, and desired future trajectory, the neural

  16. Macroscopic self-consistent model for external-reflection near-field microscopy

    International Nuclear Information System (INIS)

    Berntsen, S.; Bozhevolnaya, E.; Bozhevolnyi, S.

    1993-01-01

    The self-consistent macroscopic approach based on the Maxwell equations in two-dimensional geometry is developed to describe tip-surface interaction in external-reflection near-field microscopy. The problem is reduced to a single one-dimensional integral equation in terms of the Fourier components of the field at the plane of the sample surface. This equation is extended to take into account a pointlike scatterer placed on the sample surface. The power of light propagating toward the detector as the fiber mode is expressed by using the self-consistent field at the tip surface. Numerical results for trapezium-shaped tips are presented. The authors show that the sharper tip and the more confined fiber mode result in better resolution of the near-field microscope. Moreover, it is found that the tip-surface distance should not be too small so that better resolution is ensured. 14 refs., 10 figs

  17. Using structural equation modeling for network meta-analysis.

    Science.gov (United States)

    Tu, Yu-Kang; Wu, Yun-Chun

    2017-07-14

    Network meta-analysis overcomes the limitations of traditional pair-wise meta-analysis by incorporating all available evidence into a general statistical framework for simultaneous comparisons of several treatments. Currently, network meta-analyses are undertaken either within Bayesian hierarchical linear models or frequentist generalized linear mixed models. Structural equation modeling (SEM) is a statistical method originally developed for modeling causal relations among observed and latent variables. As the random effect is explicitly modeled as a latent variable in SEM, it is very flexible for analysts to specify complex random effect structures and to impose linear and nonlinear constraints on parameters. The aim of this article is to show how to undertake a network meta-analysis within the statistical framework of SEM. We used an example dataset to demonstrate that the standard fixed and random effect network meta-analysis models can be easily implemented in SEM. It contains the results of 26 studies that directly compared three treatment groups, A, B and C, for the prevention of first bleeding in patients with liver cirrhosis. We also showed that a new approach to network meta-analysis based on the technique of the unrestricted weighted least squares (UWLS) method can be undertaken using SEM. For both the fixed and random effect network meta-analysis, SEM yielded coefficients and confidence intervals similar to those reported in the previous literature. The point estimates of the two UWLS models were identical to those of the fixed effect model, but the confidence intervals were wider. This is consistent with results from the traditional pairwise meta-analyses. Compared to the UWLS model with a common variance adjustment factor, the UWLS model with a unique variance adjustment factor has wider confidence intervals when the heterogeneity is larger in the pairwise comparison. The UWLS model with a unique variance adjustment factor reflects the difference in heterogeneity within each comparison.
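The simplest of the models discussed, a fixed-effect network meta-analysis, reduces to inverse-variance weighted least squares on a contrast design matrix. The sketch below uses five fabricated "studies" on three treatments A, B, C (not the 26-study cirrhosis dataset) to show the mechanics:

```python
import numpy as np

# Hypothetical per-study effect estimates (e.g. log odds ratios) and
# variances; A is the reference treatment. Values are invented.
effects = np.array([0.5, 0.6, 0.2, 0.3, -0.3])
variances = np.array([0.04, 0.05, 0.03, 0.06, 0.05])
# Columns: d_AB, d_AC. An A-B study maps to [1,0], A-C to [0,1];
# a B-C study is [-1,1] because d_BC = d_AC - d_AB under consistency.
X = np.array([[1, 0], [1, 0], [0, 1], [0, 1], [-1, 1]], dtype=float)

W = np.diag(1.0 / variances)              # inverse-variance weights
cov = np.linalg.inv(X.T @ W @ X)          # covariance of the estimates
d = cov @ X.T @ W @ effects               # consistency estimates d_AB, d_AC
se = np.sqrt(np.diag(cov))
print("d_AB, d_AC:", d, "SE:", se)
print("indirect d_BC:", d[1] - d[0])      # implied B-vs-C contrast
```

In SEM, the same fixed-effect solution is obtained by expressing each study's observed contrast as a linear function of the basic parameters, which is what makes the SEM implementation straightforward.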

  18. Consistent strategy updating in spatial and non-spatial behavioral experiments does not promote cooperation in social networks.

    Directory of Open Access Journals (Sweden)

    Jelena Grujić

    Full Text Available The presence of costly cooperation between otherwise selfish actors is not trivial. A prominent mechanism that promotes cooperation is spatial population structure. However, recent experiments with human subjects report substantially lower levels of cooperation than predicted by theoretical models. We analyze the data of such an experiment in which a total of 400 players play a Prisoner's Dilemma on a 4×4 square lattice in two treatments, either interacting via a fixed square lattice (15 independent groups) or with a population structure changing after each interaction (10 independent groups). We analyze the statistics of individual decisions and infer in which way they can be matched with the typical models of evolutionary game theory. We find no difference in strategy updating between the two treatments. However, the strategy updates are distinct from the most popular models, which lead to the promotion of cooperation, as shown by computer simulations of the strategy updating. This suggests that the promotion of cooperation by population structure is not as straightforward in humans as often envisioned in theoretical models.
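One of the "most popular models" such experiments are compared against is a spatial Prisoner's Dilemma with a synchronous imitate-the-best-neighbour rule. A minimal sketch follows; the lattice size, payoff values, and update rule are illustrative assumptions, not the experiment's protocol:

```python
import numpy as np

rng = np.random.default_rng(1)
L = 10                                   # periodic L x L lattice
T, R, P, S = 1.4, 1.0, 0.0, -0.2         # PD payoffs: T > R > P > S

def payoffs(strat):
    # strat: L x L array, 1 = cooperate, 0 = defect; 4-neighbour games
    pay = np.zeros(strat.shape)
    for shift in ((1, 0), (-1, 0), (0, 1), (0, -1)):
        nb = np.roll(strat, shift, axis=(0, 1))
        pay += np.where(strat == 1,
                        np.where(nb == 1, R, S),   # I cooperate
                        np.where(nb == 1, T, P))   # I defect
    return pay

strat = rng.integers(0, 2, (L, L))
for _ in range(30):                      # synchronous imitation rounds
    pay = payoffs(strat)
    best, bestpay = strat.copy(), pay.copy()
    for shift in ((1, 0), (-1, 0), (0, 1), (0, -1)):
        nb_s = np.roll(strat, shift, axis=(0, 1))
        nb_p = np.roll(pay, shift, axis=(0, 1))
        upd = nb_p > bestpay             # copy strictly better neighbour
        best[upd] = nb_s[upd]; bestpay[upd] = nb_p[upd]
    strat = best

coop = strat.mean()
print(f"final cooperation level: {coop:.2f}")
```

The study's point is precisely that human updates measured in the lab do not follow deterministic payoff-imitation rules like this one.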

  19. The Bioenvironmental modeling of Bahar city based on Climate-consistent Architecture

    OpenAIRE

    Parna Kazemian

    2014-01-01

    The identification of the climate of a particular place and the analysis of the climatic needs in terms of human comfort and the use of construction materials is one of the prerequisites of a climate-consistent design. In studies on climate and weather, using illustrative reports, first a picture of the state of the climate is offered. Then, based on the obtained results, the range of changes is determined, and the cause-effect relationships at different scales are identified. Finally, by a general exam...

  20. Self-consistent gyrokinetic modeling of neoclassical and turbulent impurity transport

    OpenAIRE

    Estève , D. ,; Sarazin , Y.; Garbet , X.; Grandgirard , V.; Breton , S. ,; Donnel , P. ,; Asahi , Y. ,; Bourdelle , C.; Dif-Pradalier , G; Ehrlacher , C.; Emeriau , C.; Ghendrih , Ph; Gillot , C.; Latu , G.; Passeron , C.

    2018-01-01

    International audience; Trace impurity transport is studied with the flux-driven gyrokinetic GYSELA code [V. Grandgirard et al., Comp. Phys. Commun. 207, 35 (2016)]. A reduced and linearized multi-species collision operator has been recently implemented, so that both neoclassical and turbulent transport channels can be treated self-consistently on an equal footing. In the Pfirsch-Schlüter regime likely relevant for tungsten, the standard expression of the neoclassical impurity flux is shown t...

  1. Self-consistent gyrokinetic modeling of neoclassical and turbulent impurity transport

    Science.gov (United States)

    Estève, D.; Sarazin, Y.; Garbet, X.; Grandgirard, V.; Breton, S.; Donnel, P.; Asahi, Y.; Bourdelle, C.; Dif-Pradalier, G.; Ehrlacher, C.; Emeriau, C.; Ghendrih, Ph.; Gillot, C.; Latu, G.; Passeron, C.

    2018-03-01

    Trace impurity transport is studied with the flux-driven gyrokinetic GYSELA code (Grandgirard et al 2016 Comput. Phys. Commun. 207 35). A reduced and linearized multi-species collision operator has been recently implemented, so that both neoclassical and turbulent transport channels can be treated self-consistently on an equal footing. In the Pfirsch-Schlüter regime that is probably relevant for tungsten, the standard expression for the neoclassical impurity flux is shown to be recovered from gyrokinetics with the employed collision operator. Purely neoclassical simulations of deuterium plasma with trace impurities of helium, carbon and tungsten lead to impurity diffusion coefficients, inward pinch velocities due to density peaking, and thermo-diffusion terms which quantitatively agree with neoclassical predictions and NEO simulations (Belli et al 2012 Plasma Phys. Control. Fusion 54 015015). The thermal screening factor appears to be less than predicted analytically in the Pfirsch-Schlüter regime, which can be detrimental to fusion performance. Finally, self-consistent nonlinear simulations have revealed that the tungsten impurity flux is not the sum of turbulent and neoclassical fluxes computed separately, as is usually assumed. The synergy partly results from the turbulence-driven in-out poloidal asymmetry of tungsten density. This result suggests the need for self-consistent simulations of impurity transport, i.e. including both turbulence and neoclassical physics, in view of quantitative predictions for ITER.

  2. Mapping and modeling of physician collaboration network.

    Science.gov (United States)

    Uddin, Shahadat; Hamra, Jafar; Hossain, Liaquat

    2013-09-10

    Effective provisioning of healthcare services during patient hospitalization requires collaboration involving a set of interdependent complex tasks, which need to be carried out in a synergistic manner. Improved patient outcomes during and after hospitalization have been attributed to how effectively different health service provisioning groups carry out their tasks in a coordinated manner. Previous studies have documented the underlying relationship between collaboration among physicians and effective outcomes in delivering health services for improved patient outcomes. However, there are very few systematic empirical studies focusing on the effect of collaboration networks among healthcare professionals on patients' medical condition. Based on the fact that collaboration evolves among physicians when they visit a common hospitalized patient, in this study we first propose an approach to map the collaboration network among physicians from their patient-visiting information. We term this network the physician collaboration network (PCN). Then, we use exponential random graph (ERG) models to explore the microlevel network structures of PCNs and their impact on hospitalization cost and hospital readmission rate. ERG models are probabilistic models that are specified by locally determined explanatory variables and can effectively identify structural properties of networks such as PCNs. They simplify a complex structure down to a combination of basic parameters such as the 2-star, 3-star, and triangle. By applying our proposed mapping approach and ERG modeling technique to the electronic health insurance claims dataset of a very large Australian health insurance organization, we construct and model PCNs. We find that the 2-star (a subset of 3 nodes in which 1 node is connected to each of the other 2 nodes) parameter of the ERG has a significant impact on hospitalization cost. Further, we identify that the triangle (a subset of 3 nodes in which each node is connected to
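The ERG sufficient statistics named in this abstract (2-stars and triangles) can be counted directly from an undirected adjacency matrix: each node with degree k contributes k-choose-2 2-stars, and each triangle contributes 6 to the trace of A³. The toy graph below is illustrative, not the insurance-claims PCN:

```python
import numpy as np

# Small undirected collaboration graph: 4 physicians, 4 ties.
A = np.array([[0, 1, 1, 0],
              [1, 0, 1, 1],
              [1, 1, 0, 0],
              [0, 1, 0, 0]])

deg = A.sum(axis=0)
two_stars = int((deg * (deg - 1) // 2).sum())         # choose 2 neighbours
triangles = int(np.trace(np.linalg.matrix_power(A, 3)) // 6)
print("2-stars:", two_stars, "triangles:", triangles)
```

In an ERG fit, the coefficients on these counts measure how much more (or less) likely such configurations are than in a random graph with the same density.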

  3. On Improved Network Models for Rubber Elasticity and Their Applications to Orientation Hardening in Glassy Polymers

    NARCIS (Netherlands)

    Wu, P.D.; Giessen, E. van der

    1993-01-01

    Three-dimensional molecular network theories are studied which use a non-Gaussian statistical mechanics model for the large strain extension of molecules. Invoking an affine deformation assumption, the evolution of the network – consisting of a large number of molecular chains per unit volume, which

  4. CONSISTENT USE OF THE KALMAN FILTER IN CHEMICAL TRANSPORT MODELS (CTMS) FOR DEDUCING EMISSIONS

    Science.gov (United States)

    Past research has shown that emissions can be deduced using observed concentrations of a chemical, a Chemical Transport Model (CTM), and the Kalman filter in an inverse modeling application. An expression was derived for the relationship between the "observable" (i.e., the con...

  5. Modeling In-Network Aggregation in VANETs

    NARCIS (Netherlands)

    Dietzel, Stefan; Kargl, Frank; Heijenk, Geert; Schaub, Florian

    2011-01-01

    The multitude of applications envisioned for vehicular ad hoc networks requires efficient communication and dissemination mechanisms to prevent network congestion. In-network data aggregation promises to reduce bandwidth requirements and enable scalability in large vehicular networks. However, most

  6. Different Epidemic Models on Complex Networks

    International Nuclear Information System (INIS)

    Zhang Haifeng; Small, Michael; Fu Xinchu

    2009-01-01

    Models for disease spreading are not limited to SIS or SIR. For instance, for the spreading of AIDS/HIV, the susceptible individuals can be classified into different cases according to their immunity and, similarly, the infected individuals can be sorted into different classes according to their infectivity. Moreover, some diseases may develop through several stages. Many authors have shown that the relations among individuals can be viewed as a complex network. So in this paper, in order to better explain the dynamical behavior of epidemics, we consider different epidemic models on complex networks, and obtain the epidemic threshold for each case. Finally, we present numerical simulations for each case to verify our results.
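For the basic SIS case on an uncorrelated network, the epidemic threshold this abstract refers to has a closed form in degree-based mean-field theory: λ_c = ⟨k⟩/⟨k²⟩, so heterogeneous degree distributions lower the threshold. The degree sequence below is an arbitrary illustration:

```python
import numpy as np

# Toy degree sequence with a few hubs; not data from the paper.
degrees = np.array([1, 2, 2, 3, 3, 3, 4, 6, 8, 12], dtype=float)

k1 = degrees.mean()                 # <k>
k2 = (degrees ** 2).mean()          # <k^2>, inflated by the hubs
lambda_c = k1 / k2                  # SIS mean-field epidemic threshold
print(f"<k> = {k1:.2f}, <k^2> = {k2:.2f}, lambda_c = {lambda_c:.3f}")
```

Note how the hubs (degrees 8 and 12) drive ⟨k²⟩ up and the threshold down; for a true scale-free network with diverging ⟨k²⟩ the threshold vanishes in this approximation.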

  7. Modeling wormhole growth and wormhole networks in unconsolidated sand media using the BP CHOPS model

    Energy Technology Data Exchange (ETDEWEB)

    Vanderheyden, W.B. [BP America, Inc. Exploration and Production Technology Unconventional Oil Flagship (United States); Zhang, D. Z.; Jayaraman, B. [Los Alamos National Laboratory Theoretical Division Solid and Fluid Dynamics Group (United States)

    2011-07-01

    Cold Heavy Oil Production with Sand (CHOPS) is a recovery method used in unconsolidated sands to produce heavy oil. During CHOPS operations, wormholes originating from production wells are generated. The aim of this paper is to present two modeling tools developed by BP to improve reservoir simulation of CHOPS operations with wormholes. The first tool is a CHOPS modeling framework that represents wormhole networks through reservoir simulation and its wellbore model. The second applies advanced fluid-structure interaction modeling to the simulation of wormhole and wormhole-network growth. Experiments were carried out, and qualitative agreement with the model was achieved. In addition, the BP CHOPS model can predict probable oil production without calibration. This paper presented an improved model for reservoir simulation with wormholes, but further work is required to predict wormhole shape more accurately.

  8. Novel recurrent neural network for modelling biological networks: oscillatory p53 interaction dynamics.

    Science.gov (United States)

    Ling, Hong; Samarasinghe, Sandhya; Kulasiri, Don

    2013-12-01

    Understanding the control of cellular networks consisting of gene and protein interactions and their emergent properties is a central activity of Systems Biology research. For this, continuous, discrete, hybrid, and stochastic methods have been proposed. Currently, the most common approach to modelling the accurate temporal dynamics of networks is ordinary differential equations (ODE). However, critical limitations of ODE models are the difficulty of kinetic parameter estimation and the numerical solution of a large number of equations, making them better suited to smaller systems. In this article, we introduce a novel recurrent artificial neural network (RNN) that addresses the above limitations and produces a continuous model that easily estimates parameters from data, can handle a large number of molecular interactions, and quantifies temporal dynamics and emergent systems properties. This RNN is based on a system of ODEs representing molecular interactions in a signalling network. Each neuron represents the concentration change of one molecule, described by an ODE. The weights of the RNN correspond to kinetic parameters in the system and can be adjusted incrementally during network training. The method is applied to the p53-Mdm2 oscillation system – a crucial component of the DNA damage response pathways activated by a damage signal. Simulation results indicate that the proposed RNN can successfully represent the behaviour of the p53-Mdm2 oscillation system and solve the parameter estimation problem with high accuracy. Furthermore, we present a modified form of the RNN that estimates parameters and captures system dynamics from sparse data collected over relatively large time steps. We also investigate the robustness of the p53-Mdm2 system using the trained RNN under various levels of parameter perturbation to gain a greater understanding of the control of the p53-Mdm2 system. Its outcomes on robustness are consistent with current biological knowledge of this system. As more
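The core idea, a continuous-time RNN whose weights play the role of kinetic parameters, can be sketched generically. This is not the paper's fitted p53-Mdm2 model: the coupling matrix, biases, and time constant below are arbitrary placeholders wiring up the same kind of negative-feedback motif, integrated with forward Euler:

```python
import numpy as np

def simulate(W, b, tau, x0, dt=0.01, steps=2000):
    """Integrate dx/dt = (-x + tanh(W @ x + b)) / tau, one ODE per neuron
    (i.e. per molecular concentration), with forward Euler."""
    x = np.array(x0, dtype=float)
    traj = [x.copy()]
    for _ in range(steps):
        dxdt = (-x + np.tanh(W @ x + b)) / tau
        x = x + dt * dxdt
        traj.append(x.copy())
    return np.array(traj)

# Two-species negative feedback (species 0 activates 1, 1 inhibits 0),
# the motif underlying the p53-Mdm2 loop; parameter values are invented.
W = np.array([[0.0, -3.0],
              [3.0,  0.0]])
b = np.array([0.5, -0.5])
traj = simulate(W, b, tau=1.0, x0=[0.1, 0.0])
print("final state:", traj[-1])
```

In the paper's setting, training amounts to adjusting W and b against measured concentration time courses, which is what turns this generic integrator into a parameter-estimation tool.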

  9. Centralized Bayesian reliability modelling with sensor networks

    Czech Academy of Sciences Publication Activity Database

    Dedecius, Kamil; Sečkárová, Vladimíra

    2013-01-01

    Roč. 19, č. 5 (2013), s. 471-482 ISSN 1387-3954 R&D Projects: GA MŠk 7D12004 Grant - others:GA MŠk(CZ) SVV-265315 Keywords : Bayesian modelling * Sensor network * Reliability Subject RIV: BD - Theory of Information Impact factor: 0.984, year: 2013 http://library.utia.cas.cz/separaty/2013/AS/dedecius-0392551.pdf

  10. Modelling Pollutant Dispersion in a Street Network

    Science.gov (United States)

    Salem, N. Ben; Garbero, V.; Salizzoni, P.; Lamaison, G.; Soulhac, L.

    2015-04-01

    This study constitutes a further step in the analysis of the performances of a street network model to simulate atmospheric pollutant dispersion in urban areas. The model, named SIRANE, is based on the decomposition of the urban atmosphere into two sub-domains: the urban boundary layer, whose dynamics is assumed to be well established, and the urban canopy, represented as a series of interconnected boxes. Parametric laws govern the mass exchanges between the boxes under the assumption that the pollutant dispersion within the canopy can be fully simulated by modelling three main bulk transfer phenomena: channelling along street axes, transfers at street intersections, and vertical exchange between street canyons and the overlying atmosphere. Here, we aim to evaluate the reliability of the parametrizations adopted to simulate these phenomena, by focusing on their possible dependence on the external wind direction. To this end, we test the model against concentration measurements within an idealized urban district whose geometrical layout closely matches the street network represented in SIRANE. The analysis is performed for an urban array with a fixed geometry and a varying wind incidence angle. The results show that the model provides generally good results with the reference parametrizations adopted in SIRANE and that its performances are quite robust for a wide range of the model parameters. This proves the reliability of the street network approach in simulating pollutant dispersion in densely built city districts. The results also show that the model performances may be improved by considering a dependence of the wind fluctuations at street intersections and of the vertical exchange velocity on the direction of the incident wind. This opens the way for further investigations to clarify the dependence of these parameters on wind direction and street aspect ratios.
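The box decomposition described in this abstract can be caricatured as a steady-state mass balance over a few connected street "boxes". The flux values, source terms, and the assumption of a clean background are invented for illustration and are not SIRANE's actual parametrizations:

```python
import numpy as np

Q = 10.0                          # along-street volume flux between boxes (m^3/s)
E = np.array([1.0, 0.5, 0.0])     # pollutant source in each box (g/s)
w = np.array([2.0, 2.0, 2.0])     # vertical exchange flux to overlying air
                                  # (background concentration taken as zero)

# Steady state per box: inflow from upstream + emission = (Q + w_i) * c_i.
# Box 0 receives clean background air; boxes are chained along the street.
c = np.zeros(3)
for i in range(3):
    inflow = Q * (c[i - 1] if i > 0 else 0.0)
    c[i] = (inflow + E[i]) / (Q + w[i])
print("steady-state concentrations:", c)
```

A real street network adds channelling, intersection transfers, and wind-direction-dependent exchange velocities, which is exactly what the study evaluates against wind-tunnel measurements.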

  11. The Channel Network model and field applications

    International Nuclear Information System (INIS)

    Khademi, B.; Moreno, L.; Neretnieks, I.

    1999-01-01

    The Channel Network model describes the fluid flow and solute transport in fractured media. The model is based on field observations, which indicate that flow and transport take place in a three-dimensional network of connected channels. The channels are generated in the model from observed stochastic distributions and solute transport is modeled taking into account advection and rock interactions, such as matrix diffusion and sorption within the rock. The most important site-specific data for the Channel Network model are the conductance distribution of the channels and the flow-wetted surface. The latter is the surface area of the rock in contact with the flowing water. These parameters may be estimated from hydraulic measurements. For the Aespoe site, several borehole data sets are available, where a packer distance of 3 meters was used. Numerical experiments were performed in order to study the uncertainties in the determination of the flow-wetted surface and conductance distribution. Synthetic data were generated along a borehole and hydraulic tests with different packer distances were simulated. The model has previously been used to study the Long-term Pumping and Tracer Test (LPT2) carried out in the Aespoe Hard Rock Laboratory (HRL) in Sweden, where the distance travelled by the tracers was of the order hundreds of meters. Recently, the model has been used to simulate the tracer tests performed in the TRUE experiment at HRL, with travel distance of the order of tens of meters. Several tracer tests with non-sorbing and sorbing species have been performed

  12. Using open sidewalls for modelling self-consistent lithosphere subduction dynamics

    Directory of Open Access Journals (Sweden)

    M. V. Chertova

    2012-10-01

    Full Text Available Subduction modelling in regional model domains, in 2-D or 3-D, is commonly performed using closed (impermeable) vertical boundaries. Here we investigate the merits of using open boundaries for 2-D modelling of lithosphere subduction. Our experiments focus on using open and closed (free-slip) sidewalls while comparing results for two model aspect ratios of 3:1 and 6:1. Slab-buoyancy-driven subduction with open boundaries and free plates immediately develops into strong rollback with high trench retreat velocities and predominantly laminar asthenospheric flow. In contrast, free-slip sidewalls prove highly restrictive on subduction rollback evolution, unless the lithosphere plates are allowed to move away from the sidewalls. This initiates return flows pushing both plates toward the subduction zone, speeding up subduction. Increasing the aspect ratio to 6:1 does not change the overall flow pattern when using open sidewalls but only the flow magnitude. In contrast, for free-slip boundaries, the slab evolution does change with respect to the 3:1 aspect ratio model, and it does not resemble the evolution obtained with open boundaries at a 6:1 aspect ratio. For models with open side boundaries, we could develop a flow-speed scaling based on energy dissipation arguments to convert between flow fields of different model aspect ratios. We have also investigated incorporating the effect of far-field generated lithosphere stress in our open boundary models. By applying realistic normal stress conditions to the strong part of the overriding plate at the sidewalls, we can transfer intraplate stress to influence subduction dynamics, varying from slab rollback, through stationary subduction, to advancing subduction. The relative independence of the flow field on model aspect ratio allows for a smaller modelling domain. Open boundaries allow subduction to evolve freely and avoid the adverse effects (e.g. forced return flows) of free-slip boundaries. We

  13. Advances in dynamic network modeling in complex transportation systems

    CERN Document Server

    Ukkusuri, Satish V

    2013-01-01

    This book focuses on the latest in dynamic network modeling, including route guidance and traffic control in transportation systems and other complex infrastructure networks. Covers dynamic traffic assignment, flow modeling, mobile sensor deployment and more.

  14. A simple model of the plasma deflagration gun including self-consistent electric and magnetic fields

    International Nuclear Information System (INIS)

    Enloe, C.L.; Reinovsky, R.E.

    1985-01-01

    At the Air Force Weapons Laboratory, interest has continued for some time in energetic plasma injectors. A possible scheme for such a device is the plasma deflagration gun. When the question arose whether it would be possible to scale a deflagration gun to the multi-megajoule energy level, it became clear that a scaling law which described the gun as a circuit element and allowed one to confidently scale gun parameters would be required. The authors sought to develop a scaling law which self-consistently describes the current, magnetic field, and velocity profiles in the gun. They based this scaling law exclusively on plasma parameters, abandoning the fluid approach.

  15. Self-consistent Maxwell-Bloch model of quantum-dot photonic-crystal-cavity lasers

    DEFF Research Database (Denmark)

    Cartar, William; Mørk, Jesper; Hughes, Stephen

    2017-01-01

    …-level emitters are solved numerically. Phenomenological pure dephasing and incoherent pumping are added to the optical Bloch equations to allow for a dynamical lasing regime, but the cavity-mediated radiative dynamics and gain coupling of each QD dipole (artificial atom) is contained self-consistently within… -mode to multimode lasing is also observed, depending on the spectral peak frequency of the QD ensemble. Using a statistical modal analysis of the average decay rates, we also show how the average radiative decay rate decreases as a function of cavity size. In addition, we investigate the role of structural disorder…

  16. Implicit implementation and consistent tangent modulus of a viscoplastic model for polymers

    OpenAIRE

    ACHOUR-RENAULT, Nadia; CHATZIGEORGIOU, George; MERAGHNI, Fodil; CHEMISKY, Yves; FITOUSSI, Joseph

    2015-01-01

    In this work, the phenomenological viscoplastic DSGZ model[Duan, Y., Saigal, A., Greif, R., Zimmerman, M. A., 2001. A Uniform Phenomenological Constitutive Model for Glassy and Semicrystalline Polymers. Polymer Engineering and Science 41 (8), 1322-1328], developed for glassy or semi-crystalline polymers, is numerically implemented in a three dimensional framework, following an implicit formulation. The computational methodology is based on the radial return mapping algorithm. This implicit fo...

  17. Self-Consistent Model of Magnetospheric Electric Field, Ring Current, Plasmasphere, and Electromagnetic Ion Cyclotron Waves: Initial Results

    Science.gov (United States)

    Gamayunov, K. V.; Khazanov, G. V.; Liemohn, M. W.; Fok, M.-C.; Ridley, A. J.

    2009-01-01

    Further development of our self-consistent model of interacting ring current (RC) ions and electromagnetic ion cyclotron (EMIC) waves is presented. This model incorporates large-scale magnetosphere-ionosphere coupling and treats self-consistently not only EMIC waves and RC ions, but also the magnetospheric electric field, the RC, and the plasmasphere. Initial simulations indicate that the region beyond geostationary orbit should be included in the simulation of the magnetosphere-ionosphere coupling. Additionally, a self-consistent description, based on first principles, of the ionospheric conductance is required. These initial simulations further show that in order to model the EMIC wave distribution and wave spectral properties accurately, the plasmasphere should also be simulated self-consistently, since its fine structure requires as much care as that of the RC. Finally, the effect of the finite time needed to reestablish a new potential pattern throughout the ionosphere and to communicate between the ionosphere and the equatorial magnetosphere cannot be ignored.

  18. Distributed Bayesian Networks for User Modeling

    DEFF Research Database (Denmark)

    Tedesco, Roberto; Dolog, Peter; Nejdl, Wolfgang

    2006-01-01

    The World Wide Web is a popular platform for providing eLearning applications to a wide spectrum of users. However – as users differ in their preferences, background, requirements, and goals – applications should provide personalization mechanisms. In the Web context, user models used by such adaptive applications are often partial fragments of an overall user model. The fragments then have to be collected and merged into a global user profile. In this paper we investigate and present algorithms able to cope with distributed, fragmented user models – based on Bayesian Networks – in the context of Web-based eLearning platforms. The scenario we are tackling assumes learners who use several systems over time, which are able to create partial Bayesian Networks for user models based on the local system context. In particular, we focus on how to merge these partial user models. Our merge mechanism…

  19. A simplified memory network model based on pattern formations

    Science.gov (United States)

    Xu, Kesheng; Zhang, Xiyun; Wang, Chaoqing; Liu, Zonghua

    2014-12-01

    Many experiments have evidenced the transition, with different time scales, from short-term memory (STM) to long-term memory (LTM) in mammalian brains, while its theoretical understanding is still under debate. To understand its underlying mechanism, it has recently been shown that it is possible to have long-period rhythmic synchronous firing in a scale-free network, provided that both high-degree hubs and loops formed by low-degree nodes exist. We here present a simplified memory network model to show that self-sustained synchronous firing can be observed even without these two necessary conditions. This simplified network consists of two loops of coupled excitable neurons with different synaptic conductances, with one node being the sensory neuron that receives an external stimulus signal. This model can further be used to show how a diversity of firing patterns can be selectively formed by varying the signal frequency, the duration of the stimulus, and the network topology, which corresponds to the patterns of STM and LTM with different time scales. A theoretical analysis is presented to explain the underlying mechanism of the firing patterns.

  20. Analyzing, Modeling, and Simulation for Human Dynamics in Social Network

    Directory of Open Access Journals (Sweden)

    Yunpeng Xiao

    2012-01-01

    Full Text Available This paper studies human behavior in the top social network system in China (the Sina Microblog system). By analyzing real-life data at a large scale, we find that the message releasing interval (inter-message time) obeys a power-law distribution both at the individual level and at the group level. Statistical analysis also reveals that human behavior in social networks is mainly driven by four basic elements: social pressure, social identity, social participation, and social relations between individuals. Empirical results present the four elements' impact on human behavior and the relations between these elements. To further understand the mechanism of such dynamic phenomena, a hybrid human dynamic model which combines the "interest" of individuals and the "interaction" among people is introduced, incorporating the four elements simultaneously. To provide a solid evaluation, we simulate both two-agent and multiagent interactions with a real-life social network topology. We achieve consistent results between the empirical studies and the simulations. The model can provide a good understanding of human dynamics in social networks.
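The power-law claim can be illustrated end to end: draw inter-message times from a Pareto density p(t) ∝ t^(-α) for t ≥ t_min by inverse-transform sampling, then recover the exponent with the standard continuous maximum-likelihood estimator. The exponent and cutoff below are assumed values, not the Sina Microblog fit:

```python
import numpy as np

rng = np.random.default_rng(42)
alpha, t_min, n = 2.5, 1.0, 50_000

# Inverse CDF of the Pareto distribution: t = t_min * u^(-1/(alpha-1)).
u = rng.random(n)
samples = t_min * (1.0 - u) ** (-1.0 / (alpha - 1.0))

# Continuous MLE for the exponent: alpha_hat = 1 + n / sum(ln(t_i/t_min)).
alpha_hat = 1.0 + n / np.log(samples / t_min).sum()
print(f"true alpha = {alpha}, estimated = {alpha_hat:.3f}")
```

With empirical data the additional step is choosing t_min, since real inter-message times typically follow a power law only in the tail.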